Quizzing and random question selector

Reposting here so there is a permanent page for my quizzing Excel file and any updates over the year.

I would be interested in your feedback on how it works out in the classroom.

Good luck all for the new academic year.




This spreadsheet helps to organise questions and generate quizzes.

The intention is that for each lesson a quiz of six questions is used: three new questions based on the last lesson’s learning, and three random questions from all previous lessons.

For each new lesson, type in the next three numbers in column E, e.g. 013, 014 and 015 for Lesson 3. Then move to the ‘Questions’ tab and type in your questions and answers in the appropriate row.

You can pre-populate your questions in the ‘Questions’ tab – as long as you haven’t referenced future questions in your Planner (column E), they won’t be selected in the random question generator.

To select the three random questions, press F9, which will generate three random question numbers (yellow box), up to the most recently used question in your planner. Type, or copy, these into the remaining three rows for the lesson. (Note: if you copy/paste from K2-4, you’ll need to use Paste Special/Values.)

Note – the random question selection is weighted: the more often and more recently a question has been used, the less likely it is to be selected by the random question generator. This should help improve the coverage of all the questions over the year.
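The spreadsheet’s formulas aren’t reproduced here, but the weighting idea can be sketched in Python. This is purely an illustration of the principle – the function name, the usage data structure and the exact weighting factor are my own assumptions, not the spreadsheet’s actual formula:

```python
import random

def pick_questions(usage, current_lesson, k=3):
    """Pick k distinct question numbers, down-weighting questions that
    have been used often and recently (illustrative weighting only)."""
    weights = {}
    for q, lessons_used in usage.items():
        w = 1.0
        for lesson in lessons_used:
            recency = current_lesson - lesson   # larger = used longer ago
            w *= recency / (recency + 1)        # recent use -> small factor
        weights[q] = w
    picked = []
    pool = dict(weights)
    for _ in range(min(k, len(pool))):
        qs = list(pool)
        q = random.choices(qs, weights=[pool[x] for x in qs])[0]
        picked.append(q)
        del pool[q]                             # no repeats within one quiz
    return picked
```

A question that has never been used keeps the maximum weight of 1.0, so coverage gradually evens out over the year, which is the behaviour described above.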

The ‘LessonQs’ and ‘LessonQ&A’ tabs will auto-update with the relevant questions and answers (i.e. the last six in Column E). This can then be displayed to the class.

Separately, a completely random six question quiz can be made with the ‘6RandomQs’ and ‘6RandomQ&A’ tabs – just press F9.

Separately, you can generate a quiz based on six selected questions by filling in the ‘Six selected question’ cells (orange); the ‘6SelectedQs’ and ‘6SelectedQ&A’ tabs will auto-update.

(Note for any Excel-aficionado – the coding is a massive bodge-job – please don’t judge too harshly! I will tidy it up in a later version if this proves useful.)


A response to Wellcome’s concerns about practical work assessment

TES has reported that the Wellcome Trust says

‘removing science practicals from A levels could challenge ‘authenticity’ of qualification’.

I can’t find a press release for these comments, so I’m assuming that this was a direct conversation between Wellcome and TES. Wellcome Trust has well-known concerns over the removal of controlled assessment/coursework (for example here and here, cited about halfway down this article, amongst others).

The discussion in the TES article on how the new regime works is pretty accurate: demonstration of competency over 12 practicals (although neglecting that this is a minimum), noting that 15% of marks in the written exam are based on ‘theory and application of practical skills’ (again not noting this is a minimum percentage), and the concerns that there were over the validity and reliability of the previous coursework regime.

The report then runs through the set of concerns that Wellcome has. I find myself disagreeing with a lot of what is said.

‘breaking the link between experiments and grades could cause students to focus exclusively on the assessed aspects of their course, reducing their motivation for practical work’

I find this a rather reductionist view of education, which assumes students are only interested in the final grade and will neglect everything else. I have certainly had some students who are only interested in the final grade (a means to an end, usually those for whom chemistry was the ‘third’ subject), but the majority of my students have studied chemistry broadly and enthusiastically, and were well engaged with their practical work, regardless of whether it was part of a formal assessment.

‘In a time when we know there’s a lot of pressure on the delivery of science, it may be that teachers are less able to prioritise the delivery of practical science – even though they may very much want to do so.’

I’ve only been involved in teaching for a little over a decade and there hasn’t really been a time when there wasn’t pressure on my departments to ‘produce results’ amid ‘tightening budgets’. The very high-stakes nature of controlled assessment (at GCSE) and coursework (at A level) often skewed how practical work was used during the course, with the practical work overly focussed on what was assessed rather than spread broadly over the course. My experience of the new A-level course is that it has allowed teachers to use practical work more authentically during lessons, and in some cases given them the ‘ammunition’ needed to use more practical work in their lessons. The assessment of practical work throughout the two-year course (and it is a DIRECT assessment of students’ practical skills) is ongoing and can be used formatively. If students are struggling with a practical during an assessed task, teachers can now intervene and re-teach/support/instruct, and then re-assess the students later in the course. In the old days of coursework, this wouldn’t have been possible as it was all carried out under high control.

In terms of the actual marks that went towards the final grade, it has always been a small amount. Looking at the last year of coursework tasks for OCR A Level Chemistry A, about 4.5% of the final mark was based on actual practical skills. While the coursework modules accounted for 20% of the final marks, the majority of the questions asked within these papers were analysis and evaluation questions based on the observations/measurements taken during the two practical tasks. (I’m assuming the percentages are broadly similar for other qualifications.) These papers were carried out under exam conditions, just like in the new qualification exams, but internally marked, moderated, then externally moderated, with all the administrative burden and issues with reliability this entails.

We now have the practical endorsement – a holistic judgment of the competence of a student in a wide variety of practical skills, apparatus, and techniques, carried out over the two years of the course. 99% of students have been awarded the endorsement this year, and this is a massive credit to the students, and to their teachers in adapting to a new assessment regime and in delivering what I think is a much more authentic practical experience.

‘downgrading the significance of practicals at A level could make it more difficult for students to decide whether they wanted to go on to pursue science at university’

For all the reasons above, I just don’t believe that the significance of practical work has been downgraded, and again the assumption that if it doesn’t count towards a grade it isn’t significant. For example, the science trips I have taken students on (for example to the Diamond Light Source in Oxfordshire) have had just as much impact on students’ decision to pursue science at university, and this had no direct relevance to their final grade.

A quote from the Wellcome Trust report from February 2017 (https://wellcome.ac.uk/sites/default/files/science-education-tracker-report-feb17.pdf) is given:

‘35% were encouraged to learn science because of practical work’.

This isn’t quite what the report says – the actual statement was that ‘enjoying practical work’ was the second most important factor in encouraging young people to learn science. I think there is a subtle distinction here – enjoyment can encourage learning, but enjoying practical work may mean they like being in science lessons (more than other subjects?) but not necessarily that they are learning from the practical work itself – that’s a whole other topic in itself.

‘Stripping out marked practicals could also challenge the “authenticity of the grade of the A level and whether it’s really reflecting how good you are at doing science”’.

Again, the majority of the coursework marks were based on analysis/evaluation, which can just as well be assessed in the exam hall as under high control in the science lab, with only around 5% of the final mark based on actual practical skills. Having a whole endorsement that is the result of the assessment of practical skills over two years seems a better, more authentic experience. I’ve spoken to quite a few undergraduate lecturers over the last couple of years, and there are plenty of examples of high-grade students who don’t do well in first-year labs, and lower-grade students who excel in the labs. The correlation between practical skills and the ability to attain high grades isn’t concrete.

The TES article then concludes with commentary about the variability of the requirements universities place on students to pass the endorsement. There is quite a lot of variation out there – some universities have a blanket requirement across all STEM degrees, some require it on selected courses, and some are ‘waiting and seeing’. This is probably not surprising given this large change to the assessment regime, and no doubt universities will reflect on their students’ abilities this year and decide on whether or not to make it a requirement.

I am with the majority of teachers cited in the Ofqual initial research who see the reform as being positive for teaching and learning of practical skills. The new way of assessment is in line with how I have taught science over the years, and it allows students to build up (assessed) competence over time with ongoing support, rather than having to perform under the artificial constraints of coursework/controlled assessment tasks. I think we need to trust that teachers will do their best in whatever circumstances they are in to give their students a broad and balanced science education, including practical work, and not allow the assessment-tail to completely wag the teaching-and-learning dog.


Late to the party – Core knowledge booklets

Core knowledge booklets for Chemistry and Physics

As I’m going back to teaching after two years working at one of the UK exam boards, I’ve started riffling through my folders to get myself back in the swing of things. I came across these booklets that I put together four years ago now for the legacy AQA Chemistry and Physics GCSEs – only C1, C2, P1 and P2 by the looks of things.


I was interested to see what I wrote on the cover of these booklets:

This is the core knowledge that you need to know in this unit. Knowing the information in this booklet will help you to understand the more complex chemistry that you will study, and to express yourself clearly in class, in assessments and in exams. Knowing the information below is an essential FIRST STEP to gaining top grades – but it is not sufficient. Only regular study, discussion and practice of the chemistry will consistently lead to the top grades.

This seems a good summary of what I expected from my students, and what I encouraged the teachers in my departments to expect as well.

The students were issued the booklets at the start of the year, and expected to keep them in their exercise books for reference and regular use. As and when students had completed work in class, I directed them to spend time self- and peer-quizzing using these booklets, and they made useful resources for homework and revision. A lot of this seems similar to what some others are advocating.

With the rise of Knowledge Organisers and the like, I thought I’d share them again. Please feel free to adapt and use. As and when I update them for my new classes and the new qualifications, I’ll repost.

AQA Chemistry C1 (2011 Core Science)

AQA Chemistry C2 (2011 Additional Science)

AQA Physics P1 (2011 Core Science)

AQA Physics P2 (2011 Additional Science)


Complications and errors in teaching

Commonalities in teaching and medicine; learning from our mistakes and always aiming for perfection.

Estimated reading time: 6-7 minutes


I finished another book by one of my favourite popular-expert authors recently, ‘Complications’ by Atul Gawande. I originally heard about Atul’s work on the BBC radio show Desert Island Discs back in 2015. His ‘Checklist Manifesto’ explores the use of checklists in highly complex professional arenas such as piloting airliners and surgery. In brief summary, he discusses how many of the things humans do are so complex that no one can hope to complete all that is required without making mistakes, such as forgetting important steps. Aircraft test pilots developed checklists in the 1950s to help them fly increasingly complex planes: as the planes grew more complex, the test pilots kept crashing them because they forgot critical steps, not through lack of expertise but through the limitations of human memory.

The quest for perfection

‘Complications’ discusses Atul’s progression through his medical training, and how the inevitable errors made by doctors are dealt with. Much of what he writes chimes with my own development as a teacher, and in supporting others through their careers.

“… This is the uncomfortable truth about teaching. By traditional ethics and public insistence… a patient’s right to the best care possible must trump the objective of training novices. We want perfection without practice. Yet everyone is harmed if no one is trained for the future.” (page 24)

Here Atul discusses the tension between what we want from our doctors and what the system requires. As individual patients, we want the best possible care from the best doctors. However, a medical system with a sole focus on individual outcomes could never function. The system needs continually to train new doctors. Part of this training is treating individual patients, and doctors in training will make mistakes. New members of the profession need to learn, need to practise, need space to improve.

This has direct parallels with teaching. As parents, we want the best possible teachers teaching our children. As a teaching profession, we always need new teachers coming in, and they will need to be educated, and they will need to practise. In the process of their development, they will make mistakes, and inevitably the education of the children will not be as good as if they were taught by more effective teachers.

The problem with choice

“If learning is necessary but causes harm, then above all it ought to apply to everyone alike. Given a choice, people wriggle out, and those choices are not offered equally…. If choice cannot go to everyone, maybe it is better when it is not allowed at all.” (page 32)

Here Atul discusses how, when his child was in hospital, he insisted on having the attending physician (senior doctor) treat his child, not the resident (junior doctor) who was assigned to his child’s case. He knows the medical system well and so could argue to get the best possible treatment for his child. Other members of the public, who don’t know the system so well, are more likely to accept the medical treatments as given.

As a former head of department, I was most acutely aware of these tensions when assigning teaching groups. Which classes are the NQTs (newly qualified teachers) assigned to; the most experienced teachers; the exam groups (Year 11, Year 13); those most likely to be getting the grades which have the greatest effects on accountability measures (C/D grade borderline groups)? My own solution, and what I think was the most equitable, tended to be that all teachers taught across the ability range and year groups where possible, and support provided to less experienced teachers as required. My thinking ran in line with Atul’s – we can only provide the service (be it medical or educational) based of the skills and abilities of those professionals we have available at the time. The majority of science departments are likely to have a range of teachers of experience, expertise and enthusiasm. If we can’t have all of the students in our departments taught by the most experienced, enthusiastic experts, then the allocation of teachers should be on the most equitable basis possible, without the possibility of ‘special pleading’ for particular children or groups of children. Inevitably, the power of a head of department is not absolute, and some decisions were over-ruled by senior management who may have had different priorities to mine.

How to deal with mistakes

One of the most powerful passages in the book was the discussion of the Morbidity and Mortality conference (M&M), a place where mistakes in patient care are discussed in open forum – everyone attends from the most junior doctor to the head of department.

“In its way, the M & M is an impressively sophisticated and human institution. Unlike the courts or the media, it recognizes that human error is not generally something that can be deterred by punishment. The M & M sees avoiding error as largely a matter of will – of staying sufficiently informed and alert to anticipate the myriad ways that something can go wrong and then trying to head off each potential problem before it happens. It isn’t damnable that an error occurs, but there is more shame to it. In fact, the M & M’s ethos can seem paradoxical. On the one hand, it reinforces the very American idea that error is intolerable. On the other hand, the very existence of the M & M, its place on the weekly schedule, amounts to an acknowledgment that mistakes are an inevitable part of medicine.” (page 73)

Again, this chimes with the teaching of students. We must always seek to avoid error in what we do when we teach, but have to accept that errors do occur. By exposing these in a safe professional environment, we can all learn from the situation and improve everyone’s ongoing practice. As professionals we should always be seeking to improve our practice, and do the best we possibly can for our students, and recognise that the mistakes we make are learning opportunities not just for our own practice but for that of others. How high-stakes accountability, for example graded lesson observations and performance-related pay, can skew this ideology is a topic for another time. Suffice it to say I always had strong reservations about the helpfulness of these techniques in driving good professional improvement.

How do we continually improve?

“No matter what measures are taken, doctors will sometimes falter, and it isn’t reasonable to ask that we achieve perfection. What is reasonable is to ask that we never cease to aim for it.” (page 74)

We are human. We make mistakes. This is inevitable. No set of resources, force of will, system of incentives or sanction will ever get us to a place where teachers teach perfectly and all students learn to their optimum ability. What we can insist on is that all teachers recognise that there is always more to learn, always improvements to be made. And the system needs to recognise that teachers need access to forums to allow them to share and learn from each other.

And this access to forums, I think, is one of the biggest problems facing the teaching profession at the moment. I have delivered training and support across England over the last couple of years, and attendance at events is significantly down on what it was only a few years back. Schools are finding it very hard to release teachers to attend CPD events, and it is quite common now for teachers to use their own free time to attend events such as TeachMeets. I am a big fan of such events, but am always troubled by a system that seems to have normalised an expectation that teacher improvement should come in teachers’ own time.

Can we make improvements? Yes – at a system level, only government can change the funding situation that has driven the reduction in capacity in schools to allow teachers to have time away from the classroom. Government policy can be influenced by engaging with professional bodies like the Chartered College of Teaching, unions and learned societies such as the Royal Society of Chemistry. Yes – at school level, department and school meetings can schedule significant time for discussing good practice and learning from mistakes in a safe professional environment. Yes – at an individual level, by recognising that we aren’t perfect, we won’t always get it right, but that the aim for perfection is a worthy goal, and one we can all work towards across a career.


Teaching shortcuts and when they can trip you up…

Reading time: 4-5 minutes

Along with others of late, including Kristy Turner, Niki Kaiser and Adam Boxer, I have been mulling misconceptions and teaching. Cognitive Load Theory is also being discussed a lot and I’ll be attending Niki’s conference soon to hash out some more on these ideas and how to apply them in the classroom. Much of what I have read resonates with my previous teaching, and I’ve written about it in relation to practical work.


What prompted this particular post was a session with Steve Barnes and David Read at the Wessex Group conference a couple of weeks back, and a Twitter chat recently. Both were related to aspects of equilibrium, a concept many students find hard, especially when questions are a bit different from what they have seen before. With the increased demand in the new A level papers, and the increased emphasis on applying knowledge to unknown situations, this seems like a timely issue.

Eric Scerri noted a couple of years back the problems with Le Chatelier – it works sometimes, but can break down quite quickly, and can potentially stop us from thinking too hard about the details of the context. Somewhat like Kristy’s SEABODI, students tend to go for a stock answer to the almost inevitable NH3 or SO3 production questions, but aren’t necessarily thinking deeply about their understanding.

At the Wessex Group conference, Steve used a set of equilibrium questions, drawing on work by Juan Quilez, to highlight these problems. We also discussed the nitrogen dioxide-dinitrogen tetroxide equilibrium. When applying Le Chatelier to this equilibrium, and looking at changing temperature, the expected changes to equilibrium position and hence observations are borne out, as seen in numerous videos. Increasing the pressure by compression is more complex. The expected shift in the position of equilibrium is to the right to decrease the pressure. A not uncommon prediction of the observation would be that the mixture lightens, as NO2 is converted into N2O4. The problem is that as the total volume has decreased, the mixture actually darkens initially as the NO2 becomes more concentrated, then the colour lightens as the NO2 is converted to N2O4. So the shift in equilibrium position may be correctly predicted, but the predicted observation may be wrong (or at least incomplete) because the full system was not considered or the question is not carefully phrased.
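The reasoning above can be checked with the reaction quotient rather than Le Chatelier. For 2NO2 ⇌ N2O4, Kc = [N2O4]/[NO2]². A minimal arithmetic sketch, with made-up concentrations purely for illustration:

```python
def reaction_quotient(no2, n2o4):
    """Reaction quotient Q for 2 NO2 <=> N2O4 (concentrations in mol/dm3)."""
    return n2o4 / no2 ** 2

# Illustrative (made-up) equilibrium concentrations, so Q = Kc initially
no2, n2o4 = 0.04, 0.02
kc = reaction_quotient(no2, n2o4)

# Halve the volume: both concentrations instantly double...
no2_c, n2o4_c = 2 * no2, 2 * n2o4
q = reaction_quotient(no2_c, n2o4_c)

# ...so Q = 2[N2O4] / (2[NO2])^2 = Kc/2 < Kc, and the equilibrium
# shifts right. But [NO2] itself has just doubled, hence the initial
# darkening before the mixture lightens again.
```

Whatever starting concentrations are used, halving the volume always halves Q relative to Kc for this stoichiometry, which is why the shift to the right is correctly predicted even though the observation is more subtle.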

I recognise that at times I have come to rely on shortcuts, and when the questions become more complex, the shortcuts can break down. Now, Le Chatelier is a model like any other – we tend to use the simplest model that will allow us to explain the observations. When the model breaks, we use or develop a more sophisticated one. This issue ties in with other conversations recently on what is meant by ‘mastery’ in science – is it mastering the concepts at the appropriate level, or introducing more sophisticated concepts earlier on so we aren’t ‘lying’ to the students? I’m not sure it would be appropriate to introduce GCSE and/or A level models directly in KS3. I may be able to get the students to repeat back the facts, but I doubt they’d be able to use them confidently or competently. Of course, this is a whole bucket of worms on assessing understanding – for another time perhaps.

What finally prompted this post was a resource I was reviewing on fuel cells. I have always found electrochemistry one of the harder topics to teach effectively, and tend to take more of a pause before launching into it. I have taught electrolysis plenty at GCSE, but for whatever reason never really gone into galvanic cells and fuel cells in detail. I had developed shortcuts for electrolysis along the lines of ‘it’s the reverse of normal chemical reactions’ and ‘cathodes are negative as cations are attracted to them’. I got myself into a muddle with the hydrogen fuel cell, on working out the polarity of electrodes, not helped by some vaguely written resources. A quick shout out to Twitter set me straight (thanks to Peter Hoare and Adrian Dingle), but it was a useful reminder of the need to check my understanding of the fundamentals from time to time.

Does this have any wider relevance? Certainly for me, going back to teaching after a couple of years out. For others – perhaps. I think it points to the importance of subject knowledge CPD. I’m a strong advocate of pedagogical content knowledge CPD, but spending time on deepening my personal understanding of the content knowledge is probably worthwhile from time to time. At this point in my teaching career, I would put depth ahead of breadth. I have sufficient breadth of chemistry to teach my students effectively at the level I’m teaching at, but I think increasing the depth of my understanding as the years pass can only be a good thing. There are some great resources out there. I’m a particular fan of knockhardy and chemguide. I also have a copy of Chemistry3 close by – I’ve consigned my other university books to the lab shelf – I think one general undergraduate chemistry text is sufficient for what I need for now.

Any thoughts? When was the last time a student asked you a question that you couldn’t quite answer to your satisfaction? What resources do you use to support your depth of understanding?


Review and reflections on #MICER17

A review and some personal reflections on the MICER 2017 conference.


At the sumptuous RSC Library at Burlington House, we gathered for Methods in Chemistry Education Research 2017, a day of lectures, activities and catching up with friends and colleagues. From school teachers to a Professor Emeritus, we came with a common purpose – to spend a day thinking about methods in chemical education research.

The day started with Dr Suzanne Fergus (@suzannefergus), Principal Lecturer in Pharmaceutical Chemistry at the University of Hertfordshire (also 2016 RSC Award Winner for Higher Education Teaching). Through the context of her journey into ChemEdRes, Suzanne discussed the difference between anecdote about what works in our own teaching situation and what constitutes genuine research. Critical features included i) contextualisation within the current literature, ii) robust data collection and evaluation, and iii) novelty of the work. While replication of others’ work in our own context can help increase the generalisability of ideas, the new learning from such replication needs to be made explicit.

We worked through an exercise in formulating a RESEARCH QUESTION, central to ensuring high-quality research, and ultimately in getting our studies published. In my previous teaching of A-level sciences, I have come across research questions in Biology fieldwork, but their use in Chemistry research is not common. The worksheet proved a useful structure to start the challenging process of formulating high-quality, usable research questions. Benefits of starting the research process with the research question include i) helping connect with the literature; ii) influencing the methods used; iii) focusing the presentation of the work; and iv) focusing the discussion of the conclusions.

One of Suzanne’s papers (DOI: 10.1021/ed2004966) was highlighted as a useful example of how ChemEdRes can be written up. The ‘New Directions’ journal was also suggested as a good starting point for those looking to get into academic publishing. Suzanne also suggested other less formal (more ‘social’) ways of publishing to help build one’s confidence in sharing our thoughts with a wider community. These included speaking at TeachMeets and small conferences, engaging in Twitter conversations, writing personal and professional blogs and writing for institutional publications. On a personal level, Suzanne’s talk gave me that last little push to start a personal blog!

Suzanne’s colleague Dr Stewart Kirton (@skirtonUH), Head of Pharmaceutical Chemistry, University of Hertfordshire, then took us through the use of Likert scales in assessing the impact of our interventions. While analysis of attainment in assessments is a major source of such information, surveying students’ perceptions is increasingly used as well. I used such surveys throughout my time in secondary teaching, and their use is becoming more common at university level with the ‘Teaching Excellence Framework’.

Stewart took us through a process for developing valid questions and Likert-scale responses, including:

  • trialling the questions with peers
  • trialling with the subjects of the questions (usually your students)
  • ensuring each question examines only one idea
  • avoiding jargon
  • thinking carefully about the possible responses – including ‘Don’t Know’ is acceptable
  • phrasing questions positively (if possible)
  • sticking to around eight questions – use many more than this and the students will likely run out of steam!

Our activity involved drafting some questions to help evaluate a programme run by final-year students to help second-year students prepare for interviews for industry years. A particularly useful online app (www.mentimeter.com) was used to share our ideas – a virtual notice board where you can send in responses via smartphone or laptop.

Stewart finished with a clear exhortation NOT to take the average of responses when using numerical responses on Likert scales (e.g. 1 = strongly agree to 5 = strongly disagree). Simply put, these numbers are not interval data, where the difference between successive values is identical and meaningful; rather they are ordinal data, i.e. they can be ordered but the differences between them are meaningless. Stewart’s suggestion was to present the relative ratio of each response to each question and analyse pre- and post-intervention where appropriate.
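As a sketch of Stewart’s suggestion (illustrative Python of my own, not anything shown at the conference), the proportions of each response level can be reported instead of a mean of the ordinal codes:

```python
from collections import Counter

SCALE = [1, 2, 3, 4, 5]   # 1 = strongly agree ... 5 = strongly disagree

def response_proportions(responses):
    """Proportion of each Likert response level, rather than a mean of
    ordinal codes (which would treat the gaps between levels as equal)."""
    counts = Counter(responses)
    n = len(responses)
    return {level: counts[level] / n for level in SCALE}
```

Comparing these proportion tables for the same question before and after an intervention keeps the analysis honest about the ordinal nature of the data.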

After coffee, and meeting up with some friends I have made on Twitter over the last year, Dr Orla Kelly (@orlakelly5), Senior Lecturer in Social, Environmental and Science Education, Dublin City University, discussed the evaluation of classroom practice, with a focus on ‘Classroom Action Research’. Orla started with a definition from the Open University: ‘systematic and collaborative collection of evidence on which to base reflection’. She provided a summary of the cycle of action research as ‘Plan / Act / Observe / Reflect’. Orla’s extensive use of Problem-Based Learning in undergraduate labs provided a context for the talk, and had strong resonances with Suzanne’s earlier talk.

Prof Graham Scott (@grahamscott14), Professor of Bioscience Education at the University of Hull, then expanded the speaker repertoire beyond chemists, bringing a biologist’s perspective from a related field. Graham’s key message was of the advantage of moving away from our ‘science comfort blanket’ and embracing the discomfort of collecting and using the more qualitative data derived from interviews. Graham took us through his research journey of using interviews in various areas, from an analysis of students’ and teachers’ perceptions of a course, to barriers to using biological fieldwork in primary schools.

Key ideas in making effective use of interviews included i) establishing a suitable dynamic between the interviewer and interviewee (location, time available, consideration of any prior professional relationship); ii) clearly constructed questions that will elicit the information required (including the use of trialling) and iii) the importance of audio/video recording and the processes of transcribing and analysing the data.

I have used interviews in previous research and as part of evaluating the effectiveness of my previous school departments. While I experienced many of the problems that Graham described, I wholeheartedly agree with Graham that the quality of information you can derive makes them well worth the effort.

After a much-needed lunch (energy levels were flagging by 1.10pm!) we had time to chat and look at posters. This was a nice aspect of the conference: the posters were shared online a good month before the get-together, giving us the chance to get feedback on our ideas in advance. My particular interest right now is in how microscale chemistry can be integrated into my teaching, and whether it has sustained benefit to students’ learning.



Prof Keith Taber (@DrKeithSTaber) took us on a tour of ethics in educational research. It has been many years since my MEd days with Keith as my supervisor, but his erudite and rigorous style continues to shine through and it was a pleasure to be part of the audience.

Starting with a brief tour of various ethical frameworks, including deontology and utilitarianism, we discussed the importance of voluntary informed consent from the subjects of our research, and the responsibility we bear as researchers. These include reporting our findings as completely and fairly as possible, including not selectively reporting our findings, and highlighting the known limitations. We discussed the issues around anonymity and confidentiality, and the particular problems that easy access to worldwide information via the internet can create. We discussed the particular cases of the Milgram studies and the Tuskegee Syphilis experiment, highlighting areas of real contention in the ethics of research.

The key message from this double session was that while the rules of ethics can be relatively easily stated, the actual decisions we have to make as researchers can be very nuanced and rightly deserve careful consideration before, during and after our studies.

The day finished with Prof Georgios Tsaparlis, Professor Emeritus of Science Education in the Department of Chemistry at the University of Ioannina, Greece, winner of the 2016 RSC Education Award. Georgios’ work on problem-solving has spanned decades, and a whistle-stop tour was presented in this final session. Ideas around the limitations of our cognitive architecture, with a particular focus on working memory, were discussed. The importance of scaffolding and exercise, as well as success for students, in developing problem-solving skill was clearly emphasised.

MICER17 proved to be all I had been looking forward to, and a great venue to meet new people, make connections and expand my professional network. Most of all, it has helped put a human face to the world of ChemEdRes. Reading articles in CERP or J Chem Educ can be a little daunting to those new to ChemEdRes, and the barrier to entry can seem impossibly high. However, it really is an inclusive and welcoming community, one that I look forward to contributing to in the coming years. Many thanks to Michael Seery (@seerymk) and Claire McDonnell (@clairemcdonndit) for all the hard work in bringing this together.


Can we make practical work more effective?

Some thoughts on making practical work more effective by considering the cognitive load of practical work.

Reading time: 5-6 minutes

As discussed previously, I’ve been spending a lot of time thinking about practical work of late. I’ve presented most of the below at a few places over the last couple of weeks, including the RSC SaFE National Teachers Conference, and the East Midlands Chemistry Teachers Conference. It seems to be going down well, so I thought I’d summarise here.

The full slides are available, along with a hand-out summary I’ve produced for Niki Kaiser’s #CogSciSci conference tomorrow (Monday 10th July) at Notre Dame High School.

I’ve started these discussions with a quick run through of the why of practical work. I think it is important that we think carefully from time to time about why we do practical work, given the expense (in time and cost) and the opportunity costs (could we get to the same learning more effectively another way). I reference the National Curriculum, the Ofsted ‘Maintaining Curiosity‘ report, an OCR longitudinal survey, and some ‘historical’ literature (Hodson, 1990, SSR, 70(256), 33-40). The key point I make is that the reasons teachers state for using practical work haven’t changed much over the decades, and we’re still questioning how effective it is.

Quotes from the National Curriculum (Slide 5)

  • …essential aspects of knowledge, methods, processes and uses of science…
  • …curiosity about natural phenomena…
  • … explain what is occurring, predict how things will behave, and analyse causes

Quote from ‘Maintaining curiosity’ (Slide 6)

  • In the best schools visited, teachers ensured that pupils understood the ‘big ideas’ of science. They made sure that pupils mastered the investigative and practical skills that underpin the development of scientific knowledge and could discover for themselves the relevance and usefulness of those ideas.

Top five reasons identified by teachers for using practical work from the OCR longitudinal study (Slide 7)

  • To encourage accurate observation and description
  • To develop conceptual understanding
  • To develop reporting, presenting, data analysis and discussion skills
  • To experience the process of finding facts by investigation
  • To develop manipulative skills and techniques

Top five reasons for using practical work, identified in Hodson (1990) (Slide 8)

  • Motivation
  • Teaching laboratory skills
  • Enhancing learning of scientific knowledge
  • Insight into and developing scientific method
  • Developing ‘scientific attitudes’

I looked at how practical work can be ineffective, and referenced a nice succinct quote from Clackson and Wright (1992, SSR, 74(266), 39-42),

  • Although practical work is commonly considered to be invaluable in scientific teaching, research shows that it is not necessarily so valuable in scientific learning. The evidence points to the uncomfortable conclusion that much laboratory work has been of little benefit in helping pupils and students understand concepts.

and posit three key ideas about why practical work can be ineffective:

  • chemistry is hard
  • the practicals used are overloaded
  • there is too much to think about.

A brief segue into Johnstone’s triangle follows (Slide 12). I use the context of dissolving table salt, which seems to help get the idea of the triangle across. Interestingly, on asking, I’d say fewer than 20% of teachers were aware of the triangle explicitly (although they may well understand it and have worked it out for themselves implicitly).

Another segue then into the structure of memory, using the model presented in Baddeley’s 2000 paper – I’m aware this isn’t the most up-to-date paper and the model has no doubt moved on, but I’m always concerned about overloading (ironically?) people when talking about these new areas. (Slide 13)

I’ve summarised Cognitive Load Theory via Intrinsic, Extraneous and Germane load as below – again, probably not a full and up-to-date reflection, but I think sufficient to get the idea across, especially when summarised using Greer’s model. (Slide 16):


So practical work can be ineffective because what we’re asking the students to think about is intrinsically hard (lots of abstract and mostly invisible concepts), we’re trying to do too much in the practicals (plan a method, collect data, process the data etc etc etc) and all the domains of thinking overload their capacity to actually think about what we want them to think about, and we end up with a lot of following the recipe, but little learning.

So my ‘Key Questions’ are:

  • How do we get beyond them just ‘following the recipe’?
    • How do we get them to think?
  • How do we reduce the cognitive load inherent in some practical work?
    • How do we get them thinking about the right stuff?
  • How do we maximise the benefit of practical work for our students in our classrooms?
    • How can we design practicals that are effective at promoting learning?

and ‘Key Ideas’ are:

  • Ensure practicals have a clear goal
    • Don’t overload them/students
  • Use of microscale activities
    • e.g. electrolysis, to reduce extraneous load
  • Working up to complex practical tasks
    • e.g. titration, to manage intrinsic load

How do we make practical work more effective? This is some further hashing out of ideas from over the last couple of weeks. I’ve had the titration ideas out there for a while, but the analysis of the electrolysis has taken my thinking a bit further.

Firstly, looking at extraneous load – the problem of solely written instructions, and how carefully labelled diagrams may be a better method. The key learning of the practical exemplified would be the observation, rather than the ‘ability to follow written instructions’. I was challenged on this one at the East Midlands conference, along the lines of how this would help with exam preparation, and the students needing to know how to write practical methods. My response was on the wider point of not overloading practicals (not everything has to be about exam preparation) and having a clear focus on what is required. If the observation is key, then everything else should be ‘subservient’ to that. (Slide 21)

I then had a go at a Cognitive Load ‘analysis’ – I’m not sure if this is a thing, but it was a useful exercise to demonstrate what students may be thinking about. I used the comparison between electrolysis using the standard Nuffield apparatus, and the microscale copper chloride electrolysis. (Slide 23 and 24)


Key learning (what we want students to think about):

  • application of a current to an aqueous solution of copper chloride produces copper and chlorine
  • qualitative tests for chlorine

Intrinsic load (the inherently difficult underlying concepts):

  • charge on ions
  • nature of ions in solution
  • flow of charge
  • formation of metal/covalent substances

Extraneous load (everything else competing for attention):

  • nature of apparatus
  • quality of practical instruction
  • quality and reliability of equipment
  • classroom environment

Doing the full micro-scale practical as written is potentially itself overloaded – far too much to put onto students the first time. (Slide 25)

But this can be easily relieved by leaving out all the ‘indicators’ and building up to them. (Slide 28)

I then went through my ‘breaking down titration‘ – in summary, my first teaching of this went straight in with a 20 minute demonstration of titration theory and practice to a Yr11 Triple group, and then I expected them to replicate this with a written method. I was still a very green teacher at this point, and learnt some very valuable lessons. My next attempt was somewhat more nuanced, and led to much better learning – including the use of micro-titration. (Slide 29)

Finally, I discussed an introduction to rates of reaction using simplified kit (£10 2 d.p. balance from Amazon, vinegar from the kitchen cupboard and chalk from the garden). I’m impressed with the quality of the data I managed to get from this, and I think it may be a useful introduction to rates, without having to worry about all the standard kit. (Slide 31 and 32)
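For anyone wanting to play with this sort of data, here’s a minimal sketch (in Python, with made-up readings – not my actual results) of turning balance readings into mass lost and interval rates:

```python
# Hypothetical mass-loss data from vinegar + chalk on a 2 d.p. balance:
# the CO2 escapes, so the reading falls over time.
times = [0, 30, 60, 90, 120, 150]                     # seconds
masses = [50.00, 49.82, 49.70, 49.61, 49.55, 49.52]   # grams

# Mass lost at each reading, relative to the start
mass_lost = [round(masses[0] - m, 2) for m in masses]
print(mass_lost)  # [0.0, 0.18, 0.3, 0.39, 0.45, 0.48]

# Mean rate over each interval, in g/s – it falls as the acid is used up
rates = [
    (mass_lost[i + 1] - mass_lost[i]) / (times[i + 1] - times[i])
    for i in range(len(times) - 1)
]
print(rates)
```

Plotting mass lost against time gives the familiar levelling-off curve, and the interval rates make the ‘rate decreases as reactants are used up’ point without any specialist kit.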

Following David Didau’s ‘what if I’m wrong’ motif (having finished ‘What if everything…‘ a couple of months back), I also reference a report by Moreno, who neatly summarises some of the concerns about CLT.

  • It doesn’t deal with affective factors, including motivation
  • Extraneous load may be strongly inter-related with germane load
  • There are ambiguous and contradictory studies on the effectiveness of CLT in explaining outcomes.

Summarising with four key bullet points (Slide 37), I reiterated what is for me a critically important part of this – that we continue to think about and discuss our practice, and look to see whether we can improve on what we are doing in the classroom. I don’t particularly mind that I may well be coming to the same conclusions others already have. For me, expanding my personal knowledge and effectiveness is a critical part of my professional development. If I can help others with theirs, then so much the better, and if I can push at the boundaries of public knowledge, then that’s a bonus.


21/7/17: Rehosted slides and powerpoint on social.ocr.org.uk and indicated specific slides in body of text rather than screenshots.

A morning with PGCE students

An overview of a session with Chemistry PGCE students, discussing practical work and pedagogy.

Estimated reading time – 6-7 minutes

I spent an enjoyable morning in the company of the latest cohort of Chemistry PGCE students at my former college Homerton, Cambridge. Elaine Wilson had invited me in to talk about practical work, something I’ve spent a lot of my time working on in my current job. I particularly like talking with teachers early in their teaching career. They enjoy a good discussion, and can provide a fresh perspective on teaching, perhaps ‘unencumbered’ by years spent dealing with the stresses and strains of the classroom (not that PGCE is a walk in the park of course!)


This particular cohort is entering a different world – no A level coursework or GCSE controlled assessment in any year groups. They’ll be right in there with linear qualifications, terminal exams, and practical work only formally assessed in written exams or directly through Practical Endorsement at A Level. It’s an interesting time to be sure, although with all the changes, departmental systems, curricula, resources etc may not be as well embedded as 3-4 years ago.

The session covered a lot of (minds-on) ground, and I pointed the students at a range of reports and resources that would make interesting reading for them over the summer. We started off by briefly reviewing the work of the SCORE reports on Practical Work (2008 and 2013), discussing the National Curriculum and the Ofsted ‘Maintaining Curiosity’ report from 2013. I’ve spent some time thinking about this particular Ofsted report as I’ll be speaking at the ASE Guildford conference later this month.

A pause in my talking allowed the students to discuss together the reasons they use practical work, and then to look at the reasons given in a survey (ASE login required – Wilson et al. (2016), SSR, 98(362), 119-128) carried out by OCR recently. The discussion afterwards was particularly interesting, as many of the students reflected critically on their progression over their two teaching practices. A lot saw that they started out with a ‘classic’ view that ‘science is a practical subject’ and therefore ‘we do practical work’. Over the year they became more nuanced in their use of practical work, and were working towards carefully identifying what the purpose(s) of a practical was before committing to it.

After a brief foray into the distinguishing features of Direct and Indirect Assessment of Practical Skills, as researched by Abrahams, Reiss and Sharpe, we looked at the new practical requirements at GCSE. Compared with the A Levels, the requirements are much lighter touch and give teachers and science departments significant latitude in how they provide the required ‘broad and balanced practical experience’. The different exam boards have implemented this in different ways, but fundamentally, students are expected to complete a minimum number of practicals (8 for each separate science, 16 for combined sciences), giving them opportunities to use the required apparatus and techniques and to make records of their practical work.

I have spent a good amount of time producing suggested practicals for GCSE Chemistry over the last year, including adaptations of the high-quality resources already available from RSC LearnChemistry and CLEAPSS. Given the new freedoms teachers have, I didn’t want to go down the route of ‘the exam board practical’ and reinventing the wheel. I think the resources (under Practical Activities) produced are useful, and will certainly be making use of them with my classes next year.

The main thrust of the hands-on practical session was to identify and try out different ways of ‘doing’ the practical, recognising that many school students can end up doing very similar practicals multiple times over their schooling. For example, it’s not uncommon for them to germinate a broad-bean in a jam-jar, carry out chromatography of pen ink, and study heat loss in cardboard model houses at primary, secondary and then further education. While not suggesting that repetition isn’t an important part of learning, there are different ways of getting to the same learning end-point that may bring additional benefits of a wider appreciation of practical science, and help students move away from the idea that there is one ‘correct’ answer.

All of the practicals we tried out were adaptations from practicals developed by CLEAPSS (with acknowledgements to them). The first, and my favourite, is the electrolysis of copper chloride in a petri dish. Quick to set up, and very adaptable: within 10 minutes the students had got to several relevant observations, and were thinking about how they could use it in different ways in the classroom. There was general agreement that having all of the ‘indicators’ the first time round would likely overload the students, and that starting off with just copper chloride and the two electrodes would allow the pupils to focus on making accurate observations. Given the perpetual issues around using power packs in lessons, the use of 9V batteries was particularly enjoyed!


We briefly tried out a further adaptation of the disappearing cross reaction. The reaction box is normally made from food containers. As I didn’t have enough around the house the night before when I was making them up, I used some spare petri-dishes instead. They mostly worked, although the polystyrene proved much more brittle than the polythene (polypropylene perhaps?) of the food boxes, so making the holes big enough is definitely important!


We discussed how rates of reaction can be introduced with greatly simplified equipment. This is a potentially useful introduction, with reduced extraneous cognitive load thanks to the familiar apparatus, compared with the relative complexity of some standard practical setups (thinking Mg + HCl). I showed a brief practical I did in my kitchen using vinegar + chalk and one of the very cheap 2 d.p. balances now available online, and collected some pretty reasonable data.

We finished with a quick go on my adaptation of the CLEAPSS precipitation on a sheet, ‘March of the precipitANTs’. There was a discussion about whether the students would have the fine-motor skills to deal with moving around a few crystals on the sheet, rather than aliquoting out solutions into test tubes. I contended that most students, given the opportunity and time, could make a good job of this. They are required to carry out fine tasks throughout their schooling, including developing handwriting, art and D&T work etc, and I think close fine work is a useful skill to develop in science (education of the head, heart and HAND?)


We spent some time discussing progression of learning, something I have written about in more detail here. In summary, I used the context of teaching and learning of titration. My first go at this was early in my teaching career, and was a (pedagogic) disaster. I gave a Year 11 group a 20 minute demonstration on how to carry out a titration, and then sent them off with a practical sheet and expected them to replicate what they had just seen. Understandably, very few managed to produce anything like useable results! My subsequent attempts have had a significantly more thought through instructional progression, and they were much more successful.

We finished with a very brief run through of the requirements of the A-level Practical endorsement, and how large-scale investigative work can still be done in Chemistry A Level.

Overall, the session was a refreshing break from the office, and it was good to see that there are still vibrant and enthusiastic people coming into the profession. I look forward to joining them back at the chalkface in September.