Who we are
Twitter Journal Club is (as the name may suggest) a Twitter-based journal club. We meet fortnightly on Sunday nights at 8pm UK time (7pm GMT) to discuss & critique a variety of medical papers.
When I was introducing this paper I chose to highlight that one of the reviewers thought it was a poor quality study. I don’t know if that influenced the discussion, or even non-participation in last week’s #twitjc, but there were several tweets expressing disappointment with the paper during the discussion. At first glance this appeared to be an accessible paper on medical education which would provoke a lot of discussion. But on closer inspection it contains complex analyses, the results of which many of those reading the paper did not manage to get to grips with. The poor presentation of some of the results in the additional file did not help. And the authors reach conclusions which are hard to justify.
Overall the main finding of the research was that medical schools seemed to have little impact on how students performed in post-graduate examinations. The better a school’s intake was at passing exams at 18, the more likely its graduates were to pass exams a decade later. This prompted me to ask whether that suggested that, rather than a national exit exam, we needed a national entrance exam. @twsy suggested that if we wanted to look at the ‘value added’ by the medical school then we would need both a national entrance and a national exit exam.
Some medical schools have graduates who take longer to pass professional exams. Is this an issue that should concern medical schools? And if it is, what should we do about it? The correlation demonstrated in this paper suggests that if we wanted uniform outcomes for graduates of all medical schools then we would need uniform intake. It is unlikely that it would be socially acceptable to make students complete a national entrance exam and then allocate them to medical schools across the UK to ensure an equal mix of academic performance. So we are left with the current situation.
We asked if performance of graduates in post-grad exams was a good indicator of the performance of a medical school. We didn’t think that it was, but we weren’t sure how performance of a medical school should be assessed, or if it should be at all. As an aside there was some discussion about what would make a good doctor at the individual level. Was it ‘head knowledge’ or good communication? It was pointed out that UKFPO was now trialling an assessment of situational judgement as a way of allocating doctors to further training. This is certainly something I would like to learn more about.
Access to the reviewers’ comments was generally lauded. We would like to see this more often, as it can aid understanding of a paper. In my opinion it would have been interesting to see some editorial comment on how two such different reviews were reconciled so that the outcome was publication.
I know that some people missed out on participating in this discussion, so I hope that you will take the opportunity to leave a comment here.
Should we have discussed this paper? Yes, we should. Many people will have heard of it before, and now they hopefully have a better understanding of its findings and limitations. #win!
Thank you Fi for asking me to write an introduction and suggest some discussion points for tonight’s Twitter journal club. Having a special interest in medical education I was very happy to see this paper by McManus et al. suggested for Week 4 of the journal club.
Discussion point 1: What factors do you think might explain variation in performance in MRCP between medical schools?
What did the authors do? They looked at outcomes in the MRCP (Membership of the Royal Colleges of Physicians) examination for entrants from all medical schools between 2003 and 2005. They found that in the Part 1 and Part 2 exams, Cambridge, Oxford and Newcastle graduates did significantly better than average, and the performance of Liverpool, Aberdeen, Dundee and Belfast graduates was significantly worse. In the PACES section (a clinical examination based on a modified OSCE) Oxford graduates performed significantly better, and Liverpool, Dundee and London graduates significantly worse.
This first part of the analysis is quite easy to understand but the authors then go on to construct a multi-level model to see if they can explain variation between the medical schools.
Since it is known that ethnicity and gender are correlated with MRCP performance, and they had this as individual level data, they adjusted for this.
Discussion point 2: Is it surprising that the average offer to those applying to a medical school may predict performance of graduates in MRCP?
Two complex analyses were performed in this study: a multilevel model, and a structural equation model. Unfortunately the results of the multilevel model are not produced in an easy to understand format, although there is a figure in an additional file which is downloadable.
The authors looked for correlations between medical school performances in MRCP and a plethora of other factors. This information was pulled from other sources such as the Guardian tables, a survey of the cohort of medical students who started university in 1990/1, and the offers which each medical school made to students in the mid-1990s.
They found correlations between:
- Offers made to students (A level or Scottish Higher grades): the higher the offer, the better the MRCP performance
- The proportion of final-year medical students reporting interest in a career as a physician, and reporting interesting medical teaching: the higher the proportion, the better the MRCP performance
- The percentage of graduates taking MRCP: the higher the percentage, the better the performance
However when these factors were analysed together, it was only admission grades that seemed significant.
They also looked at correlations with data from the Guardian tables. In a multiple regression, again only admission criteria were found to be significant.
In the multilevel model, the entrance qualifications of graduates were found to explain 62% of variance, which in this type of study is a large amount. 38% of variance remained unexplained, but a commenter has suggested that the contribution of entrance qualifications may be under-estimated because of a ‘ceiling effect’: many entrants may have been offered the highest grades of three As.
Discussion point 3: Are the authors correct to conclude that this analysis suggests that a national exit exam should be introduced?
What do the results of this study mean? To place the study in context it is perhaps useful to start with the last words of the authors in the paper. They believe that this analysis supports the case for the “introduction of a national licensing examination” in the UK.
We don’t have a national exit exam in the UK. Instead the General Medical Council (GMC) regulates medical education through individual medical schools. Quality assurance of medical education and the final exams which must be passed to gain provisional entry to the medical register rests with the GMC and external examiners.
This analysis shows that different medical schools admit students with different school qualifications, and that the higher the entrance requirements, the greater the subsequent success in MRCP. McManus et al refer to a study which shows this is also true of performance in MRCGP exams, but I cannot find that publication.
Discussion point 4: How should we judge the performance of medical schools? Is performance of graduates in post-graduate examinations important?
When this study was published it contributed to discussion about whether a national exit exam should be introduced. Ian Noble, a Sheffield medical student, writing in the BMJ, suggested that medical schools should be judged on whether the graduates they produced performed as competent foundation doctors, not on how well graduates performed in subsequent examinations. Since it is rare for graduates to be pulled up for poor clinical performance, this suggests that we have no problem for a national exit exam to solve.
Discussion point 5: How helpful is it to read reviewers’ comments on a paper? Is this something that all journals should aim for?
This paper is published in BMC Medicine which also posts the comments of the peer-reviewers. One of the reviewers suggested that this paper should not be published as although it involved a commendable analysis of multiple datasets it did not “help me to understand the problems in medical education better, nor does it help me to improve medical education or to advance medical education as a science”. The authors’ response to this criticism is also published. The reviewers and the main author also had a dispute over another analysis published in BMC Medicine on gender and ethnicity and success in MRCP. But that discussion was pre-submission so is private correspondence between those involved.
Conflict of Interest: I’m a Belfast graduate!