Who we are
Twitter Journal Club is (as the name may suggest) a Twitter-based journal club. We meet fortnightly on Sunday nights at 8pm UK time (7pm GMT) to discuss & critique a variety of medical papers.
Category Archives: Discussion Outlines
1. This study uses an unorthodox set of methods to arrive at its conclusions. Is the methodology sound enough to give us adequately robust results?
2. This study is limited to one country; how applicable are the results to other countries and cultures?
3. Do the results of this study indicate a greater need for education amongst journalists, and more stringent guidelines on reporting suicides?
4. What might be the reasons behind the protective effect of articles about suicidal ideation where suicide was not attempted? Does this indicate a potential role for the media in public education on this matter?
5. In what way do the results of this study have implications for other forms of media?
1. The paper does not mention what early palliative care involved. Does this detract from the beneficial effect shown? Does it limit our ability to apply the results to clinical practice in other hospitals?
2. Did the paper look for appropriate outcomes to measure? How useful were these outcomes, and was the result clinically relevant?
3. A survival advantage was shown amongst those patients in the early palliative care group over those on standard treatment alone. What are the possible mechanisms behind this?
4. How can the results of this study be used to guide treatment in other cancers, including those which may be curable?
1. Was the inclusion group in this trial too wide, especially with regard to age (from a 60-day-old baby to a 12-year-old child)?
2. An editorial in the Archives of Disease in Childhood criticised the study and highlighted what they felt was the most harmful limitation: the reliance on one non-specific clinical feature for the diagnosis of hypovolaemic shock (see this BMJ rapid response for further details). Does this make the study invalid for evaluating fluid boluses in children with hypovolaemic shock?
3. How applicable are these results to the use of fluid boluses in febrile children in the developed world?
4. Does there need to be a similar study in developed countries? If so, would such a study ever get ethical approval?
Thank you to @welsh_gas_doctor, @trufflethebendy and @Bokarelli for proof-reading the discussion points for me.
The points for this evening are as follows:
- Does the single-centre design mean that there are too few physicians being surveyed? Does it limit the range of viewpoints and practices that are examined?
- If most patients spoke to their doctor for at least five minutes about PCI, why did 88% still believe it would reduce their risk of MI?
- Why would 43% of cardiologists who identified no benefit in PCI in a hypothetical scenario proceed with it anyway?
- What can doctors do differently to communicate the benefits of treatment with their patients?
- With regard to the consent form, does this paper raise questions about the nature of informed consent?
The discussion points for this week are as follows:
1. This paper is a retrospective cohort study – what is the place of observational studies in influencing or changing clinical practice?
2. Endpoints measured – were they robust enough to show that beta blockers are safe in COPD in this patient population?
3. The paper used a database of patients in one geographical area. Should we be trying to build up the links needed to produce this kind of data across the UK more generally?
4. Is there a need for prospective research into whether beta blockers are safe in patients with COPD?
The NEJM paper published in 2009 has had an impact worldwide, with the introduction of surgical checklists in over 3000 hospitals. This paper highlighted an important patient safety issue and aimed to tackle it with a relatively simple intervention. The discussion points below are meant as a broad starting point for the evening; I hope that the methodology of the paper in particular will be discussed in detail.
1. This study ran for less than a year in eight healthcare settings, and many criticisms have been made of its methodology (see this blogpost & this letters page for examples). Is this adequate to support the widespread implementation of the checklist on the basis of this paper alone?
2. In the discussion of the paper the authors mention the Hawthorne effect as a possible mechanism of improvement, i.e. an improvement in performance due to the subjects' knowledge of being observed. However, this has also been raised as a flaw in the study: the fact that the participants knew they were in a trial could have led to the improvements shown, rather than the checklist itself. Does this reduce the validity of the study and its findings?
3. The checklist is a relatively simple intervention; is there a risk that it could become a tick-box exercise rather than being given due care and attention?
4. In a letter responding to the paper members of NCEPOD stated that they supported the initiative but were concerned that the implied decrease in the perioperative rate of death was unlikely to be as great in the UK as reported in the paper. Does this make the study any less relevant to practice in developed countries?
If there is time I would also like to discuss how the paper is relevant to practice in less developed countries. Thank you to @fidouglas, @amcunningham & @assidens for their help.
Thank you Fi for asking me to write an introduction and suggest some discussion points for tonight’s Twitter journal club. Having a special interest in medical education I was very happy to see this paper by McManus et al. suggested for Week 4 of the journal club.
Discussion point 1: What factors do you think might explain variation in performance in MRCP between medical schools?
What did the authors do? They looked at outcomes in the MRCP (Membership of the Royal College of Physicians) examination for entrants from all medical schools between 2003 and 2005. They found that in the Part 1 and Part 2 exams, Cambridge, Oxford and Newcastle graduates did significantly better than average, and the performance of Liverpool, Aberdeen, Dundee and Belfast students was significantly worse. In the PACES section (a clinical examination based on a modified OSCE), Oxford students performed significantly better, and Liverpool, Dundee and London students significantly worse.
This first part of the analysis is quite easy to understand but the authors then go on to construct a multi-level model to see if they can explain variation between the medical schools.
Since it is known that ethnicity and gender are correlated with MRCP performance, and they had this as individual level data, they adjusted for this.
Discussion point 2: Is it surprising that the average offer to those applying to a medical school may predict performance of graduates in MRCP?
Two complex analyses were performed in this study: a multilevel model and a structural equation model. Unfortunately the results of the multilevel model are not presented in an easy-to-understand format, although there is a figure in a downloadable additional file.
The authors looked for correlations between medical school performances in MRCP and a plethora of other factors. This information was pulled from other sources such as the Guardian tables, a survey of the cohort of medical students who started university in 1990/1, and the offers which each medical school made to students in the mid-1990s.
They found correlations between:
- Offers made to students (A level or Scottish Higher grades): the higher the offer, the better the performance.
- The proportion of final-year medical students reporting an interest in a career as a physician, or reporting interesting medical teaching, and better performance in MRCP.
- The higher the percentage of graduates taking MRCP, the better the performance.
However when these factors were analysed together, it was only admission grades that seemed significant.
They also looked at correlations with data from the Guardian tables. In a multiple regression, again only admission criteria were found to be significant.
In the multilevel model, the entrance qualifications of graduates were found to explain 62% of variance, which in this type of study is a large amount. 38% of variance remained unexplained, but a commenter has suggested that the contribution of entrance qualifications may be under-estimated because of the 'ceiling effect': many entrants may have been offered the highest grades of three As.
Discussion point 3: Are the authors correct to conclude that this analysis suggests that a national exit exam should be introduced?
What do the results of this study mean? To place the study in context it is perhaps useful to start with the last words of the authors in the paper. They believe that this analysis supports the case for the “introduction of a national licensing examination” in the UK.
We don’t have a national exit exam in the UK. Instead the General Medical Council (GMC) regulates medical education through individual medical schools. Quality assurance of medical education and the final exams which must be passed to gain provisional entry to the medical register rests with the GMC and external examiners.
This analysis shows that different medical schools admit students with different school qualifications, and that the higher the entrance requirements, the greater the subsequent success in MRCP may be. McManus et al refer to a study showing this is also true of performance in MRCGP exams, but I cannot find that publication.
Discussion point 4: How should we judge the performance of medical schools? Is performance of graduates in post-graduate examinations important?
When this study was published it contributed to discussion about whether a national exit exam should be introduced. Ian Noble, a Sheffield medical student, writing in the BMJ, suggested that medical schools should be judged on whether the graduates they produced performed as competent foundation doctors, not on how well graduates performed in subsequent examinations. Since it is rare for graduates to be pulled up for poor clinical performance, this line of argument suggests there is no problem for a national exit exam to solve.
Discussion point 5: How helpful is it to read reviewers' comments on a paper? Is this something that all journals should aim for?
This paper is published in BMC Medicine, which also posts the comments of the peer-reviewers. One of the reviewers suggested that this paper should not be published because, although it involved a commendable analysis of multiple datasets, it did not "help me to understand the problems in medical education better, nor does it help me to improve medical education or to advance medical education as a science". The authors' response to this criticism is also published. The reviewers and the main author also had a dispute over another analysis published in BMC Medicine, on gender and ethnicity and success in MRCP; but as that discussion took place pre-submission, it remains private correspondence between those involved.
Conflict of Interest: I’m a Belfast graduate!
CRASH-2 is an extremely interesting paper and I have to admit I have found it hard to narrow down the discussion points for this week’s journal club. A huge amount of credit has to be given to @GabrielScally for turning my fairly incoherent ramblings into a sleek set of points to be used for this evening, his help has been utterly invaluable.
1. This trial had a peer-reviewed protocol, and the protocol seems very robust. Was the study population relevant to the question being answered (with respect to patients deemed to have, or be at risk of, significant haemorrhage)?
2. Did the paper ask the right question (i.e. was the primary outcome of mortality at 28 days the right one to measure)?
3. With regard to the reporting of adverse events in the paper, do we think this was adequate to detect any adverse effects from the treatment?
4. Tranexamic acid was shown to be effective, but the precise mechanism is unclear. Should research into exactly how it works in trauma patients be a major priority, or is it enough that it does?
5. The subgroup analysis showed that the earlier the treatment is given, the better: after 3 hours the treatment actually increased bleeding rates. But gaining consent for participation in the trial delayed treatment (see Roberts et al). Should consent be necessary in the emergency setting?
During the discussion tonight the abbreviation TA will be used for tranexamic acid and I will be tweeting this before the evening’s events start at 8.00pm BST.
(For an interesting take on ethics committees and their impact on trials see this post by @bengoldacre on his Bad Science blog and then keep going, his column Bad Science in the Guardian is a must read)
We’ve come up with an outline for how we think this evening’s discussion should be structured, based on the following points:
- What does this paper tell us? How relevant is this paper today? Should our priorities be individual benefit or population benefit? Where should we draw the line (e.g. weighing adverse side effects, cost, etc.)?
- How does the paper present risk to individuals and populations? How do we present this to patients? (Absolute vs Relative Risk Reductions and Number Needed to Treat)
- Has there been a separation of prevention and treatment & has this changed in last 30 years? Are some specialities more into prevention than others? Why? How can we go about integrating prevention with existing treatment? Screening programmes?
- Should more research be dedicated to prevention rather than cure?
- How do we define a patient? Should those receiving preventive treatment for, say, hypertension assume the patient role? Is the distinction between therapeutics and preventive medicine blurred?
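As a quick refresher on the risk measures mentioned in the points above, here is a short illustrative calculation. The event rates are entirely made up for the sake of the example, not taken from the paper:

```python
# Illustrative only: hypothetical event rates, not figures from any trial.
control_event_rate = 0.04    # 4% of untreated patients have the event
treatment_event_rate = 0.03  # 3% of treated patients have the event

arr = control_event_rate - treatment_event_rate  # Absolute Risk Reduction
rrr = arr / control_event_rate                   # Relative Risk Reduction
nnt = 1 / arr                                    # Number Needed to Treat

print(f"ARR = {arr:.2%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
```

A "25% relative risk reduction" sounds impressive, but the absolute reduction here is only 1 percentage point: 100 patients must be treated to prevent one event. This gap between relative and absolute framing is exactly why how we present risk to patients matters.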
Based on feedback from last week, we’re going to try and keep the discussion to an hour in length (but obviously you can keep discussing afterwards using the hashtag should you wish!).