Last Tuesday, as you all know, we visited Vanderbilt’s medical school. I honestly didn’t expect to see as much as we did (CELA, the lounge, the ER). The sessions offered by students and admissions staff were helpful as well.
(One of the simulation exercises in CELA. Med students use devices like these to practice surgical skills by moving the objects from one peg to another. The idea is to understand three-dimensional entities from a two-dimensional perspective. And to do so in 48 seconds or less.)
While we were touring CELA, I caught on to a recurring theme of groundbreaking changes in healthcare. The doctors would often say admiringly, “I didn’t have this when I was in med school.” For them, certain scenarios could only be experienced if their teachers happened to come across certain patients in their practice. Med students would watch their instructors handle such a case once, have their own hands guided the next time, and that was it. Now, students can practice skills like laparoscopy, intubation, taking patient histories, and breaking bad news with either well-programmed mannequins or well-trained actors. Modern ideas and technology enable us to teach medicine safely and comprehensively in ways not previously possible.
Additionally, we learned about Vanderbilt’s new curriculum from their admissions department. These changes place a greater focus on patient interaction, immersion, leadership, individualized experience, variation in backgrounds, and quality of interests (as opposed to quantity). In other words, they want doctors who do more than look good on paper.
Many people can tell us how wonderful these programs are (and get paid to do so), but how do they affect patient outcomes? Talking to the fourth-year med students and touring the ER provided the most direct evidence. The two students we spoke to seemed well-rounded and genuinely passionate about their field of study, and the aforementioned technologies have most likely helped both students and doctors with efficiency and competency. However, Vanderbilt Medical Center statistics from medicare.gov don’t seem to show many significant differences in patient outcomes compared to the national average. I could be a bad statistician/googler, or maybe there just isn’t transparent data on healthcare quality over time. Both are equally likely. This could end with a lecture on problems with healthcare safety in the US, but I don’t feel nearly patient enough (and neither do you, I’m assuming).