NPs better than PAs in DM care

A recent article suggests NPs have better outcomes in the management of DM. Maybe there is more to "nursing" care of patients? Many wonder why NP programs cover the content they do (family theory, health promotion, and so on) instead of covering what PA programs cover (the medical model). Wouldn't it be great if PA/MD programs started wondering whether they should be covering what NP programs cover!

http://www.annfammed.org/cgi/content/full/6/1/14

Like how to make a useless study with such a small sample?

or

How to read a study without recognizing its limitations?

Actually, if you read the article, the outcomes are the same. The practices with NPs (note that the authors did not study who actually saw the patients) were better at the processes for following three select biochemical markers for diabetes.

Another likely explanation is that the presence of NPs in a practice is a surrogate marker for practice size, i.e., the smaller the practice, the better it is at following processes (at least the ones mentioned).

Beyond that, this study has so many limitations, including observer bias, sample size, choice of measures, lack of data on specific provider characteristics, and a narrow geographic cross-section, that it is essentially useless.

David Carpenter, PA-C

Aww, come on, play nice in the sandbox.

Yes, it was an extremely limited study with major limitations. It was published in an MD journal, not an NP one, so it was a surprise to see this content published.

But my take: this does not have clinical significance due to the study design and limitations; however, it does provide background for further study.

Jeremy

So how many family medicine practices need to be part of a study? This was regional, covering parts of New Jersey and Pennsylvania and auditing 846 known patients with diabetes. Studies need to start somewhere, and I didn't see where any funding was received. It looks like a good pilot to submit when seeking funding for a much larger study.

I wondered about the quality of the journal; it looks like a new journal with early recognition for quality: http://www.aafp.org/annals/x28117.html

I also found that the authors are all non-nurses, representing biostatistics, family medicine (MDs), and epidemiology. What a great study to encourage our FNP graduate students to replicate.

Personally, I think that "Mine Is Bigger Than Yours" studies are a waste of time and do a great disservice to our patient population.

How about a study that looks for ways to improve the care of DM patients across the continuum?

We don't need to advocate anyone doing more poor-quality studies. The funding for this came from the National Heart, Lung, and Blood Institute as well as an AAFP research grant. The original study can be found here:

http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1886493

Interestingly, this study on the use of EMRs and diabetes measures showed that practices that did not use an EMR had better processes and outcomes than practices that did.

Also there is this article on their new "tool":

http://www.ncbi.nlm.nih.gov/pubmed/17489913?ordinalpos=5&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_RVDocSum

So you have essentially the same data set being used over and over for more and more diverse purposes. The data set was barely adequate for its original purpose: looking at one practice-level variable (EMR use) and its effect on a few clinical measures. Expanding it to examine provider types clearly shows its inadequacy.

Another problem with the data set is that the audit sample size is fixed (20) no matter how big the practice is. This means a large practice with, say, 20,000 patients on its panel and perhaps 1,400 diabetics (given the national incidence) is rated on the same 20 charts as a small practice with far fewer diabetics (note that in the study some practices had fewer than 20 diabetics, so those were 100% audits). Given the practice demographics, it is not surprising that the larger practices were the ones employing PAs (see the earlier post about surrogate markers), so there is a greater chance of error in those practices' scores. You only have to look at basic sampling theory to see that the smaller the fraction of the panel you audit, the greater the sampling error in the estimate.
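
For illustration, here is a quick simulation sketch of that audit problem. The numbers (a 70% true adherence rate, a 1,400-patient diabetic panel) are assumptions for the example, not figures from the study:

```python
# Sketch: how noisy is a fixed 20-chart audit of a large practice?
# All numbers are illustrative assumptions, not from the study.
import random
from statistics import mean, stdev

random.seed(1)
TRUE_RATE = 0.70   # hypothetical true adherence to a process measure
PANEL = 1400       # hypothetical number of diabetics in a large practice
AUDIT = 20         # fixed audit size used by the study

estimates = []
for _ in range(10_000):
    panel = [random.random() < TRUE_RATE for _ in range(PANEL)]
    audited = random.sample(panel, AUDIT)       # pull 20 random charts
    estimates.append(mean(audited))             # the practice's "score"

print(f"mean of audit estimates: {mean(estimates):.3f}")   # ~0.70 (unbiased)
print(f"spread (std dev):        {stdev(estimates):.3f}")  # ~0.10, i.e. +/- 10 points
```

A tiny practice audited at 100% has no sampling error at all, so comparing its score with a big practice's 20-chart estimate is comparing a measurement to a guess.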

The other problem with the outcomes is that, despite the title of this post, the study does not measure NP practice. Instead it suggests that the mere presence of an NP improves the processes. Under this theory there is no need for the NP to actually see patients or be involved in their care; the influence of an NP being in the building is enough to substantially alter the thought patterns of all the providers ;).

Or you could interpret this as saying that NPs are better at following directions than PAs :devil:. Could be; my practice has a hard enough time getting me to wear socks :D.

There needs to be more research done here, but this does not help. It is a classic demonstration of statisticians trying to coax outcomes out of data where they do not exist. It is the reason studies of this type should be a collaborative effort between medical providers and statisticians (and one of the reasons I make good money doing consulting work). My guess is that the MDs are not familiar with nonphysician providers, given some of the comments made in the paper.

The first paper was interesting, making the point (within the limitations of the study) that EMRs are not the panacea for good diabetic-care processes. The second paper overstepped the bounds of what the data can support and is simply garbage.

David Carpenter, PA-C

I just want to say thanks to David for posting stuff like this. I'm surprised at how many people in medicine/nursing (and that includes MD/DOs, RNs, PAs, NPs, CRNAs, etc.) can't read a study critically.

Before I went to nursing school, I worked a little in clinical research (I even had one study that I authored published; THAT was a learning experience!). One day I got to sit in on a class at the local med school. I don't remember exactly what the class was, but one of the attendings gave a guest lecture on critically reading journal articles and studies. It was a great lecture. He took two studies, one published in the New England Journal of Medicine and one in JAMA, and broke them down. By the end of the lecture, you realized that both studies were essentially worthless.

He went over all that statistics material we don't get in school (except those of us who have taken a real stats class) and looked at the numbers. The two studies were vastly underpowered, and there was all sorts of room for interpretation of the data. The conclusions the studies' authors reached were a leap at best. And yet, after peer review, they were both published in prestigious journals.

We need to remember to read studies with a critical eye and not just assume that because the conclusion sounds good (read: we like or agree with the conclusion :specs:) and it's published in a journal, it is a good study.

Bryan

If possible, everyone should take a good EBM course that will expose them to critical analysis and basic statistics; these are pretty widely available. For me the real wake-up call was a class called Clinical Trial Design. One thing it taught me is that there are very few statisticians who understand medicine. The other is that the trial design process is based not only on good statistical design but also on budget: trials are very expensive, and frequently the limiting factor is how much money you have to pay for the trial. We did a class project for a researcher who wanted to study a drug in a really rare condition. We figured that to see the effect he wanted, we would need to enroll 1,200 patients. Unfortunately, there were only 250 patients in the world with the disease. Sometimes reality gets in the way of study design.
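
For anyone curious where enrollment numbers like that come from, here is a minimal sketch of the standard two-group sample-size approximation. The 0.16 standardized effect size is a hypothetical value chosen so the output lands near the ~1,200-patient figure above; it is not the actual effect from that class project:

```python
# Sketch of the usual two-group sample-size approximation:
#   n per group = 2 * ((z_{1-alpha/2} + z_{power}) / effect_size)^2
# Effect size is Cohen's d; all values are illustrative assumptions.
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    return 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2

print(round(n_per_group(0.16)))  # ~613 per group, ~1,226 total
print(round(n_per_group(0.50)))  # a medium effect needs only ~63 per group
```

Small effects blow up the required enrollment quadratically, which is exactly how a study can need 1,200 patients for a disease with 250 patients in the world.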

I can see the hallmarks of underfunding in the study mentioned above; the selection of a fixed number of patients per practice suggests it. Nurse coders are very expensive (even if you train them yourself), and going through charts that may be quite large is time-consuming. They probably figured they could pay for about 1,000 charts and made the numbers work for that. Unfortunately, the consequence is a poorly designed initial study and a worthless additional study.

David Carpenter, PA-C

David,

I did take more than one stats course and a clinical trials course during my graduate work. I agree with your premise that the numbers are too small, but I do feel the study design techniques are appropriate for the data and the hypotheses in question. Obviously the study is small, with only 9 observations with NPs and 9 with PAs. My take is that the research needs to be replicated; to simply bash the results and say the information is worthless is your opinion, not fact. Your point regarding cost is very plausible. I suspect cost and time were the rationale for the authors choosing to exploit additional findings from the original research data set; as you know, this is very common among academics seeking to publish.

The attention the results have generated is valuable in that others will hopefully seek to replicate the research in another population while learning from some of the pitfalls of the original design. Research data are far more powerful than anecdotal information as NPs continue to validate their role in the provision of healthcare. Thus, instead of stomping on the findings, we should be suggesting how to realistically replicate the study, resulting in better research and even better patient care.
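
As a rough illustration of what 9-versus-9 can and cannot show, here is a minimal sketch that inverts the usual power formula to ask what the smallest detectable effect would be with 9 practices per group. The alpha and power values are assumptions, and treating the practices as two independent groups is a simplification:

```python
# Sketch: smallest standardized effect detectable with n per group,
# from d = (z_{1-alpha/2} + z_{power}) * sqrt(2/n).
# Alpha/power values are illustrative assumptions.
from math import sqrt
from statistics import NormalDist

def min_detectable_effect(n_per_group, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sqrt(2 / n_per_group)

print(f"{min_detectable_effect(9):.2f}")    # ~1.32: only enormous differences
print(f"{min_detectable_effect(100):.2f}")  # ~0.40 with 100 practices per group
```

With 9 per group, only very large standardized differences (d around 1.3) would be reliably detectable, which itself argues for replication with a much larger sample.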

I have no problem with small pilot studies if they are properly designed. This one was not. The original study was marginally designed to examine the impact of EMRs on diabetes processes. That was acceptable, and it made some important points about developing EMRs, namely that they may not be the panacea for implementing processes that they were envisioned to be.

This study took the exact same data, separated out the two practices that used both PAs and NPs, and then tried to apply what was left to provider type. Along the way it turned into absolute propaganda in several areas. The first is the conclusion in the abstract:

"family practices employing nps performed better than those with physicians only and those employing pas, especially with regard to diabetes process measures. the reasons for these differences are not clear. "

what this should have added is that the outcomes were identical. this is clearly a biased slant for the np population.

The second question is why they excluded the practices employing both NPs and PAs. Those clearly represent a population that should respond to the NP presence. However, the decision was made to exclude them, probably because they represent large group practices, which would have shown the confounding factor of group size.

The next problem is the conclusion in the paper itself:

"In conclusion, family medicine practices with NPs performed better at providing some types of diabetes care (primarily monitoring tests) than physician-only practices and especially better than practices using PAs. With the burgeoning use of PAs and NPs in attempts to cut costs and try different models of clinical care,35 these results point to a need for additional research to confirm these associations and to explore their causes. Given the lack of literature examining the roles and contributions of both NPs and PAs within the context of family medicine practices, even additional descriptive studies would be helpful. Such studies should be part of the process of discovering how teams of clinicians that include midlevel practitioners can be used most effectively and efficiently in primary care practice."

This is fundamentally false. The NPs did not provide better care; they were better at following processes as defined by the study group. There was no difference in outcomes, and outcomes are the fundamental measure of good care in EBM.

Then, of course, there are sections that are pure AANP propaganda:

"For example, PAs are trained to work in environments where they are supervised by physicians, whereas NPs may treat patients independently. In addition, NPs may add new perspectives within a team of clinicians because of their background in nursing as well as their emphasis on the well-being of the whole patient, prevention of illness, and patient education.21,22"

This once again shows bias by the authors, since it does not represent NP or PA practice in those practices, or in particular the NP and PA practice acts in those states at the time the study was done. The description of PA training is given without reference to any of the studies done here and is simply opinion, or realistically propaganda.

If you want to know where the controversy lies, look at the title of this thread. The title is wholly unsupported by the study, yet the same unsupported claim is made in the study itself. The study could equally have been titled "Practices with NPs are more likely to order more tests without a demonstrated improvement in outcomes, thereby increasing patient cost and decreasing efficiency."

Interestingly, the study that the authors completely missed is the one from Kaiser:

http://www.jaapa.com/issues/j20021101/hooker1002.html

It shows that, compared to physicians, for select diagnoses PAs used fewer labs and, for the most part, fewer medications; the overall cost per episode of care was lower. This is an example of a study that looked at the costs of employment and did not attempt to overreach its target. Look at the power there, for example.

This next article may have come out after the paper was submitted, but it accurately describes the problems in accounting for the PA and NP portion of the work effort in medical practices, problems that the original study did not even address, much less control for:

Morgan P, et al. Missing in action: care of PAs and NPs in national health surveys. Health Services Research. 2007.

I will stand by my original comments. The paper as written is not usable. It is simply propaganda, written by someone who either has no real understanding of PA and NP practice or has a fairly blatant agenda (I'll be charitable here).

David Carpenter, PA-C

While there is little research whose sole purpose is comparing PAs and NPs, there is a substantial amount that has done exactly that indirectly by comparing midlevels to physicians. I suspect this data set was chosen for cost, as well as because it showed what they wanted to see. One could easily review the current literature (I have done so) and see that there have been well-designed studies that indirectly showed little difference in outcomes between provider types. I suggest that if you have an interest in comparisons of PA and NP productivity, outcomes, and practice patterns, you do a critical review of the literature yourself and see whether you can prove your preconceived notion that NPs are better. I would share mine, but I would prefer not to give you my full name and location.
