Critique of Study: More BSNs equal better pt outcomes


Specializes in Critical Care.

This is EXTREMELY long but worthwhile to read if the topic interests you.

Linda Aiken et al. released a landmark study in 2003 through JAMA, "Educational Levels of Hospital Nurses and Surgical Patient Mortality."

The study is often cited and contends that higher rates of BSN education at the bedside directly translate to improved pt outcomes.

This is my critique:

Why the study, "Educational Levels of Hospital Nurses and Surgical Patient Mortality" is flawed.

1. Academic Laziness

The original data pool was used for an earlier study about staffing levels and mortality GENERALLY. That data was just copied over for this study. But it wasn't just copied; it was copied with the full assurance of the authors that the results of the first study that used this data could be 'factored out' of this subsequent study.

2. Discrimination Bias (Hospital Selection)

Before analyzing the data, the authors first decided that it would be necessary to 'exclude' hospitals that didn't fit their data set. Some were excluded for valid reasons (they didn't report to the data set), but however valid, the exclusion ITSELF taints the data. THIS IS ESPECIALLY TRUE SINCE THE EXCLUSIONS INCLUDE ALL VA HOSPITALS - a known source of high BSN recruitment. The very hospitals that might yield some useful data on the subject were ELIMINATED from the study! Other hospitals were excluded because the data they generated didn't meet the authors' needs. In other words, INCLUSION of that data would disturb the conclusions of the study.

So the authors maintain that exclusion of some data is warranted. OK, I can concede that point, as I understand that outlying data (many standard deviations out) can skew the majority of the data. But excluding large amounts of data that are quite possibly within a single standard deviation of what is being studied, on the basis that such data wasn't available, serves only to undermine the whole study. It is a frank admission that the data itself is incomplete, and so, suspect.

This compounds the academic laziness mentioned above. The data set was copied from another study with the full understanding that it didn't meet the needs of this study, AND COULD NOT MEET THE NEEDS OF THIS STUDY, because it left out the hospitals MOST LIKELY to represent a significant sample for this study. Rather than develop data 'pertinent' to THIS study, that academic laziness now calls for this missing, and possibly highly relevant, data to simply be excluded from consideration.

3. Degree Bias

The authors state in the study: "Conventional wisdom is that nurses' experience is more important than their educational levels." It is this 'conventional wisdom' that the study aims to examine. But how does it do so? By buying into the exact same conventional wisdom: "Because there is no evidence that the relative proportions of nurses holding diplomas and associate degrees affect the patient outcomes studied, these two categories of nurses were collapsed into a single category."

HOLD ON. In a study about how degrees affect patient outcomes, an essential tenet of the study is to disregard some of the degrees held??? After such manipulation, how can you say with a straight face that a study that disregards the relationship between two degrees can reach a conclusion REGARDING the relationships between degrees?
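To be concrete, the collapsing step amounts to something like the following (a hypothetical Python sketch; the data, column names, and category labels are my own invention, not the authors'):

```python
import pandas as pd

# Hypothetical survey rows: each is one RN's self-reported highest credential.
nurses = pd.DataFrame({
    "highest_credential": ["Diploma", "ADN", "BSN", "ADN", "MSN", "Diploma"]
})

# The study collapsed Diploma and ADN into a single sub-BSN category, so any
# Diploma-vs-ADN difference is invisible to every model fitted afterward.
collapse = {"Diploma": "subBSN", "ADN": "subBSN", "BSN": "BSNplus", "MSN": "BSNplus"}
nurses["edu_group"] = nurses["highest_credential"].map(collapse)

print(nurses["edu_group"].value_counts())
```

Once that mapping is applied, no downstream model can ever see a Diploma-vs-ADN difference, which is precisely my objection.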

4. Lack of Substantiating Data

"It was later verified that this decision did not bias the result."

This statement, and others like it, appear throughout this 'study' without any mention of the methods used to 'verify'.

"Previous empirical work demonstrated. . ." - um, exactly WHAT empirical work was that?

In fact, the study makes many claims and manipulates the data in many ways, yet insists that you simply trust its 'independent verification' that none of it biased the results. Of course, the reader is never given access to said independent verification.

You have to love the 'self-affirming' validity of it all.

5. Data Manipulation

A. The data was 'manipulated' to grant varying degrees of credibility depending upon whether it came from a 'teaching' hospital vs. a 'non-teaching' hospital.

B. The data was 'manipulated' to grant varying degrees of credibility to hospitals that are more 'technological' (e.g. have transplant services) as opposed to less.

C. "An important potential confounding variable to both clinical judgment and education was the mean number of years of experience working as an RN": telling comment, but never fear, the data was 'manipulated' to take this into account.

D. Nursing workloads might affect patient outcomes. (Indeed, THIS was the previous study that this study's data set was copied from.) But, in this case, the data was 'manipulated' to take those workloads into account.

E. "Estimated and controlled for the risk of having a board certified Surgeon instead of a non-board certified Surgeon." The use of 2 'dummy variables' comparing MD licenses to general vs specialty board certification was "a reasonable way for controlling surgeon qualifications in our models."

In fact, the authors admit to manipulating the data 133 ways! But all of these 'manipulations' were later 'verified' to have produced no bias.
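For reference, 'dummy variables' of the kind described in item E are just 0/1 indicator columns. A minimal sketch of the idea, with invented surgeon records:

```python
import pandas as pd

# Hypothetical surgeon records; "none" means an MD license with no board cert.
surgeons = pd.DataFrame({
    "certification": ["none", "general_board", "specialty_board", "general_board"]
})

# Two 0/1 indicator ("dummy") columns; "none" becomes the implicit baseline
# category once its column is dropped.
dummies = pd.get_dummies(surgeons["certification"]).drop(columns=["none"])
print(pd.concat([surgeons, dummies], axis=1))
```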

6. Key Criteria Conjecture

The study's two key criteria: deaths within 30 days of hospital admission, and deaths within 30 days of complications due to 'failure to rescue'. But how were these criteria established?

In the first case, they were established by comparing the data set to vital statistics records (death records). I doubt they accurately compared 235,000 individual patients (data points) against another data set (death records) that was probably several times the size, but OK - I'll buy this for the moment.
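For what it's worth, a comparison of that size is done by joining the two data sets on patient identifiers, not record by record by hand. A hypothetical sketch (identifiers and dates invented):

```python
import pandas as pd

# Hypothetical discharge records and vital-statistics (death) records.
discharges = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "admit_date": pd.to_datetime(["1999-01-05", "1999-02-10", "1999-03-01"]),
})
deaths = pd.DataFrame({
    "patient_id": [2],
    "death_date": pd.to_datetime(["1999-02-20"]),
})

# Join on the identifier, then flag deaths within 30 days of admission.
linked = discharges.merge(deaths, on="patient_id", how="left")
linked["died_30d"] = (linked["death_date"] - linked["admit_date"]).dt.days <= 30
print(linked)
```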

In the second case, however, 'failure to rescue' was defined - NOT BY EXAMINING ACTUAL CASES OF FAILURE TO RESCUE - but by comparing ICD-9 secondary codes from admission to discharge. The assumption is that a new code means a complication, and thus a 'failure to rescue', had occurred. What?!

RE-READ THAT LAST! By making dubious assumptions about data sets (hospital reporting statistics), this study conjectures how they translate to 'failure to rescue' and then draws conclusions based on what this 'failure to rescue' might mean! ALL BY ITSELF, THIS NEGATES THE ENTIRE STUDY.
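To pin down exactly what I'm objecting to, the inference works something like this sketch (the codes and logic here are my illustration of the approach, not the authors' actual algorithm):

```python
# Hypothetical ICD-9 code sets for one patient (codes invented for illustration).
codes_at_admission = {"401.9"}            # say, pre-existing hypertension
codes_at_discharge = {"401.9", "415.11"}  # say, a new pulmonary embolism code

# A secondary code present at discharge but absent at admission is inferred
# to be an in-hospital complication rather than a pre-existing co-morbidity.
inferred_complications = codes_at_discharge - codes_at_admission

# A death within 30 days of such an inferred complication is then counted as
# a 'failure to rescue', with no chart review of the actual case.
died_within_30_days = True
failure_to_rescue = died_within_30_days and bool(inferred_complications)
print(failure_to_rescue)  # True
```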

But, it was 'verified' not to bias the study results. How was this part 'verified'? Well, you're gonna love this: "expert consensus as well as empirical evidence to distinguish complications from pre-existing co-morbidities."

In other words, the experts (the study authors) know which data is valid for purposes of inclusion into the study - and which data isn't. The experts' consensus is the key element that ensures non-bias.

There are no 'double blind' studies. No sample populations of RNs. The criteria for inclusion of 'data' are based solely on the 'consensus' of the 'experts' creating the study. And these 'experts' are backed by the AACN (American Association of Colleges of Nursing), an organization committed to BSN entry, and one which maintains, on its website, a valiant defense of this study:

http://www.aacn.nche.edu/Media/TalkingPoints2.htm

No, no possibility of bias here.

Let me ask you this: if you knew of a study conducted by Republican pollsters - where they alone determined whose answers were valid - would you trust a result that brags 'Most Americans Love President Bush!'? But here's the question I really want to ask: WHY wouldn't you trust such a result?

7. Risk Adjustment.

Still trust this study? Try this one: "Patient outcomes were risk-adjusted by including 133 variables in our models, including age, sex, whether an admission was a transfer from another hospital, whether it was an emergency admission, a series of 48 variables including surgery type, dummy variables including the presence of 28 chronic, pre-existing conditions as classified by ICD-9 codes, and interaction terms chosen on the basis of their ability to predict mortality and failure to rescue in the current data set."

So the data was manipulated 133 ways, excluding some data. But, and this is key: there are SO VERY MANY variables that could affect patient outcomes that you have to adjust for EVERYTHING except what you're looking to find. Right? This is not only what the authors contend; they contend that they SUCCESSFULLY adjusted the data, 133 different ways, for just this purpose, and completely without bias. Amazing.
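For those unfamiliar with the technique, 'risk adjustment' means fitting the outcome model with those patient characteristics included as covariates, so the education effect is read off 'after controlling for' them. A minimal sketch with synthetic data (the variable names are invented; the real model had 133 patient variables plus hospital controls):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-in for patient discharge records.
df = pd.DataFrame({
    "died_30d":  rng.binomial(1, 0.03, n),   # 30-day mortality outcome
    "age":       rng.integers(18, 90, n),    # two stand-ins for the 133
    "emergency": rng.binomial(1, 0.3, n),    # risk-adjustment variables
    "bsn_pct":   rng.uniform(10, 70, n),     # hospital's BSN percentage
})

# Logistic regression: the bsn_pct coefficient is read off *after*
# "controlling for" the patient-level covariates.
model = smf.logit("died_30d ~ bsn_pct + age + emergency", data=df).fit(disp=0)
print(model.params)
```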

8. Logistic Regression Models

So, after the study took in all this manipulated 'data', it compared hospitals with more BSN RNs to those with fewer, and reached a conclusion. Right? Wrong.

It took the data and ran a 'logistic regression model' to estimate what might happen in a given hospital "if there were a 10% increase in BSN RNs."
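Mechanically, that '10% increase' scenario is nothing more than scaling and exponentiating a fitted coefficient. A hypothetical illustration (the coefficient below is invented; as I read it, the published figure was an odds ratio of roughly 0.95 per 10-point increase in BSN share):

```python
import numpy as np

# Hypothetical fitted log-odds coefficient per 1-point increase in BSN share.
beta_bsn = -0.005

# The "10% increase" scenario is just the coefficient scaled and exponentiated.
odds_ratio_per_10 = np.exp(10 * beta_bsn)
print(round(odds_ratio_per_10, 3))  # ~0.951, i.e., ~5% lower odds of death
```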

This study doesn't even compare the relative levels of RN education. Let me repeat that: THIS STUDY DOESN'T EVEN MAKE THE COMPARISONS IT PURPORTS TO HAVE STUDIED. This model, and as a result this study, doesn't compare existing situations. Instead, it makes assumptions about potential situations compared to current situations.

Do you get this? The study wasn't designed to test real conditions. The study was designed to create hypothetical situations and comment on the validity of said models based on highly modified and incomplete data.

THIS STUDY SPECIFICALLY COMMENTS ONLY ON HYPOTHETICAL SITUATIONS. Study Disclaimer: ANY RELATIONSHIP TO REAL CONDITIONS IS ONLY IMPLIED BY THE AUTHORS.

Now, see if this isn't a key statement: "The associations of educational compositions, staffing, experience of nurses, and surgeon board certifications with patient outcomes were computed before and after controlling for patient characteristics and hospital characteristics." Indeed.

9. Direct Standardization Models.

Apparently, even after all the above manipulation, there were still 'clusters of data' that had to be 'standardized' using 'robust estimations'. The study does at least have the guts to admit that such 'standardizations' turn the final conclusion into an 'estimation'. Too bad it only makes that admission in the body of the study, and not in the abstract.
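For context, the 'robust estimation' in question appears to be Huber-White 'sandwich' standard errors clustered by hospital, which account for patients within one hospital not being independent observations. A sketch of the idea with synthetic data (not the authors' code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000

df = pd.DataFrame({
    "died_30d":    rng.binomial(1, 0.03, n),
    "bsn_pct":     rng.uniform(10, 70, n),
    "hospital_id": rng.integers(0, 100, n),  # patients cluster within hospitals
})

# Huber-White (sandwich) standard errors clustered by hospital: the point
# estimates are unchanged; only the uncertainty around them is adjusted.
model = smf.logit("died_30d ~ bsn_pct", data=df).fit(
    disp=0, cov_type="cluster", cov_kwds={"groups": df["hospital_id"]}
)
print(model.bse)
```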

10. Alternative Correlations

The study admits that fewer than 11% of hospitals in Pennsylvania in 1999 (the area and year of the study) had 50% or greater BSNs (excluding the VA hospital system, which was completely ignored by the study). And then the study cites co-factors that could unduly influence the study under these conditions: "Hospitals with higher percentages of BSN or masters prepared nurses tended to be larger and have post-graduate medical training programs, as well as high-tech facilities. These hospitals also had slightly less experienced nurses on average AND SIGNIFICANTLY LOWER MEAN WORKLOADS (emphasis mine). The strong associations between the educational composition of hospitals and other hospital characteristics, including workloads, makes clear the need to control for these latter characteristics in estimating the effects of nurse education on patient mortality."

Wow. Two key things from that statement: a direct acknowledgment that this 'study' is an 'estimation', and an acknowledgment that such an 'estimation' only occurred after 'the need' to heavily manipulate the data.

In fact, I think it much more likely that such "co-correlations" make any 'estimated' conclusions IMPOSSIBLE to verify.

11. Study Conclusions.

This is one of the study's least reported conclusions. See if you agree: "Nurses' years of experience were not found to be a significant predictor of mortality or failure to rescue in the full models." Re-read that and UNDERSTAND what it implies.

The authors admit that their "estimations" can only lead to an "implication" that increased education means better nurses. OK. I'll agree with that. But, because the same study 'factored out' experience, I think it is impossible to estimate how even a fraction of experience affects the conclusions of the study.

Indeed, in order to arrive at its conclusion, the authors must first dismiss the 'conventional wisdom' that experience IS a factor, as they did in the above statement. Without the above assumption, this whole body of work is worthless. If experience factors in, then the key question cannot be tied simply to education, BUT MUST BE TIED TO BOTH QUALITIES.

And so, the authors find themselves in a conundrum, in which they must first dismiss the importance of experience in order to highlight the importance of education. Amazingly enough, their study reached both conclusions: experience is meaningless to patient outcomes and THEREFORE education level is, by itself, a measurable standard.

The problem with that is, once experience is dismissed, the correlation between education, experience, and patient outcomes is NOT part of this study. Even if you COULD credibly claim that there is no correlation between experience and outcomes (a silly claim), once you add education level into the consideration, you create a new dynamic. By dismissing experience from the equation, the study also dismisses its own results, which NOW have the effect of ascribing the results and effects of a real-life system (education AND experience vs. outcomes) to a completely different and hypothetical system (education alone vs. outcomes).

In short, the claim that experience is not a factor and can be excluded from the study of education's impact on quality is the equivalent of stating that nature is not a factor and can be isolated from nurture in the study of human behavior. In truth, the concepts are much too intricately linked for bland assurances of non-bias in the elimination of part of either equation.

Also not taken into consideration are alternative educational pathways, such as non-BSN bachelor's-degreed RNs (including both accelerated-program graduates and 'second career' ADN nurses).

The study also fails to note that many BSNs are prior ADN graduates. While the BSN subset includes ADN graduates, the ADN subset ALMOST NEVER includes BSN graduates. This would obviously skew the data unless this characteristic were isolated in the data set. In fact, the data set isn't a pool of RNs but patient discharge records, and there is no way within this study to make that distinction.

Given the broad range of said experiences and educations within nursing, negating those experiences and educational pathways also serves the purpose of negating the validity of the study itself.

My conclusion:

Saying that education is a bigger factor than experience in ANYTHING is the same as saying that nurture is a bigger factor than nature in ANYTHING. The relationships are so intricately linked as to be inseparable. As a result, these types of arguments rise to the level of philosophy.

This study claims the ability to make such distinctions, using incomplete and highly manipulated (133 ways by its own admission) data and applying that data only to hypothetical situations.

This is not science; it's propaganda.

Simply put, this flawed and un-reproducible study is worthless as anything BUT propaganda. And that's the bottom line.

~faith,

Timothy.

Timothy,

> Simply put, this flawed and un-reproducible study is worthless. And that's the bottom line.

In other words, it is a crock of er, ah, stuffing! I came to that conclusion after reading it, and I think you and I are basically in agreement about the value of this "study."

I do not trust Linda Aiken farther than I can throw her (her study, that is).

I read this too and found it full of stuffing. You did a good job critiquing it.

Specializes in Specializes in L/D, newborn, GYN, LTC, Dialysis.

There is a lengthy and interesting thread about the Aiken study here at allnurses.com. Too lazy to do a search, but it's there if you are all interested. This one has been done before, believe me.

Specializes in Critical Care.

Saying that education is a bigger factor than experience in ANYTHING is the same as saying that nurture is a bigger factor than nature in ANYTHING. The relationships are so intricately linked as to be inseparable. As a result, these types of arguments rise to the level of philosophy.

But this study goes well beyond that; this study states that experience is NOT A FACTOR AT ALL in pt outcomes. It doesn't merely argue that education is a BIGGER factor; it argues that education is a factor TO THE EXCLUSION of experience.

The study makes the above claims using incomplete data (purposely excluded hospitals most likely to have the highest rates of BSN nurses) and highly manipulated data (133 ways by its own admission) and applying that data only to hypothetical as opposed to real-life situations.

This is not science; it's propaganda.

Simply put, this flawed and un-reproducible study is worthless as anything BUT propaganda. And that's the bottom line.

~faith,

Timothy.

Propaganda is the perfect word in my opinion too. Thanks for using it. Too bad Aiken isn't reading this. I would love to hear her defend her scientific research.

I realize this thread is a bit dated, but after reading it, my curiosity was piqued. So I pulled a copy of the Aiken study to see how the critique stacked up. Here's what I discovered:

Academic Laziness

Claiming that using data from previous studies is "academic laziness" is disingenuous. Using data from other research is commonplace. For example, you know that little survey the Feds do every 10 years - the census? How often does that information get used for purposes other than determining voter representation or re-districting? Your suggestion that such use of data is "lazy" would damn countless valid and meaningful studies to the trash heap.

You also misrepresent how the data was used. First of all, the data set you question was actually gathered by the authors for the study regarding staff ratios and patient outcomes. It was during that effort that they first noticed that there might be some linkage with education. Then, for this study, they collected additional information via survey. So, they started with a data set that only partially met the needs of the current study and then augmented it with the data specifically collected for the new effort. Yet again, a fairly standard research approach.

You also neglected to point out how the researchers took their new data and crossed it with other independent research data to verify that they had a representative sample. Sounds like scrupulous attention to detail and anything but lazy.

Discrimination Bias (Hospital Selection)

The authors also went to great lengths to detail which hospitals were included, which were not, and why any exclusion was made. None of the reasons for excluding a particular hospital had anything to do with not "disturbing the conclusions." VA hospitals were excluded because they don't report the same information as other hospitals; some small hospitals were excluded because their staff didn't respond to the survey in sufficient numbers.

Additionally, since the authors made all these facts known, it denotes at least some degree of openness to scrutiny. Now, it's certainly possible that some key piece of information resides in one or more of these cases. However, under the framework of the study, they were not able to include it. As is part-and-parcel with research, such questions are left for other researchers to delve into rather than try to shoehorn incomplete data into their study.

So, in short, the reason for excluding the data sets you mention had nothing whatsoever to do with excluding outliers that might skew the results. It had everything to do with ensuring data integrity of the information used in the study. And again, your claim that they didn't collect data pertinent to this study is patently false.

Degree Bias

I have to admit I have difficulty following your logic on this point. You start off by noting that the authors say that "conventional wisdom" is that experience is more important than education, then veer off on how they grouped educational levels (?). Seemed like you were mixing apples and oranges.

You then note how they "later verified this decision...(see item 4 below)" and that you'd like to see how that was verified. Fair question. However, they weren't talking about collapsing Diploma and Assoc. programs when they said this. They were specifically addressing the 4.3% of nurses who checked "other" for highest education and were not included in the study. They even discussed how they ran the analysis with this 4.3% "other" category folded into the BSN side of the equation and then into the Assoc./Diploma side, and it made no difference in the results either way.

When they said "there was no evidence that Assoc. or Diploma status affected outcomes..." they didn't cite a reference. I took that to mean that when they examined each as a standalone category in their analysis, neither showed a difference from the other - so they put them in the same category. Their methods for doing this is expanded on toward the end of the article.

As far as experience and its influence on the results, the authors in fact noted 4 variables that they felt would have an impact on the results: experience, education, doctor qualifications, and hospital technology. The big surprise (for me at least) was what they discovered regarding nurse experience (see below).

Lack of Substantiating Data

"It was later verified that this decision did not bias the result."

The authors were specifically referring to the portion of respondents' data that was tossed because the only educational level it specified was "other". They verified this by first including the data in the 2-year data set and then including it in the BSN set. In either case, it had no impact on the result.

""Previous empirical work demonstrated. . ." - um, exactly WHAT empirical work was that?"

I suspect that information was contained in the 36 references cited at the end of the article. Beyond that, their information is fully footnoted and appears valid. I guess if you actually pulled the references and/or provided specifics regarding exactly which claims you felt were unsubstantiated, we'd have a better understanding.

Data Manipulation

Each of the "manipulations" you mention below are explicitly discussed in the study. You've totally misrepresented what was done.

A. No, the "teaching status" of a hospital was used as a control. For example, if you had exactly the same staff ratios at two facilities, and the only difference was that one was a teaching facility, it might reasonably be concluded that the teaching faculty would have an impact on patient outcome. So instead of just ignoring this factor, the authors ensured that like institutional characteristics were compared. This eliminates the "the data didn't take into account..." arguments.

B. Same situation as above. If a hospital has Level 1 trauma capability and performs state-of-the-art procedures, would you want it compared to a 50-bed facility in a rural setting?

C. Wrong again. Their going-in assumption was that years of experience WOULD have an impact. However, after they ran several models, using variables like surgeon qualification, experience, and technology status as controls, experience turned out not to change either the predictive association of BSN staff ratios or the predictive nature of workload.

D. Not sure you are reading the same study at this point... The authors said that workloads DID have an impact. Would you want them to compare a hospital with an 8:1 patient-to-nurse ratio to one with a 4:1 ratio without any consideration? Yet again, this was used as a control in the study. What they did point out was that there were some minor deltas regarding ratio and outcome from their earlier study.

E. LOL! The "133" manipulations you point to were the number of different patient characteristics they used to identify specific patient factors, like age, sex, and transfer status. You know, so they wouldn't get stomped for comparing the outcomes of previously healthy 25-year-old women to 70-year-old men with a history of COPD.

Key Criteria Conjecture

I suggest that you take your own advice and re-read that passage. They noted that they used the ICD-9 codes to identify people with significant chronic pre-existing conditions. You know, so they wouldn't blindly lump the people with things like end-stage renal disease into the same grouping as someone being admitted for a rhinoplasty. I'd be suspicious if they DIDN'T account for these factors.

Regarding your question of motive regarding who sponsored the study - this is always a good question. However, dismissing the study out of hand simply because you don't like who backs it does not make a valid argument. Saying a study is invalid because of who did it is a logical fallacy, a.k.a. argumentum ad hominem.

Risk Adjustment.

Refer to the "133 manipulations" discussion above.

Logistic Regression Models

Way off the mark again. The regression models were used to identify what could be expected if the ratios BSN:ASN were higher. They did determine what was actually out there (which supported their conclusion) and then used the regression model to show the impact if the ratios were uniformly increased. Since you can't measure what doesn't currently exist, these models are used to predict what might occur if the current environment changed. If you wanted to attack the study on this point, then I suggest you provide a critique of the regression model they used as opposed to misrepresenting their use.

Direct Standardization Models.

This part of the study is explaining the statistical analysis tools used. Admittedly, I'm out of my depth regarding providing any insight to the use of this particular model (Huber-White standard errors). As I understand it, they are accounting for the "standard error" portion of their probability calculations.

Alternative Correlations

I'm not sure why you characterize the 11% number as an "admission." It's just a stated fact. The context in which this statistic was mentioned was in relation to what current nurse executives want for teaching institutions (70%) and what is estimated to be the national average for these institutions (51%). Additionally, they point out that execs at the community hospital level are looking for a ratio of 55%. They note that since Pennsylvania only reports a ratio of 11%, there's clearly a gap in that state from what execs are looking for.

As for the other factors you mention, they are identified so that they could be taken into consideration. They also note that they ran their calculations with "raw" data (as in, unadjusted for these factors). The raw data showed an even larger effect of having a higher ratio of BSNs on staff. So when you took away any advantage that a "rich" facility had over a "poor" one, and took away the disadvantage of having an older/sicker client base, there still was an advantage to having greater numbers of 4-year educated nurses on staff.

Study Conclusions.

With respect to education, the authors did not factor out years of experience from their models. Here's what they said:

"Furthermore, mean years of experience did not independently predict mortality or failure to rescue, nor did it alter the association between educational background or of staffing and either patient outcome. These findings suggest that the conventional wisdom that nurses' experience is more important than their educational preparation by be incorrect."

So the authors did no such thing as to dismiss education as a factor. In fact, they specifically looked at it and ran various models in an attempt to see how it affected patient outcome in relation to educational level and staff ratios. So it was not "factored out." It simply made no difference to the outcome of a patient's stay when FACTORED IN with education level and/or staff ratios. There is no "conundrum" in their statements. Period.

Regarding your comments on alternative educational pathways, you are correct in saying that they didn't take those factors into consideration... sort of. The question asked of the nurses in the survey was "what is the highest level of education". So, this would include all of these alternative study programs.

Conclusion:

The study's authors point out the limitations of their study. Namely, that it only looked at the data for one state. It certainly is within the realm of possibility that a national aggregate might reveal something different. They also point out that they only had a 50% response rate to their survey. However, they crossed their data with other State-sponsored data collections and there was a strong correlation suggesting that their data was a representative sample.

As for the OP's analysis, it's unfortunately off the mark. His use of the pejorative "manipulated" to describe how the authors characterized their data is telling. And since he misrepresents the data throughout the critique, the conclusions reached are suspect at best.

Whether the study will hold true for the nation as a whole is yet to be seen. However, it's compelling for both the profession as well as the population it serves.

Specializes in Critical Care.

OP'S RESPONSE:

"Claiming that using data from previous studies as “academic laziness” is disingenuous. "

I disagree that my claim is disingenuous. It IS academic laziness BECAUSE the authors knew that the data points from the first study could not meet the demands of the second study. Excluding VA hospitals (which hire a much higher percentage of BSNs) and small hospitals from this particular study means that any conclusions are based upon incomplete data.

How can you drop the highest-ratio employers of BSNs from a study that purports to examine the relationship between BSN education and quality?

The fact that the authors made it known that they were using incomplete data does not excuse the academic laziness. It does not make the data complete. And it DOES serve to discredit the results.

These data points were imperfect for this study. It may be commonplace in academic research to reuse data where possible, but it simply wasn't possible in this study because the original data didn't correspond to THIS study. Indeed, it left out CRUCIAL data that, instead of being accounted for, was simply deleted from consideration in this study.

When you delete likely first-standard-deviation data from a study, the study is simply incomplete; it is pure academic folly to make conclusions based on incomplete data.

There is no excuse for this omission. It is academic laziness. Instead of devising a new model to correspond to a new study, the authors co-opted the data from their last study and dismissed out of hand the grossly imperfect 'fit'.

"And again, your claim that they didn’t collect data pertinent to this study is patently false."

I disagree. They didn't collect data exactly on point to this study because it wasn't available under the data points they borrowed from their previous study. Rather than devise a method of collecting all data pertinent to this study, they merely eliminated crucial data as 'uncollectable'.

Degree Bias

"When they said “there was no evidence that Assoc. or Diploma status affected outcomes…” they didn’t cite a reference. I took that to mean that when they examined each as a standalone category in their analysis, neither showed a difference from the other – so they put them in the same category. Their methods for doing this is expanded on toward the end of the article."

But this is my point: in a study designed to show a difference between one set of degrees, BSN vs. lesser degrees, the authors made assumptions about another set of degrees, ADN vs. Diploma. So, the authors are guilty of the SAME bias they are trying to disprove. But the authors' bias IN THIS CASE is a subset of their study, and so affects the outcome of their study.

In other words, the study offers no results on the effects of BSN over ADN education because it didn't study it. It studied a combined ADN/Diploma group. Maybe ADN alone would have OUTPERFORMED BSN in their logistic regression model. Or maybe Diploma would have. We'll never know.

Lack of Substantiating Data

"It was later verified that this decision did not bias the result."

"I suspect that information was contained in the 36 references cited at the end of the article. Otherwise, their information is fully footnoted and appears otherwise valid. I guess if you actually pulled the references and/or provided specifics regarding exactly which claims you felt were unsubstantiated, we’d have a better understanding."

The study was full of comments like the above comment, without footnotes. If their verification is in their references, they surely didn't bother to point it out.

Data Manipulation

"Each of the “manipulations” you mention below are explicitly discussed in the study. You’ve totally misrepresented what was done."

I understand that each was mentioned explicitly; that is why I mentioned them. I don't think I've misrepresented what they've done.

They 'adjusted' (manipulation is not an inherently 'bad' word, as you suggest) the data ONE HUNDRED THIRTY-THREE ways. And still have the guts to say that, in doing so, they 'factored out' all bias. Wow. I think it's sheer hubris to say that original data can undergo that much adjustment and still be valid for ANY purpose.

I think it is entirely reasonable to stipulate that the authors' attempt to control 133 variables without bias is the equivalent of bailing out the Atlantic Ocean with a spoon. To state with ANY degree of certainty that they did this SUCCESSFULLY? Laughable.

Key Criteria Conjecture

"I suggest that you take your own advice and re-read that passage. They noted that they used the ICD-9 codes to identify people with significant chronic pre-existing conditions. You know, so they wouldn’t blindly lump the people with things like end-stage renal disease into the same grouping as someone being admitted for a rhinoplasty. I’d be suspicious if they DIDN’T account for these factors.

Regarding your question of motive regarding who sponsored the study – this is always a good question. However, dismissing the study out of hand simply because you don’t like who backs it does not make a valid argument. Saying a study is invalid because of who did it is a logical fallacy, a.k.a. argumentum ad hominem."

Simply put, they did not examine real-life 'failures to rescue'. They extrapolated, using ICD-9 codes, what COULD HAVE BEEN failures to rescue. And they did so using those same codes to eliminate 'co-morbidities'. But here you go: a huge pile of data, sorted by the authors, quote, "by their expertise".

At this point, you have the raw data. How this raw data makes it into the final package is the sole decision of the authors of the study. They fully admit that this is the point where data is excluded 'in their expertise'. So, it makes full sense at this point to ask who is sponsoring the study.

My analogy stands: if Republican pollsters get to decide which of their polling data goes into the final report, then "AMERICA LOVES PRESIDENT BUSH."

And if AACN-supported authors get to decide, 'in their expertise', how the data is crunched at this stage, I do not find it to be a logical fallacy to question their motives AT THIS STAGE, since this stage is totally, by their own admission, subjected to their opinions, and the results are totally dependent upon that subjectivity.

This is why REAL science uses double-blind studies and other controls designed to make a study OBJECTIVE rather than SUBJECTIVE. It is not a logical fallacy to question the reliability of a subjective study, such as this one, that favors the sponsors of that subjectivity.

And this discrimination, at this point in the study, BY THE AUTHORS, IN THEIR EXPERTISE, is where this study becomes subjective in nature. When subjectivity is in play, as it is here, it is completely logical to examine the nature of that subjectivity.

Risk Adjustment.

I stand by my argument that adjusting data in 133 ways perverts that data beyond any objective measurement. It makes the findings of the study dubious and suspect.

Logistic Regression Models

"Way off the mark again. The regression models were used to identify what could be expected if the ratios BSN:ASN were higher. They did determine what was actually out there (which supported their conclusion) and then used the regression model to show the impact if the ratios were uniformly increased. Since you can’t measure what doesn’t currently exist, these models are used to predict what might occur if the current environment changed. If you wanted to attack the study on this point, then I suggest you provide a critique of the regression model they used as opposed to misrepresenting their use. "

Absolutely wrong. They COULD INDEED HAVE measured against what exists. There was no inherent need to use hypothetical models. They didn't purport to measure ALL-BSN facilities, but what a 10% increase in BSNs at a facility could do. They had data on the percentage of BSNs to ADNs at each facility they studied. Assuming that their 133 variables were factored properly (which THEY DID ASSUME), they could have measured facilities with higher proportions of BSNs directly against those with lower proportions. Indeed, that's what the study PURPORTS TO HAVE DONE in its abstract.

This is where exclusion of the VA hospital system comes into play. That mostly BSN system could have been the EXACT control group that they needed.

But in point of fact, they did not study working nurses at all. They studied mail-in responses and state-required hospital reporting statistics. And they adjusted that data 133 ways. And then they used a 'logistic regression model' to see how a 10% increase would THEORETICALLY affect a hospital.

It is a study of the real vs. the hypothetical. And that hypothetical is the subject of much data 'adjustment'. My critique is that ANY logistic regression model at this point in their work is mere fantasy.

Direct Standardization Models.

"This part of the study is explaining the statistical analysis tools used. Admittedly, I’m out of my depth regarding providing any insight to the use of this particular model (Huber-White standard errors). As I understand it, they are accounting for the “standard error” portion of their probability calculations."

But this just points to the fact that after adjusting the data 133 ways AND running it through their 'logistic regression model', their final results still needed some 'tweaking'.

Alternative Correlations

I pointed out this section because it is an admission from the authors that their results were their 'estimations' and not objective data. And then I pointed out again the folly of adjusting for 'co-factors that might unduly influence the study under these situations'.

If you have to adjust the data 133 times, then it is simply impossible to adjust, without bias, for 'co-factors that might unduly influence the study under these situations.' I stipulate that those co-factors DID INDEED unduly influence the study, as shown by the very excessive efforts to 'adjust' for them.

Study Conclusions.

"With respect to education, the authors did not factor out years of experience from their models. Here’s what they said:

“Furthermore, mean years of experience did not independently predict mortality or failure to rescue, nor did it alter the association between educational background or of staffing and either patient outcome. These findings suggest that the conventional wisdom that nurses’ experience is more important than their educational preparation may be incorrect.”

That sounds like an attempt to factor out experience to me. It's plain as day.

"So the authors did no such thing as to dismiss education as a factor. In fact, they specifically looked at it and ran various models in an attempt to see how it affected patient outcome in relation to educational level and staff ratios. So it was not “factored out.” It simply made no difference to the outcome of a patient’s stay when FACTORED IN with education level and/or staff ratios. There is no “conundrum” in their statements. Period."

I said they dismissed experience as a factor, not education. Their own statement above is designed to suggest that education can be measured independent of experience - a NECESSITY FOR MAKING THEIR CLAIMS, but AN ABSURDITY IN REALITY. In fact, experience IS education.

"Regarding your comments on alternative educational pathways, you are correct in saying that they didn’t take those factors into consideration... sort of. The question asked of the nurses in the survey was “what is the highest level of education”. So, this would include all of these alternative study programs."

I am correct.

Conclusion:

"The study’s authors point out the limitations of their study."

And my point is that the limitations of their study make their study useless as anything but propaganda.

Bottom line, from a scientific and academic point of view: this study is fundamentally flawed.

~faith,

Timothy, ADN, BA-Biology, CCRN.

"The study's authors point out the limitations of their study."

And my point is that the limitations of their study makes their study useless as anything but propaganda.

~faith,

Timothy, ADN, BA-Biology, CCRN.

Just curious where you get your ability to critique research, since your credentials don't show the training to do so.

And since all studies have limitations I guess they all fit into the propaganda category.

Randy (participant in both sides of clinical trials)

Specializes in Critical Care.

I am a research-trained biologist from Texas A&M University. And I have a lick of common sense.

This study fits much more into the category of propaganda than most. The limitations of this study - from the improper 'fit' of its data set, to its submission to the EXACT bias it is trying to disprove, to the excessive adjustment of data to 'correct' undue influences that it concedes threaten the study at every turn, to its wild and unverified assertion that experience is a negligible factor in 'failure to rescue', to the hypothetical nature of its conclusions when it purported to have the data necessary to perform direct comparisons - all lead to the undeniable conclusion that this study is fundamentally flawed.

And if you're going to suggest that nurses need some set of super 'credentials' in order to critically read their 'body of knowledge', that cuts both ways. Not only does that not allow 'critiques', it doesn't allow for the 'acceptance' of such research, either.

~faith,

Timothy.

So, because they don't have a 100% data sample, the whole study is flawed? Are you serious? Are you suggesting that when a pharmaceutical company tests a product, its results are worthless because they didn't run the test on all 290,000,000 individual citizens of the country? The number of VA hospitals that could not provide the requested data for the study totaled a whopping 6 hospitals... out of a possible 210! The suggestion that this study is worthless because they don't have a 100% data sample is just patently absurd.

Instead of devising a new model to correspond to a new study...

Are you confusing sample size with data points? Please explain what type of data collection points were omitted from the study. If you say, "They omitted VA hospitals," save your breath. That's a function of who was sampled and not of what was gathered from the sample.

They didn't collect data exactly on point to this study because it wasn't available under the data points they borrowed from their previous study.

Umm... so what? That's why they collected "unique data obtained from surveys of hospital nurses" (from the first paragraph of the "Methods" section of the study). They started with a data set from their previous study and added information from a survey created for this study. If they can't use data from another study, and for whatever reason you've decided to exclude the current survey data, by what means do you propose they collect new data?

Degree Bias

...in a study designed to show a difference between one set of degrees, BSN vs. lesser degrees, the authors made assumptions about another set of degrees, ADN vs. Diploma.

Stop. You are wrong. They didn't make any assumption. They actually conducted a data analysis of each of those degree types and found that neither one offered an advantage in ability to predict patient outcome. Here's the reference: "When proportions of RNs with hospital diplomas and associate degrees as their highest educational credentials were examined separately, the particular type of education credential for nurses with less than a bachelor's degree was not a factor in patient outcomes.[emphasis added]" (found in the second paragraph of the "comments" section of the study)

In other words, the study offers no results on the effects of BSN over ADN education because it didn't study it.

The above excerpt directly contradicts your claim. There's no other way to say it: your conclusion on this point is just plain wrong.

Lack of Substantiating Data

The study was full of comments like the above comment, without footnotes.

You make this claim and then fail to provide any example. How can any response to your critique on this point be made without any means to examine your claim?

Data Manipulation

They 'adjusted' (manipulation is not an inherently 'bad' word, as you suggest) the data ONE HUNDRED THIRTY-THREE ways.

Wrong. They identified each data item with 133 characteristics. It's like saying "Fred" is a data item and then further identifying "Fred" as male. Does that change "Fred"? Is it a manipulation of "Fred"? Of course, the answer is "No". However, what it does do is allow the researcher to account for any added risk that a male Fred might have when comparing him to a female. If you factor out the added risk Fred has just for being male, then any difference left has to be related to another factor.

Granted, they didn't specifically identify which of the 133 factors were used, or in what combinations, for making the risk adjustments. They do, however, point out that they used the same approach as the one used by "Silber and colleagues" and cite the references. I suppose that if you wanted to argue that this wasn't proper, you could pull those studies and tease apart the methodology. What you can't reasonably do is simply toss it out because you think it might be too complicated or troublesome to figure out.

Key Criteria Conjecture

Simply put, they did not examine real-life 'failures to rescue'. They extrapolated, using ICD-9 codes, what COULD HAVE BEEN failures to rescue.

They did no such thing! They determined that information by crossing hospital records with PA State death records!

As for the ICD-9 data, they used those definitions to identify the difference between a complication that resulted from the patient's stay and pre-existing co-morbidities. Moreover, the methods they used were grounded in empirical work in the aforementioned "Silber" studies (as noted in the references in the study).

They fully admit that this is the point where data is excluded 'in their expertise'. So, it makes full sense at this point to ask who is sponsoring the study.

Taken completely out of context. The only data excluded was the 4.3% of respondents who checked "other" for their highest level of education (i.e., it couldn't be classified as Diploma, ASN, BSN, Masters, or PhD). And it wasn't just arbitrarily tossed. They included the data as part of the Diploma/ASN data set to see if it altered the outcome (which it didn't). Then they re-ran the calculations with the 4.3% added to the BSN data set to see if it changed the outcome (which it didn't). So, since it made no discernible difference in either outcome, they excluded the data.
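That procedure is a standard sensitivity analysis: re-run the model under each possible assignment of the ambiguous respondents and check whether the estimate moves. A hypothetical sketch of the pattern (synthetic data, invented names):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 3000
df = pd.DataFrame({
    "died_30d": rng.binomial(1, 0.03, n),
    "bsn":      rng.binomial(1, 0.4, n),    # 1 = BSN or higher
    "other":    rng.binomial(1, 0.043, n),  # the ambiguous "other" respondents
})

# Re-fit under each assignment of the ambiguous group and compare estimates;
# if the coefficient barely moves, the exclusion made no material difference.
for label, side in [("other -> sub-BSN", 0), ("other -> BSN", 1)]:
    trial = df.copy()
    trial.loc[trial["other"] == 1, "bsn"] = side
    fit = smf.logit("died_30d ~ bsn", data=trial).fit(disp=0)
    print(label, round(float(fit.params["bsn"]), 4))
```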

...AT THIS STAGE, since this stage is totally, by their own admission, subjected to their opinions, and the results are totally dependent upon that subjectivity.

Since they didn't just "decide" in some unsubstantiated manner as you seem to think, your conclusion that the study must be biased just falls apart.

Risk Adjustment; Logistic Regression Models; Direct Standardization Models; Alternative Correlations

All of your comments in these sections turned on the mistaken idea that they changed the data 133 times. Since they didn't, all of these arguments are nonsensical.

Study Conclusions.

"So the authors did no such thing as to dismiss education as a factor. In fact, they specifically looked at it and ran various models in an attempt to see how it affected patient outcome in relation to educational level and staff ratios. So it was not "factored out." It simply made no difference to the outcome of a patient's stay when FACTORED IN with education level and/or staff ratios. There is no "conundrum" in their statements. Period."

I said they dismissed experience as a factor, not education.

Whoops, my bad. It should have read, "... as to dismiss experience...".

Clearly, they wouldn't look at education in relation to education. My point remains. They specifically factored it in. It didn't make a difference in the results. That is not the same as factoring it out.

In fact, experience IS education.

Rofl. So why go to school at all? Just show up on the job site and in a few years, you'll have an education.

"Regarding your comments on alternative educational pathways, ..."

I am correct.

Wow. I didn't realize that debate was so easy, else I would have simply declared, "I am correct" at the beginning and saved everyone the trouble of reading through this post.

Conclusion:

And my point is that the limitations of their study make their study useless as anything but propaganda.

You are certainly entitled to your opinion. My goal was to provide a few points to consider regarding your "critique" for the rest of the readers. Let 'em arrive at their own opinion.
