This is EXTREMELY long but worthwhile to read if the topic interests you.
Linda Aiken et al., in 2003, released through JAMA a landmark study, "Educational Levels of Hospital Nurses and Surgical Patient Mortality."
The study is oft cited and contends that the evidence shows that higher rates of BSN education at the bedside directly translate to improved patient outcomes.
This is my critique:
Why the study, "Educational Levels of Hospital Nurses and Surgical Patient Mortality" is flawed.
1. Academic Laziness
The original data pool was used for an earlier study about staffing levels and mortality GENERALLY. That data was simply copied onto the template for this study. And it wasn't just copied; it was copied with the full assurance of the authors that the results of the first study that used this data could be 'factored out' of this subsequent study.
2. Discrimination Bias (Hospital Selection)
Before analyzing the data, the authors first decided that it was necessary to 'exclude' hospitals that didn't fit their data set. Some were excluded for valid reasons (they didn't report to the data set), but however valid, the exclusion ITSELF taints the data. THIS IS ESPECIALLY TRUE SINCE THE EXCLUSION COVERS ALL VA HOSPITALS - a known source of high BSN recruitment. The very hospitals that might yield some useful data on the subject were ELIMINATED from the study! Other hospitals were excluded because the data they generated didn't meet the authors' needs. In other words, INCLUSION of that data would have disturbed the conclusions of the study.
So the authors warrant that exclusion of some data is appropriate. OK, I can concede that point, as I understand that outlying data (large standard-deviation multiples) can skew the majority of the data. But excluding large amounts of data that are quite possibly within a single standard deviation of what is being studied, on the basis that such data wasn't available, serves to undermine the whole study. It is a frank admission that the data itself is incomplete and, therefore, suspect.
This compounds the academic laziness mentioned above. The data set was copied from another study with the full understanding that it didn't meet the needs of this study, AND COULD NOT MEET THE NEEDS OF THIS STUDY, because it excluded the hospitals MOST LIKELY to represent a significant sample for this study. Rather than develop data 'pertinent' to THIS study, that academic laziness now calls for this missing, and possibly highly relevant, data to simply be excluded from consideration.
3. Degree Bias
The authors state in the study: "Conventional wisdom is that nurses' experience is more important than their educational levels." It is this 'conventional wisdom' that the study aims to examine. But how does it do so? By buying into the exact same conventional wisdom: "Because there is no evidence that the relative proportions of nurses holding diplomas and associate degrees affect the patient outcomes studied, these two categories of nurses were collapsed into a single category."
HOLD ON. In a study about how degrees affect patient outcomes, an essential tenet of the study is to disregard some of the degrees held? After such manipulation, how can you say with a straight face that a study that disregards the relationship between degrees can reach a conclusion REGARDING the relationship between degrees?
4. Lack of Substantiating Data
"It was later verified that this decision did not bias the result."
This statement, or others like it, appear throughout this 'study' without mention of the methods used to 'verify'.
"Previous empirical work demonstrated. . ." - um, exactly WHAT empirical work was that?
In fact, the study makes many claims and manipulates the data in many ways, while insisting that you trust its 'independent verification' that none of this biased the results. Of course, you are never given access to said independent verification.
You have to love the 'self-affirming' validity of it all.
5. Data Manipulation
A. The data was 'manipulated' to grant varying degrees of credibility depending upon whether it was received by a 'teaching' hospital vs. a 'non'-teaching hospital.
B. The data was 'manipulated' to grant varying degrees of credibility to hospitals that are more 'technological' (e.g. have transplant services) as opposed to less.
C. "An important potential confounding variable to both clinical judgment and education was the mean number of years of experience working as an RN": telling comment, but never fear, the data was 'manipulated' to take this into account.
D. Nursing workloads might affect patient outcomes. (Indeed, THIS was the previous study that this study's data set was copied from.) But, in this case, the data was 'manipulated' to take those workloads into account.
E. "Estimated and controlled for the risk of having a board certified Surgeon instead of a non-board certified Surgeon." The use of 2 'dummy variables' comparing MD licenses to general vs specialty board certification was "a reasonable way for controlling surgeon qualifications in our models."
In fact the authors admit to manipulating the data 133 ways! But all of these 'manipulations' were later 'verified' to have produced no bias.
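For those unfamiliar with 'dummy variables': here's a minimal sketch of what that kind of coding looks like in practice. This is my own toy example (the names and categories are invented), NOT anything from the study itself:
[code]
import pandas as pd

# Toy illustration of two-dummy-variable coding like the surgeon
# certification adjustment quoted above. Names/categories invented.
surgeons = pd.DataFrame({
    "surgeon_id": [1, 2, 3, 4],
    "certification": pd.Categorical(
        ["none", "general_board", "specialty_board", "general_board"],
        categories=["none", "general_board", "specialty_board"],
    ),
})

# Two 0/1 indicator columns; "none" (a plain MD license) becomes the
# implicit baseline that the board-certified categories are compared to.
dummies = pd.get_dummies(surgeons["certification"], drop_first=True).astype(int)
print(surgeons.join(dummies))
[/code]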
6. Key Criteria Conjecture
The study's two key criteria: deaths within 30 days of hospital admission, and deaths within 30 days of complications due to 'failure to rescue'. But how were these criteria established?
In the first case, they were established by comparing the data set to vital statistics records (death records). I doubt they accurately compared 235,000 individual patients (data points) against another data set (death records) that was probably many times its size, but OK - I'll buy this for the moment.
In the second case, however, 'failure to rescue' was defined - NOT BY EXAMINING ACTUAL CASES OF FAILURE TO RESCUE - but by noting changes in ICD-9 secondary codes from admission to discharge. An assumption is made that a changed code meant that a 'failure to rescue' had occurred. What?!
RE-READ THAT LAST! By making dubious assumptions on data sets (hospital reporting statistics) - this study conjectures how this translates to 'failure to rescue' and then makes conclusions based on what this 'failure to rescue' might mean! ALL BY ITSELF, THIS NEGATES THE ENTIRE STUDY.
But, it was 'verified' to not bias the study results. How was this part 'verified'? Well, you're gonna love this: "expert consensus as well as empirical evidence to distinguish complications from pre-existing co-morbidities."
In other words, the experts (the study authors) know which data is valid for purposes of inclusion into the study - and which data isn't. The 'experts' consensus is the key element that ensures non-bias.
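To make the mechanics concrete, here is roughly the kind of rule being described - my own sketch, with invented example codes and an invented exclusion list, NOT the authors' actual algorithm:
[code]
# Sketch of the code-comparison rule criticized above: a secondary
# ICD-9 code that appears at discharge but not at admission is treated
# as a "complication" unless an expert list says it was pre-existing.
# The codes and the exclusion list here are placeholders, not the study's.
PREEXISTING = {"250.00"}  # the 'expert consensus' co-morbidity list

def failure_to_rescue(admit_codes, discharge_codes, died):
    new_codes = set(discharge_codes) - set(admit_codes)
    complication = any(c not in PREEXISTING for c in new_codes)
    return died and complication

# A death with any new, non-excluded secondary code gets counted:
print(failure_to_rescue({"410.90"}, {"410.90", "038.9"}, died=True))
[/code]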
There are no 'double blind' studies. No sample populations of RNs. The criteria for inclusion of 'data' are based solely on the 'consensus' of the 'experts' creating the study. And these 'experts' are backed by the AACN (American Association of Colleges of Nursing) - an organization committed to BSN entry, and one that maintains, on its website, a valiant defense of this study:
http://www.aacn.nche.edu/Media/TalkingPoints2.htm
No, no possibility of bias here.
Let me ask you this: if you knew of a study conducted by Republican pollsters - where they alone determined whose answers were valid - would you trust a result that brags that 'Most Americans Love President Bush'? But here's the question I really want to ask: WHY wouldn't you trust such a result?
7. Risk Adjustment
Still trust this study? Try this one: "Patient outcomes were risk-adjusted by including 133 variables in our models, including age, sex, whether an admission was a transfer from another hospital, whether it was an emergency admission, a series of 48 variables including surgery type, dummy variables including the presence of 28 chronic, pre-existing conditions as classified by ICD-9 codes, and interaction terms chosen on the basis of their ability to predict mortality and failure to rescue in the current data set."
So the data was manipulated 133 ways, excluding some data. But, and this is key: there are SO VERY MANY variables that could affect patient outcomes that you have to adjust for EVERYTHING except what you're looking to find. Right? This is not only what the authors contend; they contend that they SUCCESSFULLY adjusted the data, 133 different ways, for just this purpose, and completely without bias. Amazing.
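For reference, this is roughly what 'risk adjustment' by regression looks like mechanically - a toy sketch on random, made-up data (nothing here reproduces the study):
[code]
import numpy as np
import statsmodels.api as sm

# Toy risk-adjusted logistic regression on synthetic data: mortality
# modeled on an education variable plus a couple of adjustment
# covariates standing in for the study's 133. Illustrative only.
rng = np.random.default_rng(0)
n = 1000
pct_bsn = rng.uniform(0, 1, n)        # hospital BSN share (invented)
age = rng.normal(60, 15, n)           # patient covariate
emergency = rng.integers(0, 2, n)     # admission-type covariate
logit = -3 + 0.02 * (age - 60) + 0.5 * emergency - 0.8 * pct_bsn
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([pct_bsn, age, emergency]))
fit = sm.Logit(died, X).fit(disp=False)
print(fit.params)  # the pct_bsn coefficient, "adjusted" for the rest
[/code]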
8. Logistic Regression Models
So, after the study took in all this manipulated 'data', it compared hospitals with more BSN RNs to those with fewer, and reached a conclusion. Right? Wrong.
It took the data and ran a 'logistic regression model' as to what might happen in a given hospital "if there were a 10% increase in BSN RNs."
This study doesn't even compare the relative levels of RN education. Let me repeat that: THIS STUDY DOESN'T EVEN MAKE THE COMPARISONS IT PURPORTS TO HAVE STUDIED. This model - and, as a result, this study - doesn't compare existing situations. Instead, it makes assumptions about potential situations compared to current situations.
Do you get this: the study wasn't designed to test real conditions. The study was designed to create hypothetical situations and comment on the validity of said models based on highly modified and incomplete data.
THIS STUDY SPECIFICALLY COMMENTS ONLY ON HYPOTHETICAL SITUATIONS. Study Disclaimer: ANY RELATIONSHIP TO REAL CONDITIONS IS ONLY IMPLIED BY THE AUTHORS.
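And here, mechanically, is what that kind of 'what if' prediction amounts to - again, my own toy sketch on synthetic data, not the authors' model:
[code]
import numpy as np
import statsmodels.api as sm

# Toy version of the "10% more BSNs" counterfactual: fit a model on
# synthetic data, predict at the observed values, then predict again
# with ONLY the BSN share shifted upward. Illustrative only.
rng = np.random.default_rng(1)
pct_bsn = rng.uniform(0, 1, 5000)
died = rng.binomial(1, 1 / (1 + np.exp(3 + 0.8 * pct_bsn)))

fit = sm.Logit(died, sm.add_constant(pct_bsn)).fit(disp=False)

baseline = fit.predict(sm.add_constant(pct_bsn)).mean()
shifted = fit.predict(sm.add_constant(pct_bsn + 0.10)).mean()
# Note: the model happily extrapolates, even past a 100% BSN share.
print(f"predicted mortality: {baseline:.4f} -> {shifted:.4f}")
[/code]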
Now, see if this isn't a key statement: "The associations of educational compositions, staffing, experience of nurses, and surgeon board certifications with patient outcomes were computed before and after controlling for patient characteristics and hospital characteristics." Indeed.
9. Direct Standardization Models
Apparently, even after all the above manipulation, there were still 'clusters of data' that had to be 'standardized' using 'robust estimations'. The study at least has the guts to admit that such 'standardizations' turn the final conclusion into an 'estimation'. Too bad it only makes that admission in the body of the study, and not in the abstract.
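For what it's worth, the standard technique behind that 'clusters of data' language is cluster-robust estimation: patients nested within the same hospital aren't independent observations. A toy sketch (synthetic data, and my assumption about which technique they mean):
[code]
import numpy as np
import statsmodels.api as sm

# Toy cluster-robust ("robust estimation") example: patients nested
# within hospitals share unmeasured hospital-level influences, so the
# standard errors are clustered by hospital. Synthetic data only.
rng = np.random.default_rng(2)
n_hosp, per_hosp = 30, 50
hospital = np.repeat(np.arange(n_hosp), per_hosp)
hosp_effect = rng.normal(0, 0.5, n_hosp)[hospital]  # shared within hospital
x = rng.normal(0, 1, n_hosp * per_hosp)
y = 1.0 + 0.3 * x + hosp_effect + rng.normal(0, 1, n_hosp * per_hosp)

fit = sm.OLS(y, sm.add_constant(x)).fit(
    cov_type="cluster", cov_kwds={"groups": hospital}
)
print(fit.bse)  # standard errors that account for the clustering
[/code]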
10. Alternative Correlations
The study admits that fewer than 11% of hospitals in Pennsylvania in 1999 (the area/year of the study) had 50% or more BSNs (excluding the VA hospital system, which was completely ignored by the study). And then the study cites co-factors that could unduly influence it under these conditions: "Hospitals with higher percentages of BSN or masters prepared nurses tended to be larger and have post-graduate medical training programs, as well as high-tech facilities. These hospitals also had slightly less experienced nurses on average AND SIGNIFICANTLY LOWER MEAN WORKLOADS (emphasis mine). The strong associations between the educational composition of hospitals and other hospital characteristics, including workloads, makes clear the need to control for these latter characteristics in estimating the effects of nurse education on patient mortality."
Wow. Two key things from that statement: a direct acknowledgment that this 'study' is an 'estimation' and an acknowledgment that such an 'estimation' only occurred after 'the need' to highly manipulate the data.
In fact, I think it much more likely that such "co-correlations" make any 'estimated' conclusions IMPOSSIBLE to verify.
11. Study Conclusions
This is one of the study's least reported conclusions. See if you agree: "Nurses' years of experience were not found to be a significant predictor of mortality or failure to rescue in the full models." Re-read that and UNDERSTAND the implications of what it means.
The authors admit that their "estimations" can only lead to an "implication" that increased education means better nurses. OK. I'll agree with that. But, because the same study 'factored out' experience, I think it is impossible to estimate how even a fraction of experience affects the conclusions of the study.
Indeed, in order to arrive at its conclusion, the authors must first dismiss the 'conventional wisdom' that experience IS a factor, as they did, in the above statement. Without the above assumption, this whole body of work is worthless. If experience factors in, then the key question cannot be tied simply to education, BUT MUST BE TIED TO BOTH QUALITIES.
And so, the authors find themselves in a conundrum, in which they must first dismiss the importance of experience in order to highlight the importance of education. Amazingly enough, their study reached both conclusions: experience is meaningless to patient outcomes and THEREFORE education level is, by itself, a measurable standard.
The problem with that is, once experience is dismissed, the correlation between education, experience, and patient outcomes is NOT part of this study. Even if you COULD credibly claim that there is no correlation between experience and outcomes (a silly claim), once you add education level into the consideration, you create a new dynamic. By dismissing experience from the equation, the study also dismisses its own results, which NOW have the effect of ascribing the results and effects of a real-life system (education AND experience vs. outcomes) to a completely different and hypothetical system (education alone vs. outcomes).
In short, the claim that experience is not a factor and can be excluded from the study of education's impact on quality is the equivalent of stating that nature is not a factor and can be isolated from nurture in the study of human behavior. In truth, the concepts are much too intricately linked for bland assurances of non-bias in the elimination of part of either equation.
Also not taken into consideration are alternative educational pathways, such as non-BSN bachelor's-degreed RNs (to include both accelerated programs and 'second career' ADN nurses).
The study also fails to note that many BSNs are prior ADN students. While the subset of BSNs includes ADN graduates, the subset of ADN graduates ALMOST NEVER includes BSN graduates. This would obviously skew the data unless this characteristic were isolated from the data set. In fact, the data set isn't a pool of RNs but patient discharge records, and there is no way within this study to make the distinction.
Given the broad range of said experiences and educations within nursing, negating those experiences and educational pathways also serves the purpose of negating the validity of the study itself.
My conclusion:
Saying that education is a bigger factor than experience in ANYTHING is the same as saying that nurture is a bigger factor than nature in ANYTHING. The relationships are so intricately linked as to be inseparable. As a result, these types of arguments rise to the level of philosophy.
This study claims the ability to make such distinctions, using incomplete and highly manipulated (133 ways by its own admission) data and applying that data only to hypothetical situations.
This is not science; it's propaganda.
Simply put, this flawed and irreproducible study is worthless as anything BUT propaganda. And that's the bottom line.
~faith,
Timothy.
[quote]... a barrage of paperwork on useless concepts that NEVER get used in day to day life.[/quote]
Another point on paperwork. My wife, a teacher at the American International School in Dhaka, tells me that students come back after a few years of college and tell the teachers that having to do the senior project was about the best thing that ever happened to them. They work like heck doing these projects, and I think it's college-level work. They actually have to go out and interview people and organizations, and their work and presentation are thought so valuable that you will find crowded auditoriums full of non-governmental organizations, political figures in this country, and even the US ambassador, there to listen and ask them questions. They have been out in the field doing this "useless" project, yet people in powerful positions consider it very valuable.
My father has often told me that this whole business of graduating from school and then having to pay someone to sit for the NCLEX so that the State can grant you a 'professional' license - then having to go to a job and undergo orientation/training anyway - makes no sense. He says that the whole licensing process as it stands today is BS - just another way for the State and/or the 'professional' bodies to make more money by selling the bunkum that some license earned by taking a particular exam is required, and that this is somehow more necessary than that initial orientation/training. Is he mistaken? Is there some grain of truth to this?
Thanks
He's only partly right. A license/permit is a way to show that you have passed the minimum standards in order to protect the public. Compare it with the country I'm in. The medical labs here are a joke in most cases. Labs are set up and they do not even seek any kind of accreditation. They already have lab slips filled out with results and all they do is put your name on it. I can get any results I want for a dollar and not even get stuck. There is less chance of this happening in the USA.
It's called "education" and since you're a nursing student I hope you catch on soon.
You obviously know nothing about me to be making such off-the-cuff remarks. Ad hominem attacks are truly a waste of time and energy.
I subscribe to the KISS principle. If we spent more time communicating and searching out answers on our own and not ascribing to the educratic lemming philosophy, the world, IMHO, would be a better place. I prefer to be a well-informed yet independent thinker. That IS the basis for critical thinking, is it not?
IMBC, RN
[quote]You obviously know nothing about me to be making such off-the-cuff remarks. Ad hominem attacks are truly a waste of time and energy. I subscribe to the KISS principle. If we spent more time communicating and searching out answers on our own and not ascribing to the educratic lemming philosophy, the world, IMHO, would be a better place. I prefer to be a well-informed yet independent thinker. That IS the basis for critical thinking, is it not? - IMBC, RN[/quote]
Reality does hit hard sometimes. However, it's not an "off the cuff" remark but one born of experience and education.
In your second paragraph you state exactly what you will learn in the educational process, yet you do not see it...at least yet.
Zash - it seems to me that we're talking past each other. To me, it appears due to using the same terminology in different ways.
For example, you note that 20% of the 210 hospitals were not included. Additionally, you deem the value of that 20% to be so crucial that any statistical analysis of the data from the other 80% is flawed. Yes?
From my experience, when this sort of condition results in a sample that doesn't represent the actual target population, it's called "sample bias". From your post, you seemed to be using the term "bias" in the political sense, not the statistical. If that was your intent, my apologies for not understanding your meaning.
I would, however, disagree that the absence of this 20% of the population from the sample completely taints the result. For me, 200k patients, a sample size of nurses at 10k, and 168 hospitals is representative enough for relevance. Could the VA hospitals and the other 36 hospitals change the data? Sure. Could I be convinced that the omission creates a sample bias in this study? Sure. Would I change my mind based only on your assertion that it holds that degree of importance for no other reason than what amounts to a "because"? Nope. I'd need something a bit more empirical first.
I also don't agree that the authors' omission was a result of some nefarious intent. Since a good portion of their data came from state sources, the VA info wasn't available. Could they come up with a survey to submit to these hospitals? Sure. But it seems to me that a prime motivator for using state-gathered data would be cost (or the lack of it). Paying the feds to dedicate the resources needed to collect such data ain't cheap. And that's assuming that the feds even keep that sort of data in any form resembling what the state uses.
I also don't see omitting the other 36 hospitals' responses as an issue. If they don't provide the data, or if the data they provide doesn't conform to the guidelines, it just isn't usable. I don't know what you call that sort of issue, but from my experience, that violates data integrity.
When you refer to these issues, you use terms like "lazy", "purposely excluded", etc., indicating your suspicions of (contempt for?) the authors' motives. For me, I don't see anything suspicious. For the VA, they couldn't obtain the info. For the other civilian hospitals, they didn't provide the info. I just don't see how that's the fault of the researchers.
Then there's the data. Again, I think we are having a conflict over how we are using terms. If my connotations were wrong, my apologies. For me, there's the sample, and then there's the data collected from the sample. I also tend to think in terms probably more appropriate for database work than statistics. Be that as it may... I see data as composed of a defined set of elements. For example, each patient has an associated number of elements that are used to describe him/her, gathered from one state database. Each hospital datum point was fleshed out with elements taken from two other state sources. Each nurse was described by elements obtained from a survey created by the authors. In my parlance, this defines the methods of data collection.
It looks to me that they've collected the information needed to run whatever algorithm they need to produce a result for each hospital, patient, and nurse required to determine a ratio of 2 year to 4 year degrees at each facility. It also appears that they've pulled the necessary data to differentiate the patients that died as a result of illness or injury outside the control of the facility. I can't tell from your responses whether you have an issue with what I'm calling their data collection methods, if your issue is rooted in what you construe as sampling bias, or both. On the one hand, you say they "augmented" data - which looks to me like valid data collection method. Then you immediately bring up the missing 20% of hospitals - which, as discussed above, falls in the realm of sampling.
You also spend a good deal of time regarding the dreaded "133" number. For me, it seems the authors explained what these 133 items are fairly clearly. From what I interpret from your posts, it sounds like you believe this number represents the number of adjustments made to any single data item. I look at it and see 48 elements to describe various surgical procedures, 28 elements for describing chronic conditions, and an undefined number of elements to describe the individual patient. These are descriptors. They don't modify anything. And they make up over half of the 133 elements. Since I'm certain this data was put into a database, you'd need these descriptors just to facilitate the selection of data for specific database queries. The resulting query results would identify the specific elements to plug into whatever algorithm they used/designed for doing things like "adjust for risk". So by definition, the number of adjustments made has to be much less than the total number. So there's no way they twiddled each data point 133 times. And even if I'm completely wrong, the only reason you give as to why this is bad amounts to "because it's too complicated." For me, what you find too complicated doesn't qualify as a reason to toss the approach.
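To illustrate the descriptor-vs-adjustment distinction with a toy example (mine, with invented columns and values - not their actual database):
[code]
import pandas as pd

# The 133 items as DESCRIPTORS: columns that describe each record and
# let you select/stratify, not 133 edits applied to every data point.
# Columns and values are invented for the illustration.
patients = pd.DataFrame({
    "died_30d":  [0, 1, 0, 1],
    "surgery":   ["hip", "cabg", "hip", "cabg"],  # 1 of the 48 types
    "diabetes":  [1, 0, 0, 1],   # 1 of the 28 chronic-condition flags
    "emergency": [0, 1, 1, 1],
})

# A query filters on descriptors; the outcome values are never touched.
subset = patients[(patients["surgery"] == "cabg") & (patients["emergency"] == 1)]
print(subset["died_30d"].mean())  # mortality within that stratum
[/code]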
You also seem to feel that the authors just flat lied about some data and/or analysis. For example, they claim to have done the analysis of the diploma and ADN nurses separately. Your stated reason being that they'd have to do another separate study the size of the current effort and since they didn't, their assertion must be false. From what I see, they've already gathered that data. Since the nurse's survey asked (among other things) for the highest degree obtained, they already have that information in their data set. Since this information is most certainly in a database, it would only require fairly small changes to the same queries they used to run the calculations with those two degrees in aggregate. Then rerun the calcs and a few milliseconds later the result pops out. OK.. probably not quite that simple, but certainly not requiring an entirely new effort.
The other item you seem fairly intent on is what you called "a statistical NO-NO". I have to admit to being confused about what you are referring to. The context of what you say seems to indicate that the data they "segregated out" was the incomplete, unavailable, or improperly reported data from the VA and other hospitals. If that's the case, then refer to the discussion on sampling bias and data integrity.
The last item you seem concerned with is the "predictions" the authors used. I'm not sure what to tell you. One of the specific uses of statistics is the determination of various types of "regressions". These regression calculations identify what is likely to happen to one variable if the other related variable changes. It's part and parcel of the discipline. You're certainly welcome to have a philosophical disagreement about the practice. However, there's probably a veritable truckload of mathematicians to convince otherwise. Knock yourself out.
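If it helps, here's the whole idea of a regression "prediction" in a few lines (made-up numbers, purely illustrative):
[code]
import numpy as np

# A regression "prediction" in miniature: fit a line to observed
# (x, y) pairs, then read off the expected change in y when x changes.
# The numbers below are made up purely for illustration.
x = np.array([0.1, 0.2, 0.3, 0.4, 0.5])   # e.g., BSN share
y = np.array([5.0, 4.6, 4.5, 4.1, 3.9])   # e.g., deaths per 100

slope, intercept = np.polyfit(x, y, 1)
print(f"a 0.10 increase in x predicts {0.10 * slope:+.2f} in y")
[/code]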
As far as my personal opinion of the study, I'm not ready to fall on a sword to defend it. There are questions that could be looked at in a bit more detail. Such as...
- The study only examined patient outcomes after a subset of surgical procedures. Is there some reason not to include all types?
- What happens to the predictive nature of patient outcomes for non-surgical admissions?
- The sample was limited to "adult acute-care general hospitals". What are the other types? Why exclude them? How might their inclusion affect the result?
- Zash's issue regarding VA hospitals isn't without merit. It would be nice to have some info, predictions, educated guesses on what their data might provide.
- If I were in the mood for self-punishment, I'd like to get some more details on what considerations they used in making their various risk adjustments.
- The issue about experience is also interesting. Besides having a fuller explanation of the methods they used to make their determination, I'd love to see a follow-up study on just this aspect of the findings. Any grad students wanna volunteer?
- For the nurses' survey, the authors note that they used a random sample to identify whom to poll. There's usually something of interest when looking at random selection methods. Same goes for having more detail on who actually responded. Was there some slant toward a particular segment?
I guess to summarize, I think the study has merits and is worth consideration and additional study. I also would say it's fair to suggest that I'm probably more inclined to accept the data simply because I think the linkage between education and quality is self-evident. I've had that view reinforced by personal experiences with the same question in the engineering field. Then throw in the weight of the ever-increasing complexity of health care. The whole "you only need a 2-year degree" position just looks like a non sequitur. It strikes me as nonsensical to have one of the most important parts of the field strapped to an idea that additional education isn't just unnecessary, but virtually worthless.
[quote name=zenman]"Evidence shows that nursing education level is a factor in patient safety and quality of care. As cited in the report When Care Becomes a Burden released by the Milbank Memorial Fund in 2001, two separate studies conducted in 1996 - one by the state of New York and one by the state of Texas - clearly show that significantly higher levels of medication errors and procedural violations are committed by nurses prepared at the associate degree and diploma levels as compared with the baccalaureate level. These findings are consistent with findings published in the July/August 2002 issue of Nurse Educator magazine that references studies conducted in Arizona, Colorado, Louisiana, Ohio and Tennessee that also found that nurses prepared at the associate degree and diploma levels make the majority of practice-related violations."
http://www.aacn.nche.edu/Media/FactSheets/ImpactEdNP.htm[/quote]
The above quote is directly from the American Association of Colleges of Nursing's website, as you linked it.
It is important to note that the AACN didn't cite the primary references but cited a reference that cited the references. The Milbank report used two references:
Green, A. 1996. Texas Creates a Profile of the Disciplined Nurse. Issues 17(2);8-9.
https://www.nursys.com/public/resources/nocost_archive_17_2_04.htm
This reference backs up my previous statement: it looked at a 'mean' status, not at BSN vs. ADN numbers. It also says, btw, that women, those aged 44, Whites, F/T employees, w/ >6 yrs experience, but...
But it says NOTHING about greater ratios of ADN vs. BSN violators. Look it up at the above link.
~~~
The second reference was ITSELF a reference to the State Education Department / University of the State of New York's annual RN survey (1996). This reference was related to an INDEPENDENT public comment in a letter of discussion and wasn't part of the survey itself.
Since when do public comments ABOUT a survey get to count as being PART of a survey?
I stand by my assertion. I'll submit that ADNs make more errors BECAUSE there are more ADNs and BECAUSE BSNs have a lower percentage of nurses working AT THE BEDSIDE (both in sheer numbers and in percentages; a BSN is more likely to be in a position not at risk for bedside 'errors').
But to state this has something to do with the LEVEL of education is a baseless accusation. And it certainly isn't backed up by the references in the Milbank report.
~faith,
Timothy.
[quote]I subscribe to the KISS principle. If we spent more time communicating and searching out answers on our own and not ascribing to the educratic lemming philosophy, the world, IMHO, would be a better place. I prefer to be a well-informed yet independent thinker. That IS the basis for critical thinking, is it not? - IMBC, RN[/quote]
How about this - education provides a base upon which experience builds.
When comparing a 2-year degree with a 4-year degree in the same discipline, the person with the additional education will have more ways to integrate the experiences gained. The person with the 2-year degree is not going to have the same theoretical underpinnings that would allow the full use of the same set of experiences. Additionally, they'll have fewer ways of synthesizing that information into new solutions, etc. Can the 2-year grad eventually gain those "underpinnings" through the school of hard knocks? Sure. But if you are running the organization, how much time can you afford to spend while waiting for them to catch up? And in that time, how much further could the 4-year grad progress? In the context of the study that's the subject of this thread, it appears that this difference might even impact things like patient mortality.
Reality can't hit hard when one doesn't state facts and instead relies on ad hominem (read: attacking the person not the position) arguments.
The fallacy in all of the arguments is: I must believe this because I do not possess a 4 yr degree. That is an assumption you choose to make in your rush to judge me or judge what I may or may not believe. Furthermore, your tone is condescending.
"Educrats say they want [students] to think for themselves, then make them work in groups.
Educrats are obsessed with achieving racial diversity in lessons, regardless of subject area, and in school statistics. Educators are obsessed with educating.
Educrats believe that the important thing is that [students] can "communicate mathematically'' and scientifically. Educators think [students] should know math and science.
Educrats write history standards, such as: "Students should be able to identify and explain how events and changes occurred in significant historical periods.'' Educators realize the sentence is utterly meaningless.
Educrats think a class is doing well if the students are performing at the same level. Educators want better students to do their best.
Educrats care about students knowing how to do things -- solve problems, present an argument -- "in different ways.'' Educators care about students doing the above well."
As I stated earlier, critical thinking comprises independent thinking, and knowing where to find resources to answer your questions. Surely you are not suggesting one can only obtain these skills through a 4 yr degree? We are not in the pre-internet era anymore.
Simply put, UNIVERSITIES ARE BIG BUSINESS. The bottom line revolves around how many students can be placed in a particular classroom at a particular time. I'm not surprised that people (esp connected with university education) don't wish to acknowledge this.
Works2xs: Yes, maybe we are talking past each other a bit. So let me go back and respond to some of your previous comments directly:
My point mirrors yours on the exclusion of some hospitals. You state you would need some empirical evidence to support that exclusion of this data damaged the study before you accept that as fact. I think the burden of proof lies with the Authors to prove that it DIDN'T damage the study.
Especially since the reasons for exclusion had nothing to do w/ the viability of the study, but with HOW they collected the data. And how they collected the data was to first collect it for another study, with ANOTHER purpose in mind.
And in regards to this study, the hospitals left out were the MOST likely to have BSN vs. ADN attributes worthy of consideration, be they MORE BSNs in federal facilities, or FEWER in small hospitals. Maybe it could be argued that this VERY thing makes these hospitals statistical outliers. But that is NOT why they were excluded.
No, I don't think there was a 'nefarious' purpose on the part of the authors. But I do think the authors wanted to study something different, and so used the data they had already acquired to do so. And it is an 'imperfect' fit. I understand the limitations of funding, etc., as reasons why the data in some studies are constructed in certain ways - and this is ESPECIALLY true for 'empirical' data, which is dependent upon the limitations of its source.
But I do question the 'convenience' of conducting two back-to-back studies using the same data sets. And I question it in the specific context that the 'imperfect fit' of this data for the BSN question is EXACTLY due to the fact that this data set omits key areas of BSN populations: very high, and very small, populations.
To their credit, they point out the omission, but make no comment as to how said omission affects the validity of the study.
I merely point out that it raises valid questions.
~~~~
Regarding the 133 criteria. I understand that those are 'identifiers': whether a nurse is BSN, ADN, or Diploma; what kind of hospital; doc; pt load; etc. I don't have a problem with that. What I have a problem with is that all of these 133 identifiers are being used to create a 'handicap' - a standardization to provide a 'value equal' assessment of, say, Sally, ADN, working in a county hospital w/ 7 pts, and Ella, BSN, working in a teaching hospital w/ 4 pts, etc.
I don't have a problem, as you say, with this concept being 'too complicated'. I have a problem with it being 'too subjective'. These 133 criteria, in total, are being used to construct a value assessment for - as far as I can tell - each of the 168 hospitals (as this is where the comparison to the state statistics would come into play).
Basically, this is my problem: the study, time after time, states that it adjusts for this, and that, and the other thing. And this is how: 'risk adjustment' based on these 133 criteria.
I would find it difficult to factor out the things this study blithely assures us that it DOES. Just HOW do you factor (in a study of morbidity/mortality) for a nurse caring for 4 pts vs. one caring for 7? (Indeed, this was the authors' FIRST study that used this data: increased morbidity/mortality based on pt ratios.) They go from stressing the importance of that in one study to 'factoring it out' in another. How do you do this? How do you decide which morbidity/mortality issues are related to ratios and which to education?
You do this by 'risk adjusting' the differences between the two, based on these 133 criteria. Those adjustments are designed to 'equalize' the value of a whole range of situations (theoretically, EACH of the 168 hospitals in the study would have a different composition of these 133 identifiers and therefore, a different 'value' of risk adjustment.)
And those different values are, how shall I say, value judgments - subjective. Can you understand the concern I have over using this kind of data to 'factor out' morbidity and mortality conditions in a study about the morbidity and mortality of 1 of the 133 criteria?
It's not the criteria themselves, but the value judgments being placed upon them to derive a 'risk neutral' relationship between these 168 hospitals. First, I DO think that it is too abstract (not too complicated, as you suggest) to accurately risk-adjust for that many variables. But more than that: it's too subjective.
~~~
Regarding the ADN vs. Diploma issue. You guess at a technique the authors might have used in their study. But it's simply not cited. I would think that if they first 'ran the numbers' with ADN/Diploma separated, this would certainly have been of some import. More so than just dismissing the relationship in one sentence.
I would think that 'ADN vs. Diploma mortality rates equal' would be as big a pronouncement from this study as any. But I personally think that it IS a bias on the part of the authors. It's too 'neat'. Just like my complaint about the - also unverified - comment that while common sense suggests that experience would be a factor in failure to rescue, they found that not to be true!
These are amazing findings in this study! Too bad they are buried in the details. OK, call me cynical: but these types of findings are necessary constructs for this study. The study is without value if experience, indeed, CAN'T be factored out. And this study is specifically about BSN vs. 'lesser degrees'. So both constructs are necessary or desirable for this study. And, as it happens, both constructs are 'proven' by this study. And yet, both are without supporting documentation.
Yes, on these points I'm challenging the credibility of the authors - and as a result, the study itself. But, it is the authors themselves that deemed these sweeping conclusions unworthy of supporting documentation.
~~~
Regarding their statistical NO-NO. They used two criteria: failure to rescue and death in 30 days. (For deaths, they used death records).
On the failure to rescue part, they examined ICD-9 codes from the hospitals' reporting stats. If there was a change, and, in their expert opinion, if that change wasn't related to a co-morbidity, then it was included in the study as a failure to rescue.
The raw data is being sorted by the authors prior to inclusion in the study. If, in their expert opinions, it belonged in the study, then it was included. It is a huge no-no in statistics to sort the raw data prior to running it through your algorithms. It's the old 'garbage in, garbage out' routine. If you adjust what goes in, it affects what comes out.
What went INTO the study was first subject to the authors' expert opinion that it merited consideration by the study. So what comes out of the study is the authors' subjective results.
I get this image of Florida ballot workers examining hanging chads!
This process allows for the intellectual dishonesty of changing the input of data in any number of ways to affect the outcome. Am I accusing them of dishonesty? No. I AM saying that they took no efforts to avoid the APPEARANCE of dishonest techniques. And that is why, in statistics, such a practice is not allowed.
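Here's a trivial demonstration of why statisticians treat this as a no-no: screen the raw data first, and the 'result' moves, even though nothing about the underlying population changed (my own toy example, made-up numbers):
[code]
import numpy as np

# Why pre-screening raw data is a statistical no-no: the same sample
# yields a different "result" depending on which records an upstream
# filter lets through. Synthetic numbers, purely illustrative.
rng = np.random.default_rng(3)
values = rng.normal(50, 10, 10_000)

print(round(values.mean(), 2))    # ~50: the honest estimate

screened = values[values > 45]    # an 'expert judgment' filter
print(round(screened.mean(), 2))  # noticeably higher
[/code]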
I said in my first critique: if I poll on an issue, and I alone can determine which of the data I collect is included in the results, do you think it's possible for me to rig the results? And this is why I mentioned that the American Association of Colleges of Nursing was backing this study. My follow-up question: if I can control the data that makes it into my poll above, and the subsequent results FIT my backers' point of view perfectly, would you trust those poll results? Maybe if you were inclined to believe them in any case. But what if you weren't? Can you not see how that could raise serious credibility issues?
~~~
I don't have a problem with data regressions. I have a problem with THEIR regressions because so much subjectivity was introduced into this study BEFORE it got to that point, that any data gained from any regression model has no statistical validity in any case.
The problem I really have with using regression models is that the abstract paints a different picture. You have to find (difficult to do online without access to a professional site) the study itself to realize that this is a theoretical analysis as opposed to an actual comparison.
~~~
I feel like BSN probably IS where we need to go. But I think this study is of no statistical value and PASSES off an inaccurate view of the whole package of today's RN.
I understand the point of view that more education is more better. I'm not inclined to disagree. I do think, however, that nursing is a SPECIAL case BECAUSE of the amount of experience that factors in.
You want to tell me that the profession is better off with a BSN-entry. Fine, convince me. But, it's intellectually dishonest, on its face, to say that BSN means less errors, regardless of experience. In fact, it's only divisive, it only gives RNs a bad name as a group, and it will not convince the very peers you need to convince to 'come along'.
Interestingly, on another thread (NY moving to BSN standards, or something like that), I posted a statement from an education group that advocated that RNs were 'technical' nurses and APNs 'professional' nurses, and, after getting rid of ADN/Diploma, the next advancement in nursing was to get rid of "RNs - with the bad connotation that comes with that title" altogether.
Studies like this don't improve the image of BSN over ADN. They just attack the image of RN, generally.
~faith,
Timothy.
[quote]Reality can't hit hard when one doesn't state facts and instead relies on ad hominem (read: attacking the person not the position) arguments. The fallacy in all of the arguments is: I must believe this because I do not possess a 4 yr degree. That is an assumption you choose to make in your rush to judge me or judge what I may or may not believe. Furthermore, your tone is condescending.[/quote]
My tone is not condescending; it is irritability that people have not yet at least learned to use Google before posting. If that happened more often, there would be less posting of things that defy logic. That's why I don't waste my time posting facts when you should have already done a search. That's why Dr. House seems more likable to me every day! Can you believe someone even posted, "Do NPs have to be nurses?" Hello - "Nurse Practitioner!" Yes, I'm irritable, but hey, look at my countdown. I'll be OK tomorrow!
Thanks, Timothy! You worked very hard on this post, and it's a very good rebuttal of a flawed study.
BTW, according to my own "study" as a new nurse, I avoid killing people every day (a slightly exaggerated claim) by asking more experienced nurses for help/advice. I've never even considered what their degrees are. Our degrees are listed on our name badges, but I still couldn't tell you who has what.
Thanks again!
Roy Fokker, BSN, RN
PS: As far as credentials go---
Using credentials to support an argument is OK.
Using credentials to browbeat and/or dismiss an opposing argument is not kosher in my book.
Brilliant ideas don't always spring from well "qualified" minds.
cheers,