This is EXTREMELY long but worthwhile to read if the topic interests you.
Linda Aiken et al., in 2003, published in JAMA a landmark study, "Educational Levels of Hospital Nurses and Surgical Patient Mortality."
The study is oft cited and contends that the evidence suggests higher rates of BSN education at the bedside directly translate to improved patient outcomes.
This is my critique:
Why the study, "Educational Levels of Hospital Nurses and Surgical Patient Mortality" is flawed.
1. Academic Laziness
The original data pool was used for an earlier study about staffing levels and mortality GENERALLY. That data was simply copied onto the template for this study. But it wasn't just copied; it was copied with the full assurance of the authors that the results of the first study that used this data could be 'factored out' of this subsequent study.
2. Discrimination Bias (Hospital Selection)
Before analyzing the data, the authors first decided it was necessary to 'exclude' hospitals that didn't fit their data set. Some were excluded for valid reasons (they didn't report to the data set), but however valid, the exclusion ITSELF taints the data. THIS IS ESPECIALLY TRUE SINCE THE EXCLUSION COVERS ALL VA HOSPITALS - a known source of high BSN recruitment. The very hospitals that might yield the most useful data on the subject were ELIMINATED from the study! Other hospitals were excluded because the data they generated didn't meet the authors' needs. In other words, INCLUSION of that data would disturb the conclusions of the study.
So the authors maintain that excluding some data is justified. OK, I can concede that point, as I understand that large standard-deviation multiples (outlying data) can skew the majority of the data. But excluding large amounts of data that are quite possibly within a single standard deviation of what is being studied, on the basis that such data wasn't available, serves only to undermine the whole study. It is a frank admission that the data itself is incomplete and, therefore, suspect.
This compounds the academic laziness mentioned above. The data set was copied from another study with the full understanding that it didn't meet the needs of this study, AND COULD NOT MEET THE NEEDS OF THIS STUDY, because it left out the hospitals MOST LIKELY to represent a significant sample for this study. Rather than develop data pertinent to THIS study, that academic laziness now calls for this missing, and possibly highly relevant, data simply to be excluded from consideration.
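To make the selection problem concrete, here is a toy Python illustration - my own invented numbers, not the study's:

# Toy illustration of the selection worry: dropping the high-%BSN
# "VA-like" hospitals truncates exactly the part of the distribution
# the study needs. All numbers here are invented for illustration.
hospitals = [("A", 0.25), ("B", 0.30), ("VA-1", 0.65), ("VA-2", 0.70)]
kept = [pct for name, pct in hospitals if not name.startswith("VA")]
print(sum(kept) / len(kept))                              # mean %BSN kept: 0.275
print(sum(pct for _, pct in hospitals) / len(hospitals))  # full mean: 0.475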
3. Degree Bias
The authors state in the study that "'conventional wisdom' is that nurses' experience is more important than their educational levels." It is this 'conventional wisdom' that the study aims to examine. But how does it do so? By buying into the exact same conventional wisdom: "Because there is no evidence that the relative proportions of nurses holding diplomas and associate degrees affect the patient outcomes studied, these two categories of nurses were collapsed into a single category."
HOLD ON. In a study about how degrees affect patient outcomes, an essential tenet of the study is to disregard the degrees held??? After such manipulation, how can you say with a straight face that a study that disregards the relationship between two kinds of degrees can reach a conclusion REGARDING the relationships between degrees?
4. Lack of Substantiating Data
"It was later verified that this decision did not bias the result."
This statement, and others like it, appear throughout this 'study' without any mention of the methods used to 'verify'.
"Previous empirical work demonstrated. . ." - um, exactly WHAT empirical work was that?
In fact, the study makes many claims and manipulates the data in many ways, yet insists that you trust its 'independent verification' that none of this biased the results - all without providing access to said independent verification.
You have to love the 'self-affirming' validity of it all.
5. Data Manipulation
A. The data was 'manipulated' to grant varying degrees of credibility depending upon whether it was received by a 'teaching' hospital vs. a 'non'-teaching hospital.
B. The data was 'manipulated' to grant varying degrees of credibility to hospitals that are more 'technological' (e.g. have transplant services) as opposed to less.
C. "An important potential confounding variable to both clinical judgment and education was the mean number of years of experience working as an RN": telling comment, but never fear, the data was 'manipulated' to take this into account.
D. Nursing workloads might affect patient outcomes. (Indeed, THIS was the previous study that this study's data set was copied from.) But, in this case, the data was 'manipulated' to take those workloads into account.
E. "Estimated and controlled for the risk of having a board certified Surgeon instead of a non-board certified Surgeon." The use of 2 'dummy variables' comparing MD licenses to general vs specialty board certification was "a reasonable way for controlling surgeon qualifications in our models."
In fact, the authors admit to manipulating the data in 133 ways! But all of these 'manipulations' were later 'verified' to have produced no bias.
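For readers unfamiliar with 'dummy variables' (item E above), here's a minimal Python sketch of that kind of coding - my own illustration, not the study's actual variables:

import pandas as pd
# Three surgeon categories collapse into two 0/1 indicator columns,
# with "not board certified" as the implicit baseline - the standard
# dummy-variable coding the authors describe. Illustrative data only.
surgeons = pd.DataFrame({"cert": ["none", "general", "specialty", "general"]})
dummies = pd.get_dummies(surgeons["cert"], prefix="cert").drop(columns="cert_none")
print(dummies)  # cert_general and cert_specialty: the two dummy variables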
6. Key Criteria Conjecture
The study's two key criteria: deaths within 30 days of hospital admission, and deaths within 30 days of complications due to 'failure to rescue'. But how were these criteria established?
In the first case, they were established by comparing the data set to vital statistics records (death records). I doubt they accurately compared 235,000 individual patients (data points) against another data set (death records) that was probably several times that size, but OK - I'll buy this for the moment.
In the second case, however, 'failure to rescue' was defined - NOT BY EXAMINING ACTUAL CASES OF FAILURE TO RESCUE - but by identifying different ICD-9 secondary codes between admission and discharge. An assumption is made that a changed code meant a 'failure to rescue' had occurred. What?!
RE-READ THAT LAST! By making dubious assumptions about data sets (hospital reporting statistics), this study conjectures how those translate to 'failure to rescue' and then draws conclusions based on what that 'failure to rescue' might mean! ALL BY ITSELF, THIS NEGATES THE ENTIRE STUDY.
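To show how mechanical that definition is, here is a minimal Python sketch of the coding-based logic as I read it. The code lists and names are hypothetical; the paper does not publish its actual algorithm:

# Hedged sketch: a death gets labeled 'failure to rescue' when a secondary
# ICD-9 code appears at discharge that wasn't present at admission and
# isn't on a pre-existing co-morbidity list. No chart review involved.
PREEXISTING = {"250.00", "401.9"}  # assumed co-morbidity codes (illustrative)

def failure_to_rescue(died, admit_codes, discharge_codes):
    new_codes = set(discharge_codes) - set(admit_codes) - PREEXISTING
    return died and bool(new_codes)  # new code + death => counted as 'FTR'

print(failure_to_rescue(True, ["401.9"], ["401.9", "998.59"]))  # True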
But, it was 'verified' to not bias the study results. How was this part 'verified'? Well, you're gonna love this: "expert consensus as well as empirical evidence to distinguish complications from pre-existing co-morbidities."
In other words, the experts (the study authors) know which data is valid for purposes of inclusion into the study - and which data isn't. The 'experts' consensus is the key element that ensures non-bias.
There are no 'double blind' studies. No sample populations of RNs. The criteria for inclusion of 'data' are based solely on the 'consensus' of the 'experts' creating the study. And these 'experts' are backed by the AACN (American Association of Colleges of Nursing) - an organization committed to BSN entry, and one which maintains, on its website, a valiant defense of this study:
http://www.aacn.nche.edu/Media/TalkingPoints2.htm
No, no possibility of bias here.
Let me ask you this: if you knew of a study conducted by Republican pollsters - in which they alone determined whose answers were valid - would you trust a result that brags, 'Most Americans Love President Bush!'? But here's the question I really want to ask: WHY wouldn't you trust such a result?
7. Risk Adjustment.
Still trust this study? Try this one: "Patient outcomes were risk-adjusted by including 133 variables in our models, including age, sex, whether an admission was a transfer from another hospital, whether it was an emergency admission, a series of 48 variables including surgery type, dummy variables including the presence of 28 chronic, pre-existing conditions as classified by ICD-9 codes, and interaction terms chosen on the basis of their ability to predict mortality and failure to rescue in the current data set."
So the data was manipulated 133 ways, excluding some data. But, and this is key: there are SO VERY MANY variables that could affect patient outcomes that you have to adjust for EVERYTHING except what you're looking to find. Right? That is not only what the authors contend; they contend that they SUCCESSFULLY adjusted the data, 133 different ways, for just this purpose, and completely without bias. Amazing.
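For reference, this is the general shape of such a risk-adjusted model - a minimal Python sketch on synthetic data, using a plain logistic regression with three covariates where the study used 133:

import numpy as np
import statsmodels.api as sm

# Synthetic data: mortality regressed on %BSN plus patient covariates.
# Everything here is invented; it only shows the mechanics of the method.
rng = np.random.default_rng(0)
n = 5000
pct_bsn = rng.uniform(0.1, 0.7, n)   # hospital %BSN attached to each patient
age = rng.uniform(20, 90, n)
emergency = rng.integers(0, 2, n)
true_logit = -4 + 0.03 * (age - 55) + 0.5 * emergency - 0.5 * pct_bsn
died = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

X = sm.add_constant(np.column_stack([pct_bsn, age, emergency]))
fit = sm.Logit(died, X).fit(disp=0)
print(fit.params)  # 'risk-adjusted' %BSN coefficient, net of age/emergency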
8. Logistic Regression Models
So, after the study took in all this manipulated 'data', it compared hospitals with more BSN RNs to those with fewer, and reached a conclusion. Right? Wrong.
It took the data and ran a 'logistic regression model' projecting what might happen in a given hospital "if there were a 10% increase in BSN RNs."
This study doesn't even compare the relative levels of RN education. Let me repeat that: THIS STUDY DOESN'T EVEN MAKE THE COMPARISONS IT PURPORTS TO HAVE STUDIED. This model and, as a result, this study doesn't compare existing situations. Instead, it makes assumptions regarding potential situations compared to current situations.
Do you get this: the study wasn't designed to test real conditions. The study was designed to create hypothetical situations and comment on the validity of said models based on highly modified and incomplete data.
THIS STUDY SPECIFICALLY COMMENTS ONLY ON HYPOTHETICAL SITUATIONS. Study Disclaimer: ANY RELATIONSHIP TO REAL CONDITIONS IS ONLY IMPLIED BY THE AUTHORS.
Now, see if this isn't a key statement: "The associations of educational compositions, staffing, experience of nurses, and surgeon board certifications with patient outcomes were computed before and after controlling for patient characteristics and hospital characteristics." Indeed.
9. Direct Standardization Models.
Apparently, even after all the above manipulation, there were still 'clusters of data' that had to be 'standardized' using 'robust estimations'. The study does at least have the guts to admit that such 'standardizations' turn the final conclusion into an 'estimation'. Too bad it makes that admission only in the body of the study, and not in the abstract.
10. Alternative Correlations
The study admits that fewer than 11% of hospitals in Pennsylvania in 1999 (the area/year of the study) had 50% or more BSNs (excluding the VA hospital system, which was completely ignored by the study). And then the study cites co-factors that could unduly influence it under these conditions: "Hospitals with higher percentages of BSN or master's-prepared nurses tended to be larger and have post-graduate medical training programs, as well as high-tech facilities. These hospitals also had slightly less experienced nurses on average AND SIGNIFICANTLY LOWER MEAN WORKLOADS (emphasis mine). The strong associations between the educational composition of hospitals and other hospital characteristics, including workloads, makes clear the need to control for these latter characteristics in estimating the effects of nurse education on patient mortality."
Wow. Two key things from that statement: a direct acknowledgment that this 'study' is an 'estimation' and an acknowledgment that such an 'estimation' only occurred after 'the need' to highly manipulate the data.
In fact, I think it much more likely that such 'co-correlations' make any 'estimated' conclusions IMPOSSIBLE to verify.
11. Study Conclusions.
This is one of the study's least reported conclusions. See if you agree: "Nurses' years of experience were not found to be a significant predictor of mortality or failure to rescue in the full models." Re-read that and UNDERSTAND the implications of what it means.
The authors admit that their "estimations" can only lead to an "implication" that increased education means better nurses. OK. I'll agree with that. But, because the same study 'factored out' experience, I think it is impossible to estimate how even a fraction of experience affects the conclusions of the study.
Indeed, in order to arrive at its conclusion, the authors must first dismiss the 'conventional wisdom' that experience IS a factor, as they did, in the above statement. Without the above assumption, this whole body of work is worthless. If experience factors in, then the key question cannot be tied simply to education, BUT MUST BE TIED TO BOTH QUALITIES.
And so, the authors find themselves in a conundrum, in which they must first dismiss the importance of experience in order to highlight the importance of education. Amazingly enough, their study reached both conclusions: experience is meaningless to patient outcomes and THEREFORE education level is, by itself, a measurable standard.
The problem with that is, once experience is dismissed, the correlation between education, experience, and patient outcomes is NOT part of this study. Even if you COULD credibly claim that there is no correlation between experience and outcomes (a silly claim), once you add education level into the consideration, you create a new dynamic. By dismissing experience from the equation, the study also dismisses its own results, which NOW have the effect of ascribing the results and effects of a real-life system (education AND experience vs. outcomes) to a completely different, hypothetical system (education alone vs. outcomes).
In short, the claim that experience is not a factor and can be excluded from the study of education's impact on quality is the equivalent of stating that nature is not a factor and can be isolated from nurture in the study of human behavior. In truth, the concepts are much too intricately linked for bland assurances of non-bias in the elimination of part of either equation.
Also not taken into consideration are alternative educational pathways, such as non-BSN bachelor's-degreed RNs (including both accelerated-program and 'second career' ADN nurses).
The study also fails to note that many BSNs are prior ADN graduates. While the BSN subset includes ADN graduates, the ADN subset ALMOST NEVER includes BSN graduates. This would obviously skew the data unless that characteristic were isolated from the data set. In fact, the data set isn't a pool of RNs but of patient discharge records, and there is no way within this study to make the distinction.
Given the broad range of said experiences and educations within nursing, negating those experiences and educational pathways also serves the purpose of negating the validity of the study itself.
My conclusion:
Saying that education is a bigger factor than experience in ANYTHING is the same as saying that nurture is a bigger factor than nature in ANYTHING. The relationships are so intricately linked as to be inseparable. As a result, these types of arguments rise to the level of philosophy.
This study claims the ability to make such distinctions, using incomplete and highly manipulated (133 ways by its own admission) data and applying that data only to hypothetical situations.
This is not science; it's propaganda.
Simply put, this flawed and un-reproducible study is worthless as anything BUT propaganda. And that's the bottom line.
~faith,
Timothy.
"Dr. Aiken's groundbreaking work demonstrates one very simple point: Education makes a difference in nursing practice. To anyone outside the nursing profession, this statement is not controversial. Of course, education broadens one's knowledge base, enriches understanding, and sharpens expertise. Yet many in nursing reject the idea that entry-level clinicians who followed different educational paths have different qualifications. Why is the idea that education makes a difference so controversial among nurses?" ...Kathleen Ann Long, APRN, PhD, FAAN
As one who went from CNA to LPN to RN, I've never had a problem with this concept. I have the same question as this lady. Someone needs to do a study to find out why nurses think the way they do about education.
"Evidence shows that nursing education level is a factor in patient safety and quality of care. As cited in the report When Care Becomes a Burden released by the Milbank Memorial Fund in 2001, two separate studies conducted in 1996 - one by the state of New York and one by the state of Texas - clearly show that significantly higher levels of medication errors and procedural violations are committed by nurses prepared at the associate degree and diploma levels as compared with the baccalaureate level. These findings are consistent with findings published in the July/August 2002 issue of Nurse Educator magazine that references studies conducted in Arizona, Colorado, Louisiana, Ohio and Tennessee that also found that nurses prepared at the associate degree and diploma levels make the majority of practice-related violations."
“Dr. Aiken's groundbreaking work demonstrates one very simple point: Education makes a difference in nursing practice. To anyone outside the nursing profession, this statement is not controversial. Of course, education broadens one's knowledge base, enriches understanding, and sharpens expertise. Yet many in nursing reject the idea that entry-level clinicians who followed different educational paths have different qualifications. Why is the idea that education makes a difference so controversial among nurses?” …Kathleen Ann Long, APRN, PhD, FAAN
It doesn't surprise me that the Amer Asso of Colleges of Nursing would gush about the 'groundbreaking' nature of this study. They backed the study. This above quote is LITERALLY from the study's PR campaign.
As one who went from CNA to LPN to RN, I’ve never had a problem with this concept. I have the same question as this lady. Someone needs to do a study to find out why nurses think the way they do about education.
I have never undermined the value of education. I also went from CNA to LVN to RN to a bachelor's degree. I understand the value of education. But nursing is nearly unique in the value added by experience. Bedside nursing is a 'hands on' job. Neither the ADN nor the BSN teaches the day-to-day, in-the-trenches technical feel and professional 'critical thinking skills' of this job. It is OJT. To ascribe significant advantages to either educational path over the other - once leavened with experience - is absurd. The advantages of BSN over ADN - while real - are AWAY from the bedside.
Tell me, in what part of the BSN program did you learn how to use balloon pumps? Or to titrate critical drips on a minute-by-minute basis? Where in the BSN program did they teach - seriously teach - hemodynamics and Swan-Ganz performance? How about vents? Not just a day's theory, but real hands-on use, including troubleshooting? When in the BSN program did they teach the hands-on progression from being an unconsciously incompetent practitioner to being an unconsciously competent professional?
There is a reason why this educational debate is an ACADEMIC debate that has no resonance in the trenches. OUR BEDSIDE TRENCHES RESPECT EXPERIENCE. Period.
You don't need a study for that.
"These findings are consistent with findings published in the July/August 2002 issue of Nurse Educator magazine that references studies conducted in Arizona, Colorado, Louisiana, Ohio and Tennessee that also found that nurses prepared at the associate degree and diploma levels make the majority of practice-related violations.”
I'll address the rest of this part that I didn't requote later. I'll have to look up that study, and it is bedtime for me. As for the part I requoted, I'll look that up also, but I can't say I'm surprised. SINCE ADN/DIPLOMA NURSES COMPRISE THE MAJORITY OF RNs, IT IS NO SURPRISE THAT THEY COMMIT THE MAJORITY OF PRACTICE-RELATED VIOLATIONS. But of course, that has no bearing on any comparison of error rates between BSN and ADN and/or diploma nurses. To suggest that it does is a logical fallacy.
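The arithmetic of that fallacy takes four lines of Python to demonstrate (made-up workforce figures, identical error rates by construction):

# If ADN/diploma nurses are 70% of the workforce and EVERY group errs at
# the same 2% rate, they still commit 70% of all violations. Raw counts
# say nothing about comparative rates. Figures invented for illustration.
workforce = {"ADN/Diploma": 700_000, "BSN": 300_000}
errors = {group: n * 0.02 for group, n in workforce.items()}
share = errors["ADN/Diploma"] / sum(errors.values())
print(f"{share:.0%} of violations despite identical error rates")  # 70%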
btw links to the actual studies would help.
~faith,
Timothy.
I have never undermined the value of education. I also went from CNA to LVN to RN to a bachelor's degree. I understand the value of education. But nursing is nearly unique in the value added by experience. Bedside nursing is a 'hands on' job. Neither the ADN nor the BSN teaches the day-to-day, in-the-trenches technical feel and professional 'critical thinking skills' of this job. It is OJT. To ascribe significant advantages to either educational path over the other - once leavened with experience - is absurd. The advantages of BSN over ADN - while real - are AWAY from the bedside.
I disagree with you. I've taught in both programs, and in the school I was in, BSNs were taught to care for the more complex patient, including the critical thinking skills that need to go along with OJT. Take the management course, for example. How does this help at the bedside? You gain a knowledge of management's role as well as the hospital chain of hierarchy, some knowledge of healthcare costs, how to manage/lead others such as LPNs and CNAs, how to set your priorities, time management, communication, etc., all of which should directly impact patient care. Obtaining my master's translated even further to better care directly at the bedside.
The best blend in any field is a combination of OJT and education. That's why nurses have clinicals and other fields have internships. So I only partly agree with you. I learned to intubate with a combination of education and OJT, but it was creativity and critical thinking that allowed me to intubate with my long fingers when no scope was available. These are some of the traits that make liberal arts grads so valuable... even if they initially can't do anything with their hands, LOL!
Tell me, in what part of the BSN program did you learn how to use balloon pumps? Or to titrate critical drips on a minute-by-minute basis? Where in the BSN program did they teach - seriously teach - hemodynamics and Swan-Ganz performance? How about vents? Not just a day's theory, but real hands-on use, including troubleshooting? When in the BSN program did they teach the hands-on progression from being an unconsciously incompetent practitioner to being an unconsciously competent professional?
Teaching the concepts behind the balloon pump is all that should be taught; then, if you are on a floor that uses them, you should get hands-on training. Teaching hands-on use of the pump in school is a waste of time, as most nurses will never be exposed to one. In most professions you are not expected to hit the floor running. You've got to have the "educational preparation" and then do the OJT. OJT without the educational background = trade school graduate.
SINCE ADN/DIPLOMA NURSES COMPRISE THE MAJORITY OF RNs, IT IS NO SURPRISE THAT THEY COMMIT THE MAJORITY OF PRACTICE-RELATED VIOLATIONS. But of course, that has no bearing on any comparison of error rates between BSN and ADN and/or diploma nurses. To suggest that it does is a logical fallacy.
Oh, come on. Of course they make the most mistakes because they are more in number. But we have stats to "correct" for that.
btw links to the actual studies would help.
I think you can find them on the link I provided. I didn't check but they were highlighted.
Take the management course, for example. How does this help at the bedside? You gain a knowledge of management's role as well as the hospital chain of hierarchy, some knowledge of healthcare costs, how to manage/lead others such as LPNs and CNAs, how to set your priorities, time management, communication, etc., all of which should directly impact patient care.
In most cases, the extra coursework, in this case management, is a barrage of paperwork on useless concepts that NEVER get used in day to day life.
Anyone who has ever had a Kool-Aid stand understands the basic concepts of costs and profits. If you can balance your checkbook, you understand debits and credits. Anyone who has ever held a job besides nursing understands organizational structure, getting x amount of tasks accomplished in y amount of time, and the content and importance of a yearly review. It's really not rocket science. In most cases it's just a bunch of superfluous hoopla.
Oh yeah, that art history requirement? I find it particularly useful as I stroll down the hall at work and look at the lithographs of Anne Geddes.
Works2xs, what are your credentials to critique research?
If you mean, would a professional journal ask me to write peer reviews? Were I the editor, I wouldn't hire me for that role.
With respect to doing formal research critiques in the style of a "professional" peer review - just the experiences from coursework for the BSN program. I also conducted an applied research project as part of my management degree.
As for using research, I had 20+ years working on and managing R&D projects in the high-tech sector. So I've pulled apart studies, analyses, and reports ad nauseam. I've also reverse-engineered methodologies used for complex risk assessments of large enterprise systems, as well as developed assessment methodologies.
So, do I know how to critically evaluate complex information? I was gainfully employed to do so. Do I have experience identifying gaps, omissions, and discerning what the author of a technical paper didn't want to reveal? Been there, got the T-shirt. However, if you need someone to check the math of someone's parametric procedure, forget it. Besides being crappy at it, I'd rather get a square needle in the left eyeball than do that sort of work.
Does that answer your question? Does that make my opinion any more or less credible? Just curious as to why you want to know.
I did not realize I had to be certified in anything in order to be able to read a study and recognize the scientific method and understand data sets and statistical manipulation.
I guess I will slink out the door with my LOWLY ADN that required me to take statistics and two classes on how to read evidence-based research...
In most cases, the extra coursework, in this case management, is a barrage of paperwork on useless concepts that NEVER get used in day to day life. Anyone who has ever had a Kool-Aid stand understands the basic concepts of costs and profits. If you can balance your checkbook, you understand debits and credits. Anyone who has ever held a job besides nursing understands organizational structure, getting x amount of tasks accomplished in y amount of time, and the content and importance of a yearly review. It's really not rocket science. In most cases it's just a bunch of superfluous hoopla.
Oh yeah, that art history requirement? I find it particularly useful as I stroll down the hall at work and look at the lithographs of Anne Geddes.
It's called "education" and since you're a nursing student I hope you catch on soon. Without that "fluff" you're a trade school grad...which is ok if that's what you want. You're speaking without experience; I'm speaking from experience.
If you mean, would a professional journal ask me to write peer reviews? Were I the editor, I wouldn't hire me for that role. Does that answer your question? Does that make my opinion any more or less credible? Just curious as to why you want to know.
Yes, it answers my question. Just like to know the background of the person doing the review/critique.
Just like the above nursing student poster...I take her comments differently than someone who has a PhD. One is posting an opinion about something they haven't experienced yet, the other has "been there and done that."
I did not realize I had to be certified in anything in order to be able to read a study and recognize the scientific method and understand data sets and statistical manipulation. I guess I will slink out the door with my LOWLY ADN that required me to take statistics and two classes on how to read evidence-based research...
No you don't! But if you and a research scientist were arguing about a study, I would value one opinion more than the other!
Fascinating thread. And it touches upon an aspect of nursing I've wanted to ask for ages but never found the right opportunity to do so.
If this question really detracts from the thread, please feel free to ignore it.
My question:
My father has often told me that this whole business of graduating from school, then having to pay someone to sit for the NCLEX so that the state can grant you a 'professional' license, and then having to go to a job and undergo orientation/training anyway, makes no sense. He says that the whole licensing process as it stands today is BS - just another way for the state and/or the 'professional' bodies to make money by selling the bunkum that a license earned by taking a particular exam is somehow more necessary than that initial orientation/training.
Is he mistaken? Is there some grain of truth to this?
Thanks
OP'S RESPONSE
(Works2xs's comments from the previous post are quoted.)
"So, because they don't have a 100% data sample the whole study is flawed?"
YES.
I never said, as you suggest, that the study required 100% representation. Your answer is a logical fallacy, to wit, a straw man argument.
I argued that the study was flawed because it didn't sample major population segments, not because, as you suggest, that it didn't sample EVERY RN.
Crucial segments of RN populations were left out of the study. And it wasn't just the VA hospital system. Because the study relied on state statistics, it almost certainly left out military hospitals with both transient military and fixed civilian RNs - again, almost certainly in much higher percentages of BSN due to the hiring nature of the Federal Gov't and military BSN requirements.
You could argue that since the active duty military is transient, they shouldn't be included in the study. But, since that transient status is relatively 'fixed' throughout the country as a steady-state population of the nation's military hospitals, I would argue for inclusion. In ANY case, the civilian RNs should have been included in the study.
And the study left out small hospitals for which, it stated, it couldn't get a survey sampling large enough to be representative.
So, the two groups MOST LIKELY to affect the outcome of this study: groups with high percentages of BSNs and groups where even modest improvement of BSN percentages could have the most pronounced effects - were excluded from the study. And the reason for exclusion? The authors didn't bother to devise a method for capturing this data.
As to your argument that this was only a 'few' hospitals: by the authors' own admission, it represented 20% of the population segments. And that 20%, since it very likely represents a much higher ratio of BSNs, distorts the whole sample if not accounted for within the study.
"The number of VA hospitals that could not provide the requested data for the study totaled a whopping 6 hospitals.... out of a possible 210! The suggestion that this study is worthless because they don't have a 100% data sample is just patently absurd."
I do not suggest the study is worthless because they do not have a 100% study SAMPLE, but because they conveniently ignored crucial population SEGMENTS. In fact, of the 210 hospitals you cite, the study ignored 42 of them - instead of devising a new model to correspond to a new study.
"Are you confusing sample size with data points? Please explain what type of data collection points were omitted from the study. If you say, "They omitted VA hospitals" save your breath. That's a fucntion of who was sampled and not of what was gathered from the sample."
The data points that would correspond with data from VA, military, and small hospitals. It's not a waste of my breath; you have it backwards. What was gathered is a function of who was sampled, not the other way around. Because they started with incomplete data, it affected who they included in the study.
If, as you suggest, who they included affected what data they collected, I'd be inclined to agree with you. However, who they included was DETERMINED by what data they collected. And that data was lifted from another study. It was a case of 'improper' fit.
"They didn't collect data exactly on point to this study because it wasn't available under the data points they borrowed from their previous study.
Umm.. so what? That's why they collected "unique data obtained from surveys of hospital nurses."
I disagree. The two types of data - state reporting statistics and RN surveys - AUGMENTED each other; one did not function as a replacement for the other. In fact, this was the reason cited for dropping small hospitals from the list: not enough survey information. They HAD adequate reporting statistics for those hospitals, but because they didn't have BOTH, they dropped those crucial population segments from the sample. They dropped the VA and military hospitals because they didn't have state reporting statistics for them. On either side, if they didn't have BOTH kinds of data, they excluded those hospitals from the study.
So, there is simply no representation of crucial population segments in this study because of inadequate data collection.
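In set terms, the inclusion rule I'm describing works like this little Python sketch (hospital names invented for illustration):

# A hospital enters the study only if it appears in BOTH sources, so a
# gap in either one drops it - which is exactly how the VA/military and
# small hospitals fell out. Names are invented for illustration.
state_reporting = {"HospA", "HospB", "SmallHosp"}  # no VA/military here
nurse_surveys = {"HospA", "HospB", "VAHosp"}       # too few small-hospital RNs
included = state_reporting & nurse_surveys
print(included)  # only HospA and HospB; SmallHosp and VAHosp both excluded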
"(from the 1st paragraph of the "Methods" section of the study). They started with a data set from their previous study and added infromation from a survey created for this study. If they can't use data from another study, and for whatever reason you decided to exclude current survey data, by what means do you propose for collecting new data?"
You misread how they used the surveys. If they didn't have BOTH kinds of data, survey and statistics, those population segments were ignored by the study.
I don't propose collecting data for this study at all. I don't believe it is possible to 'adjust' for the 133 factors they adjusted for. The only way to adequately test this would be to find identical hospitals with identical variables EXCEPT for percentages of BSNs, and DIRECTLY compare those results.
Degree Bias
...in a study designed to show a difference in one set of degrees, BSN vs. lesser degrees, the authors made assumptions on another set of degrees, ADN vs. Diploma.
"Stop. You are wrong. They didn't make any assumption. They actually conducted a data analysis of each of those degree types and found that neither one offered an advantage in ability to predict patient outcome."
Stop. A study of the degree comparison between ADN and Diploma is AT LEAST as big an undertaking as a study of BSN and other degrees. It is a question of equal magnitude as the study under consideration itself.
Now reread:
Here's the reference: "When proportions of RNs with hospital diplomas and associate degrees as their highest educational credentials were examined separately, the particular type of education credential for nurses with less than a bachelor's degree was not a factor in patient outcomes.[emphasis added]" (found in the second paragraph of the "comments" section of the study)
In one paragraph, they dismiss their bias, which is ON PAR with the bias they are trying to study, as 'not a factor'. And how did they determine this for ADN vs. Diploma? I'll tell you how they proposed to do the very same thing for BSN vs other degrees: this study.
Do you not see the point? If it takes a study of THIS magnitude to determine whether the difference between BSN and another degree is significant, then how would it not take a study of at least as significant a magnitude to determine whether the difference between ADN and Diploma is significant? Or more to the point, if such a difference between ADN/Diploma is so easily factored, then why the purpose of this study for BSN/other degrees? Just 'factor it in' and move on.
As per usual in this study, with no indication of the method of the verification process, the authors make a sweeping generalization that we are meant to take at face value. And this particular generalization changes the parameters of the entire study. There is simply no basis, as a result of this study, to make any observation as to the relationship of BSN to ADN, or BSN to diploma. Neither relationship was independently studied.
OK, maybe you can say that a BSN is preferable to lower degrees, in the same sense that it is preferable to being a CNA, or to holding an accounting degree, for working as a nurse. But no comparison was directly and independently made with the ADN or the diploma. For the purposes of this study, nothing can be said about those direct comparisons. They simply weren't studied.
In other words, the study offers no results on the effects of BSN over ADN education because it didn't study it.
"The above exerpt directly contradicts your claim. There's no other way to say it than you're conclusion on this point is just plain wrong."
You missed my entire point. I understand that a sweeping generalization was made without any means to determine how they verified this. If I took that at face value, you'd be right. But, since the question at hand is THE SAME DEGREE OF BIAS as this study is trying to address, I simply cannot take at face value the claim that the one relationship could be 'factored' in a paragraph, and yet the other relationship - of EQUAL MAGNITUDE, requires this entire study.
I point it out because it is a key indication of the bias of the authors. And no, that's not an ad hominem attack because I'm not shooting the messengers; I'm directly challenging their credibility and so, the direct premise of their argument.
"Lack of Substantiating Data
The study was full of comments like the above comment, without footnotes.
You make this claim and then fail to provide any example. How can any response to your critique on this point be made without any means to examine your claim?"
I believe I used three examples in my first critique of this behavior. The paragraph above, about 'factoring' out ADN vs. Diploma bias without any data to substantiate methods, is yet another example.
(From my first critique as examples of unsubstantiated declarations:
"It was later verified that this decision did not bias the result." - Nice to know. It would be nicer to know HOW this was verified.
"Previous empirical work demonstrated. . ." - um, exactly WHAT empirical work was that?
"Estimated and controlled for the risk of having a board certified Surgeon instead of a non-board certified Surgeon." - Again, simple question: how was estimated and controlled for?)
Data Manipulation
They 'adjusted' (manipulation is not an inherently 'bad' word, as you suggest) the data ONE HUNDRED THIRTY THREE ways.
Wrong. They identified each data item with 133 characteristics. It's like saying "Fred" is a data item and then further identifying "Fred" as a male. Does that change "Fred"? Is it a manipulation of "Fred"? Of course, the answer is "No." However, what it does do is allow the researcher to account for any added risks that a male Fred might have when comparing him to a female. If you factor out the added risk Fred has just for being a male, then any difference left has to be related to another factor.
I'm afraid it is you who are wrong. Those 133 adjustments weren't just identifiers, but instructions for how to 'risk adjust' for those identifiers. It's not just that Fred is male, but how that compares to Sally. So Fred gets a certain value for being male, and Sally gets a different 'adjustment' for being female.
This is how they make up for differences in patient load, experience, type of hospital, qualification of surgeon, etc. The 'adjustments' aren't just to identify, say, the difference between a BSN RN with 6 months' experience in a county hospital with 7 patients and a BSN RN with 6 years' experience in a teaching hospital with 4 patients. The adjustments were designed to STANDARDIZE this disparity by 'risk adjusting' each.
My point is the subjective nature of this, times 133.
"They do, however, point out that they used the same approach as the one used by "Silber and colleagues" and then cites the references."
This may indeed point to a standardized method of 'handicapping' each individual criterion, but I still maintain that once you assign relative values 133 times over, you lose any perspective of scale. Each criterion, even if subject to only minute error (and the subjective nature of this evaluation could lead to huge error), nevertheless compounds into the risk of SUBSTANTIAL cumulative error after 133 adjustments are made. The mistake is potentially on the scale of ORDERS OF MAGNITUDE.
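To put a number on that worry, assume each adjustment carries even a small bias in the same direction and compound it in Python - the 2% figure is mine, purely illustrative:

# If each of 133 adjustments were off by just 2% in the same direction,
# the compounded distortion is anything but small. Illustrative only;
# nothing says the study's errors were this large or this correlated.
eps = 0.02
print((1 + eps) ** 133)  # ~13.9x cumulative distortion in the worst case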
Key Criteria Conjecture
Simply put, they did not examine real-life 'failures to rescue'. They extrapolated, using ICD-9 codes, what COULD HAVE BEEN failures to rescue.
They did no such thing! They determined that information by crossing hospital records with PA State death records!
As for the ICD-9 data, they used those definitions to identify the difference between a complication that resulted from the patient's stay versus pre-existing co-morbidities.
You are confusing the data. There were TWO criteria. The first was 'death within 30 days', which did indeed use death records. I stated in my first critique the concerns I had with comparing one database (of 250,000 hospital records) against another (death records) and the potential for error in that.
The second criteria was 'failure to rescue'. In this, the authors looked at ICD-9 codes. If there was a change, and if, IN THEIR EXPERTISE, the change was not related to co-morbidities, then that change equated, for the purposes of this study, to a 'failure to rescue'. The 'failure to rescue' designation was made from ICD-9 codes AS WELL AS the designation of co-morbidities.
This was my point. For this purpose, the ICD-9 codes are the 'raw data'. That raw data was then subject to being segregated (along the lines of co-morbidities vs failure to rescues) by the authors, IN THEIR EXPERTISE.
You just cannot allow the authors of any statistical model to decide which raw data is included in the final results and ever hope to have an unbiased final product! This is Stat 101! The very act of controlling the raw data produces the result of controlling the final product. It turns this whole study into a subjective opinion. And it makes that opinion no more statistically valid than my very own opinion on the subject.
They fully admit that this is the point where data is excluded 'in their expertise'. So, it makes full sense at this point to ask who is sponsoring the study.
Taken completely out of context. The only data excluded was the 4.3% of respondents who checked "other" for their highest level of education (i.e., it couldn't be classified as Diploma, ASN, BSN, Masters, or PhD). And it wasn't just arbitrarily tossed. They included the data as part of the Diploma/ASN data set to see if it altered the outcome (which it didn't). Then they re-ran the calculations with the 4.3% added to the BSN data set to see if it changed the outcome (which it didn't). So, since it made no discernible difference in either outcome, they excluded the data.
Huh? At this point I was referring back to the subjective discrimination of raw data with respect to the ICD-9 codes. Your answer is not on point at all to my argument.
...AT THIS STAGE, since this stage is totally, by their own admission, subject to their opinions, and the results are totally dependent upon that subjectivity.
Since they didn't just "decide" in some unsubstantiated manner as you seem to think, your conclusion that the study must be biased just falls apart.
It doesn't matter how 'substantiated' the manner of their decisions is. They segregated raw data out before compiling that data. That is a statistical NO-NO. These results are not based on the whole picture. No, their results are based only on the part of the picture that the authors CHOSE to present.
Risk Adjustment; Logistic Regression Models; Direct Standardization Models; Alternative Correlations
All of your comments in these sections turned on the mistaken idea that they changed the data 133 times. Since they didn't, all of these arguments are nonsensical.
They did risk adjust this data. I didn't say they CHANGED the data. But they DID assign each particular criteria a varying degree of importance. Re-read the details of this report. This wasn't about identifying 133 criteria but about 'adjusting' for them. It's not that Fred is male, but how does that compare to Sally?
And you conveniently neglected to discuss the use of a logistic regression model to make THEORETICAL MODELS regarding this data, when they had 168 hospitals in their study and a decent idea of the BSN/RN ratio in each. Why not make direct comparisons between equivalent hospitals with different BSN ratios?
And their conclusion, BASED ON A THEORETICAL MODEL, is that "complications would be 19% less in hospitals where 60% of nurses had at least a bachelor's degree than in hospitals where only 20% of the nurses did."
But you know what? The VA and Military Hospital systems were MOST likely to have that 60% BSN rate and the small hospitals were MOST likely to have representative samples of 20% of BSN. So, a perfect REAL LIFE control for this hypothesis existed. Oh wait, those hospitals WERE PURPOSELY EXCLUDED FROM THE STUDY. So, the theoretical gets to remain theoretical. Indeed.
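For the record, here is how a model coefficient becomes that '19% less' headline - pure exponentiation of an odds ratio, never an observed comparison. A quick Python check, where the ~0.95 odds ratio per 10-point step is simply the figure implied by the 20%-vs-60% quote above:

# The 40-point gap between 20% and 60% BSN is four 10-point steps; the
# model extrapolates by compounding the per-step odds ratio. The 0.95
# is my back-calculation from the quoted claim, not a published figure.
or_per_10pts = 0.95
steps = (60 - 20) / 10
print(f"{1 - or_per_10pts ** steps:.1%} lower odds")  # ~18.5% -> 'about 19%'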
Study Conclusions.
"So the authors did no such thing as to dismiss education as a factor. In fact, they specifically looked at it and ran various models in an attempt to see how it affected patient outcome in relation to educational level and staff ratios. So it was not “factored out.” It simply made no difference to the outcome of a patient’s stay when FACTORED IN with education level and/or staff ratios. There is no “conundrum” in their statements. Period."
My point remains. They specifically factored it in. It didn't make a difference in the results. That is not the same as factoring it out.
In fact, experience IS education.
Rofl. So why go to school at all. Just show up on the job site and in a few years, you'll have an education.
Your last argument here I'll take first. Another straw man argument. I never maintained that education isn't important. I'm offering a critique of a study that states that the DIFFERENCE between education levels can be examined without experience being a factor.
I disagree that such is possible. You state they factored it in. And indeed, so do they. My logical question: how? Experience is INDEED a form of education. So how do you 'factor in' levels of experience? Seems to me that this concept is of GRANDER scope than the original study.
Experience and education are intertwined - in ANY occupation. To say they "factored in" for either does disservice to the other. It is the equivalent of the old 'nature vs. nurture' discussion. And to say this without some SERIOUS verification of methods is laughable. That's a study, in itself, that would DWARF the study under consideration.
"Regarding your comments on alternative educational pathways, ..."
I am correct.
Wow. I didn't realize that debate was so easy, else I would have simply declared, "I am correct" at the beginning and saved everyone the trouble of reading through this post.
You more or less validated my argument. I didn't see a reason to respond further.
Conclusion:
And my point is that the limitations of their study make it useless as anything but propaganda.
You are certainly entitled to your opinion. My goal was to provide a few points to consider regarding your "critique" for the rest of the readers. Let 'em arrive at their own opinion.
And the authors of this study -- and their sponsors -- are certainly entitled to THEIR own opinions. And that is EXACTLY AND ONLY what this is. This study is too fundamentally flawed to have any scientific and/or statistical validity.
~faith,
Timothy.