Holding Hands with AI (Artificial Intelligence)

Artificial intelligence has amazing potential for shaping our workplace. Our fragile love-hate relationship with AI will evolve, and we will surrender more autonomy. Until AI becomes like us, "only better," nurses will remain the patient's last line of defense. Someone still has to pay attention. For now, we're it.


Last night, my MSN homepage bannered a slide-show article by Peter Giffen titled "20 ways artificial intelligence is changing our lives." The slide-show opens with an adorable little robot looking submissive and inquisitive. He, she, or it looks safe, friendly, clever, and, well, almost lovable. I want one. Giffen asserts: "From chess-playing computers to self-driving cars, artificial intelligence involves machines learning from experience and performing tasks like people, only better" (Giffen, 2018, para. 1). In the arena of healthcare, he intones, "AI has the potential to make medicine more precise and personalized" (Giffen, 2018, para. 4). Or, more robotic. I wonder which it will be.

AI's presence has blossomed in my world over the past few years, but its learning curve has introduced new forms of danger. I research a trip to Australia. Within minutes, a bunch of ads for travel to Australia pop up in my FB newsfeed. Targeted advertising is cool, and a little creepy. Do I want to be spied on? Apple really wants my fingerprint to keep my phone safe. But, if I give it to Apple, my fingerprint becomes a new member of the AI community, available to anyone, or anything, clever enough to find and use it. Apple also suggested using facial recognition for security. Hey, why not? God knows, my face is already out there, except that a photograph can pass the recognition hurdle. A few numbers in my head might be safer.

My opinion, for what it's worth, is that privacy and autonomy are the commodities at stake in our courtship, or struggle, with AI. Whatever we give to AI comes out of our own pockets. When AI gets more, we have less. There are subtle levels of vulnerability in our surrender to AI, and our work environment is not immune.

About a year ago, we adopted mandatory bedside scanning of patient armbands and medications prior to administration. In theory, scanning the patient's ID bracelet and every medication before it's given removes the risk of errors. We may have gained some safety, but we slowed the process. Giving AI a big voice has introduced several arguments I'd rather do without. For example, I need to give a child 240 mg, or 7.5 mL, of children's Tylenol. When I scan the first 5-mL container, my KBMA (knowledge-based medication administration) system warns me that I've only got a partial dose. When I scan the next container, the KBMA hits me with a stop which I must justify. After scrolling through a drop-down menu, I choose the "physician's order" option to inform the KBMA that I won't be giving the full 320-mg dose that has been scanned. AI has the doctor's order too, and hopefully the next upgrade will simply ask, "Since the doctor ordered 240 mg and you scanned 320 mg, you're going to waste 80 mg, right?" That would allow me to confirm that we are all in agreement. Now, our AI simply knows I've got more than the doctor ordered. After our conversation, the KBMA signs off without knowing how much I'm planning to waste. And it still has no idea how much volume is going in the syringe or in the child's mouth.
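The arithmetic behind that missing question is simple. Here's a rough sketch of the confirmation step I'd like to see, using the Tylenol numbers from the example above (children's Tylenol runs 160 mg per 5 mL). The names are made up; this is an illustration, not anything our vendor actually runs:

```python
# Hypothetical sketch only -- not the KBMA's real logic, just the check I wish it made.
ORDERED_MG = 240          # physician's order
MG_PER_ML = 160 / 5       # children's Tylenol: 160 mg per 5 mL
CONTAINER_ML = 5          # one unit-dose cup

def confirm_scan(containers_scanned: int) -> str:
    scanned_mg = containers_scanned * CONTAINER_ML * MG_PER_ML
    if scanned_mg < ORDERED_MG:
        return f"Warning: partial dose. {scanned_mg:g} mg scanned of {ORDERED_MG} mg ordered."
    waste_mg = scanned_mg - ORDERED_MG
    give_ml = ORDERED_MG / MG_PER_ML
    return (f"The order is {ORDERED_MG} mg and you scanned {scanned_mg:g} mg. "
            f"You'll give {give_ml:g} mL and waste {waste_mg:g} mg, right?")

print(confirm_scan(1))   # partial-dose warning after the first 5-mL container
print(confirm_scan(2))   # asks me to confirm the 80 mg (2.5 mL) waste
```

One extra question like that would close the loop instead of leaving the waste unaccounted for.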

AI may confirm the patient, the dose, and the route to its satisfaction. But, after making the computer happy, nurses still have full control over how much is pulled up, where it goes, and how fast it gets there. The same vial of Decadron can be given IM, IV, or PO (despite the "for IM or IV only" warning on the label), and the patient will probably have a similar outcome despite a route error. But giving Bentyl IV (trust the warning) instead of IM will cause multiple adverse reactions, including pain, edema, thrombosis, thrombophlebitis, or reflex sympathetic dystrophy syndrome. A 0.3 mg dose of Epi IM will stop an allergic reaction, but giving the same dose IV will destroy even a young, healthy heart. Sometimes, I choose to override my KBMA for practical reasons. Our PO potassium order defaults to two 20 mEq tabs. Most patients can't swallow them, but I have to override AI to substitute four 10 mEq tabs. In life-threatening situations in the ER, we still routinely work off verbal orders and create a record later. Despite the appearance of AI control, the autonomy in medication administration is still largely ours. The responsibility is definitely ours.

Our wireless Vocera communication units carry a dose of AI of their own. They're getting better at voice recognition and taking more initiative in conversation: "Jason is not logged in. Would you like to leave a message?" But they're still learning. If you have any doubt, crank up two of them and set them next to each other on the counter. The artificial conversation is entertaining. Their lack of intelligence shows up by the second or third sentence. They're very polite though, and they apologize to each other a lot. "I'm sorry, I didn't understand. Could you repeat that message?" "I'm sorry. I didn't hear you. Did you say to call Martha Johnson?" When two consecutive sentences start with "I'm sorry," the AI meltdown is well underway.

One evolving pitfall of AI is the notion that simple clicks equal completed tasks. Recently, I had one patient waiting for transfer to another hospital when my shift ended at 11:30 p.m. I told my coworker assuming care that I'd called report to the receiving facility. My coworker asked me to put in a disposition note. I said that I couldn't do it yet because the patient was still in our department. "It would be falsifying the record for me to chart that she's gone when she may still be here two hours from now." The oddity of the encounter was that my coworker was clearly not impressed with my explanation. I can understand her confusion though.

New management instituted a plan where a receiving unit "pulls" the patient to the floor in their computer as soon as a room is assigned, creating a digital record that the patient has been moved. While this makes their times look great, the report has not been called or charted, transport may not come for fifteen to twenty minutes, and the patient is still physically in my room. In a glaring AI failure, our ER charting system and floor systems are separate. After a patient is "pulled," I can no longer see him in our ER system. I can only chart by adding an append note, and vitals and other pertinent data are displaced. As reverence for virtual reality evolves, there's a growing acceptance that clicking creates reality. If it's clicked, it's done, end of story. The real world can catch up later.

The greatest danger of AI in our environment is the ease with which wrong orders enter the system. Doctors can order multiple order sets and sweeping protocols with single clicks. The configuration of our dashboard gives them a margin for error about the size of a cursor between patients. AI is not yet equipped to ask whether it's sensible to order a psych consult on a two-week-old, or to question the fluid bolus ordered on the CHF patient. Instead of making medicine more precise and personal, AI is making it more cookie-cutter. Extremely diverse complaints are evaluated with the same "might be septic" battery of tests. The evolution of AI protocols has made doctors increasingly dispensable while passing more of the ongoing assessment, treatment, risk, and responsibility to bedside nurses.

Ironically, even when AI seems innocuous and safe, it may be the most dangerous. We recently got bedside scanners to print lab labels. They've been great. Within seconds of an order going into the computer, I can scan an armband, print a label for a specimen I've already drawn, and have it in the tube system in about a minute. After more than two decades without a labeling error, last week I had an "aha" moment. I realized that I'd come to trust the seemingly infallible new system and let my career-long practice of carefully checking the name and birth date slide. Scanning an armband is only infallible if the person who put the armband on got it right.

Giffen makes an interesting observation. In an autonomous car crash, there was a human on board: "The human 'safety driver' in the car, who was supposed to provide emergency backup in cases like this, was apparently looking down at the fateful moment" (Giffen, 2018, para. 9).

Our fragile love-hate relationship with AI will evolve, and we will surrender more autonomy. Until AI becomes like us, "only better," nurses will remain the safety drivers, the patient's last line of defense. Someone still has to pay attention. For now, we're it.

Giffen, P. (2018). 20 ways artificial intelligence is changing our lives. MSN. Retrieved November 3, 2018.

Daisy4RN

2,221 Posts

Specializes in Travel, Home Health, Med-Surg.

Good article!

It is scary how much autonomy and freedom we are willing to give up, or sell, to others. AI might be good in a few select circumstances, but the potential risks and negative results (like those you describe) far outweigh the good, in my opinion.

Google Is Building a City of the Future in Toronto. Would Anyone Want to Live There? - POLITICO Magazine

Specializes in PICU, Pediatrics, Trauma.

Excellent article! Scary, for sure, regarding all the mistakes that can happen using computer systems. The examples you gave explain it well, and I've actually experienced a few of them.

One issue in particular affected me quite adversely.

It can be confusing to explain, so bear with me...

I was assisting another nurse in an emergency situation and pulled a morphine dose from the Pyxis. I gave it to her and she administered it. Some weeks later, I was called into the manager's office to explain how I had pulled a morphine dose but showed no documentation of giving it, five minutes after it was documented as given by a different nurse.

Turns out, the flipping clocks on the Pyxis and the bedside charting computers were not in sync. The problem was that I couldn't explain it at the time and was dumbfounded. I looked suspicious.

It wasn't until later, when I was telling a colleague about this, that I learned she'd had a similar experience at another facility. She checked the clocks on that Pyxis and the bedside computers and found they were approximately five minutes apart.

Not a life-threatening situation, but it could have ruined my reputation. I no longer worked at that facility when this was figured out, so it went unexplained. It was a travel contract, and I was "let go" at the time. WRONGLY, but nonetheless, it still bothers me whenever I think about it.

RobbiRN, RN

8 Articles; 205 Posts

Specializes in ER.
Excellent article! Scary, for sure, regarding all the mistakes that can happen using computer systems. The examples you gave explain it well, and I've actually experienced a few of them.

One issue in particular affected me quite adversely.

It can be confusing to explain, so bear with me...

I was assisting another nurse in an emergency situation and pulled a morphine dose from the Pyxis. I gave it to her and she administered it. Some weeks later, I was called into the manager's office to explain how I had pulled a morphine dose but showed no documentation of giving it, five minutes after it was documented as given by a different nurse.

Turns out, the flipping clocks on the Pyxis and the bedside charting computers were not in sync. The problem was that I couldn't explain it at the time and was dumbfounded. I looked suspicious.

It wasn't until later, when I was telling a colleague about this, that I learned she'd had a similar experience at another facility. She checked the clocks on that Pyxis and the bedside computers and found they were approximately five minutes apart.

Not a life-threatening situation, but it could have ruined my reputation. I no longer worked at that facility when this was figured out, so it went unexplained. It was a travel contract, and I was "let go" at the time. WRONGLY, but nonetheless, it still bothers me whenever I think about it.

Ouch. Our Pyxis clocks often run a few minutes out of sync with our charting system. The sad part of your story is that someone was getting paid to come along later and make an issue out of your situation. The morphine was pulled, it was given, and it was charted accurately according to the equipment supplied by your facility.
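For anyone who hasn't run into it, here's a rough illustration of how a five-minute clock skew rewrites the story. The times and the direction of the skew are invented for the example, not taken from your chart:

```python
# Hypothetical illustration: one real sequence of events, two unsynchronized clocks.
from datetime import datetime, timedelta

PYXIS_SKEW = timedelta(minutes=5)   # assume the Pyxis clock runs five minutes fast

# What actually happened, in real time:
real_pull = datetime(2018, 11, 3, 21, 0)        # dose pulled from the Pyxis
real_given = real_pull + timedelta(minutes=1)   # administered a minute later

# What each system writes down:
pyxis_says_pulled = real_pull + PYXIS_SKEW      # 21:05 on the Pyxis record
chart_says_given = real_given                   # 21:01 on the bedside computer

print(f"Chart: given at  {chart_says_given:%H:%M}")
print(f"Pyxis: pulled at {pyxis_says_pulled:%H:%M}")
# On paper, the dose now appears to have been pulled four minutes AFTER it was
# already documented as given -- exactly the kind of "discrepancy" that lands
# someone in the manager's office.
```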

AI is the new spy. It's cheaper, remote, supposedly accurate, and much easier than each level of management knowing their employees and actually paying attention in real time.

JKL33

6,777 Posts

I was assisting another nurse in an emergency situation and pulled a morphine dose from the Pyxis. I gave it to her and she administered it. Some weeks later, I was called into the manager's office to explain how I had pulled a morphine dose but showed no documentation of giving it, five minutes after it was documented as given by a different nurse.

Turns out, the flipping clocks on the Pyxis and the bedside charting computers were not in sync. The problem was that I couldn't explain it at the time and was dumbfounded. I looked suspicious.

Evil. The combination of "bad intentions" and stupidity is quite dangerous.

The clocks should've had nothing to do with any of this. Did no one ever ask where the nurse who administered the morphine got it? Was that person also (nefariously) "suspected" of having lied about giving it? Under which patient's name was it pulled? And, uh....did that patient have a morphine administration documented? Even John Doe situations are reconciled more easily than all of this nonsense.

These are rhetorical questions and off-topic. Just thought I would take the opportunity to say that people who participate in raising nonsensical and fake suspicions are amongst the lowest of the low in my book.
