Archive: January 24, 2023



2101 a Telehealth Odyssey

Tuesday,  01/24/23  01:06 AM

Just another day in the life of S#a, she thought.  Coffee, shower, clothes (what to wear?), grab B1u, out the door into her pod, and blast to work.  Every day there were problems, but today there was a big new one.  Usually the hard part was data, finding the needles in the haystacks.  But this time the harder part was the people.

The coffee was kicking in.

“B1u, what’s happening?  Tell me everything.”

“Good morning S#a”, B1u said, using that fake Australian accent.  “Today is Monday, Feb 21, 2101.  It’s going to be bright and sunny and freezing cold.  I think that sweater was a wise choice.”  B1u always complimented her clothes but it was nice to hear anyway.

“And the Lunacy continues!”  B1u used puns wherever possible.

“Yeah I didn’t think it would be a self-fixing problem”, she admitted.  “What’s the latest?”

“Another 200 people have been stricken.  No warning, no clues.  Just a bunch of healthy Luna colonists one day, and a bunch of sick people the next.”  B1u let its voice get deeper, trying to conjure up emotion.

S#a was actually relieved: nobody had expired yet.  But at this rate it was only a matter of time.

“Did we get new data?  Please scan my comms.  Look for LIDS and mark anything about it ‘urgent’.”  S#a knew B1u would have done so already, but by asking she allowed it to feel superior.

“Well of course I did that already.”  B1u tried to sound hurt, but it ended up sounding merely comical.  “I looked for anything with ‘Lunar’, ‘Immune’, ‘Deficiency’, or ‘Syndrome’.  There are 28 new matches, 5 have data...”, B1u paused for imaginary breath, “… of those, 3 are telehealth session blocks.  B3n was copied and has sent a comm.  Would you like to hear it?”

S#a suppressed an irritated “of course”, and said “yes please B1u” as sweetly as she could.  Take that, you aEye.

“B3n’s reply is just: ‘CI now!’”

S#a pondered for a few clicks.  B3n was right.  Their work with CI wasn’t ready for official certification but this was an emergency.  So now the challenge was not “figure out the data”, it was “figure out the people”.

“B1u?  I need your help.  I need to know everything about the Sacher Lab disasters.  Stat!”

***

S#a was a data scientist at the Global Institutes of Health.  Her day job was compiling and analyzing health data to assist policy makers.  But she moonlighted as a disease detective, working with her team to understand, diagnose, treat, and prevent diseases wherever they might surface.  It was a never ending task; no sooner was one infectious agent or internal syndrome identified and treated than another showed up.  Humans were complicated machines and they exhibited complicated symptoms, and they were increasingly living in complicated environments outside of their design parameters.

***

S#a’s pod popped into the landing zone of GIH headquarters with a soft ‘fwoop’.  “Yay, I made it”, thought S#a, and B1u chimed in with synthetic applause.  She checked herself in the reflection of the windows – argh, red hair sticking in all directions, as usual! – and walked into the lobby.

Striding down the hallway to the engineering area, she noticed more than a few stares.  Was it her hair?  B1u’s LEDs?  Or the fact that they knew, and wondered what she was going to do about it.  Knew she wanted to use CI.  Knew it could help, and could damage GIH’s reputation.  And knew C@l and the senior team would be skeptical.

Her workspace was a 10’ x 10’ cube with low walls that extended in transparent fields up to the ceiling.  A soft bluish light beamed down from above.  She sat down at the round table in the center, and plopped B1u in one of the spherical depressions on the table’s surface.  Red chair for a red day, she thought, for a red girl.  Onward.

“Okay I’ve found a bunch of data about Sacher”, B1u said.  “Much of it is public, but some of it is still restricted and I have to confess there might be more you can’t access at all.” 

Always my fault when you can’t do something, S#a thought, but access to sensitive data was only granted to humans.  “Great”, she said more brightly than she felt, “lay it on me”.

“Back in the good old 2050s, a lot of companies started to use AIs for concocting therapies as well as making diagnoses”, B1u began.  “A lot.  There was serious reimbursement which drove serious research and a lot of product development.  And a lot of pretty dramatic clinical results were reported.  A lot.”  B1u’s fake Australian was pretty cute, S#a reflected, making “A” sound like it had about five ‘y’s at the end, and giving “lot” an unwritten ‘w’.

“Good so far”, prompted S#a.

“Yeah, good so far”, said B1u, milking the suspense.  “But only so far.  As you know, AIs are tuned to optimize for outcomes.”  B1u paused a beat for suspense.  “In the beginning the tuning was only to get the best outcomes for patients.  But then people figured out they could also tune for other outcomes, like maximizing billing.  A small lab called Sacher developed some AI on its own, starting with a certified model but then modifying it with new data to optimize for new outcomes.  So far so good.”

“Yeah, so far”, said S#a with her best imitation of B1u’s imitation Australian.  “Lab developed AIs were a pretty common thing, and as long as they had appropriate validation protocols, were good science and good medicine.  So what happened?”

“Complexity happened”, said B1u.  “The AIs were good, but the algorithms were impossible to understand.  The results could be validated, but there was no way to check the optimizations.  Sacher decided to optimize for profit, and not patient outcomes.  Nobody could tell, but it made a difference.”

“Wheew”, fake-whistled S#a.

“Exactly.  It was subtle enough to fool the validators, but it affected results.  Sacher’s profitability increased, and they raised money to grow faster.  Did a few high-profile deals with pharmasites.  Attracted attention.”

Easy enough to happen, thought S#a.  She knew as well as anyone that AIs were only as good as their data.  Give the wrong optimization targets, and they could be corrupted.  It could be happening right now, too, except for the ban.  “Show me”, she asked.

B1u raised an opaque screen in the air over the table.  “Here ya go…”

A multiD spreadsheet appeared, which S#a scrolled and spun with brief gestures.  It took a few minutes, but she began to find the patterns.  She was used to data mining and pretty soon had the key bits isolated.  Sacher had deliberately retooled their diagnoses to overtreat patients, slowly at first, then faster.

“This is pretty low on the radar”, S#a marveled.  “How were they caught?”

B1u faked a laugh.  “The usual: someone blabbed.  Make a few people rich, make a lot of people jealous.  One of the engineers, a longtime veteran named J&n, spilled the beans.  The validators came back in, checked the results more carefully, and shut them down.  And once the lid was off the jar, a bunch of other examples were found at a lot of other places.  Lab developed AIs were banned.”

S#a leaned back and pondered.  As usual, going too fast was not the fastest way.  The whole industry of AI-based medicine sat paralyzed by human review.  Humans were expensive and they didn’t scale.  But they could be trusted.

***

“Hey, checking out the LIDS data?” called a voice from the doorway.  B3n was tall and lanky, with long black hair that hung around his head like a curtain.  He grabbed the blue chair across from S#a, plopping his aEye into the table.  He glanced at B1u’s screen, and did a double take.  “Hey that’s not LIDS… What is that?”  Realization dawned.  “… ah the Sacher stuff.  Ah, yes.  Ah, yes.”

“We have to get past this ban!”  S#a sighed.  “We have to get C@l and the senior team to agree.  This is the perfect chance to test CI, it’s our best shot at LIDS.  We have to get our swing at bat.”

“Well we’ve got it.”  B3n gave S#a a significant look.  “We are invited to a senior team meeting this afternoon, to give a LIDS update.  P*y thinks we should bring up CI as an approach.  S/he says C@l hasn’t objected.  Maybe they think they can nip this thing in the bud.  Or maybe they think it could help.  Anyway, we have just a few hours to get ready, and the fate of the whole Luna colony could be at stake.”

***

The senior team.  Dum dum dum, S#a thought.  Why didn’t she wear a suit?  Why didn’t she work out every day, or eat better?  Yeah, right.  And why didn’t her parents have her genetically modified at conception, she smirked to herself.  And strode into Conference Room Galen.

B3n was already there, leaning against one of the walls.  Ten pairs of eyes swiveled as she walked over to stand next to him.  “What’s happening?” she whispered.

“Nothing yet.  Did you run the tests?”

“Yep.”

B3n looked sidelong at S#a, and her sly smile told him all he needed to know.  “And…”

“Yep.”

C@l was one of those people you notice.  Tall, strong, and … severe, she had worked her way up on the business side, selling diagnostic tests to labs and later their companion drugs to clinics, and later still, managing teams selling everything to everyone.  Rumored to be interested in politics, her acceptance of an appointment to lead GIH was still a surprise.  Under her leadership it had become a force, working closely with FDA to decide which diagnostics and treatments were “safe and effective”, and which companies became successful as a result.  Having climbed so high, she was careful not to fall.  For GIH to be the gatekeeper, its reputation had to be preserved.

“Okay everyone let’s get started.”  C@l flipped up a big screen at one end of the hexagonal space delimited by translucent fields, and the lights at that end dimmed gracefully.  “Today we have a lot to cover, but let’s start with LIDS.  I think you all know B3n, and have probably met S#a from data science.  I’ve asked them to bring us up to speed.”

B3n and S#a had agreed that B3n would start with an overview.  He slouched to the front of the space, determined not to be nervous.  A few crisp panels with his narration gave the recent history; how LIDS had started with a few people complaining of flu-like symptoms, how the telehealth team had been unable to root-cause any bacterial or viral infection, how those affected were not getting better, and how the syndrome seemed to be contagious, affecting more and more of the Luna colony.

“Other than gathering data, what has been tried?” P*y asked.  S/he was a hybrid, round of form and dressed top fashion.  S/he had risen rapidly through the ranks of academia, renowned for skillful politics as well as strong science.

“Well of course telehealth is huge in all this”, B3n replied.  “Despite the three-second delay to Luna, we can effectively treat just about anything remotely.  As usual we’ve recorded all the sessions along with all the peripheral telemetry, and have been looking for correlations.”  B3n paused and scanned the room.  “So far nothing has turned up.  We’ve asked the local caretakers to isolate the patients and form a few groups, so we can try changes in diet and other possible treatments.  It’s too early for that to yield relevant data.”

As always P*y was the designated inquisitor.  “Okay, we’re following procedures, and we’re stumped.  So what else can we do?”  P*y paused and dramatically faced the room.  “What do you think we should do?”

“Um, well…”  B3n looked over at C@l.  “I think, that is we think, that is, S#a and I think, well, we think AI should be tried again.  For this case.  It’s a perfect test of our new CI theories, which …”

“Yes yes we all know about AI”, C@l interrupted.  “We all know.  And we all know what happened in the recent past, and why it was banned.  So what is this CI?  It would take more than just your hunch for me to let GIH out onto that limb.”  C@l swiveled to S#a.  “So … convince me.”

S#a walked to the screen.  Be cool, be calm, she told herself.  Stick to the facts, and let the data speak.  “Okay”, she said, “let’s review.”

“AI is able to take way more data than humans, find way better correlations in it, and make way better diagnoses than humans.  We know this.  The problem is not that AI is not good.  The problem is that AI is not transparent.  We don’t trust it, and so we banned it.  What if we had a way to trust it again?”

S#a stopped and surveyed the room, then gestured to bring up an animated panel that gave the history of diagnostic medicine over the 21st century.  The rise of genomics.  The dramatic advances in AI technology.  The adoption of telehealth, at first for emergencies, then for general medicine, and then as a way to capture data to support AI.  “We got to the point where most routine diagnoses were being made via telehealth, just to digitize the data.”  Her arms waved and the graphs danced.

“Like with anything, at first we didn’t trust it.  The AI was simply there to suggest diagnoses to the physician.  The people made the final decisions.  Kind of like how self-navigating pods started.”

S#a paused.  So far, so good, she thought.

“Of course we all know what happened next.”  A flick of her wrist brought up a new panel.  “Suggestions evolved into recommendations, which were strengthened by probabilities of outcomes.  And recommendations became the standard of care.”  S#a’s charts were simple and compelling.  (Thank you B1u, she secretly thought.  You’re the best chartmaker ever.)

“But the datasets which enabled these incredible improvements were getting bigger, the correlations were getting harder to understand, and the algorithms harder to verify.  There was a trade-off between effectiveness and transparency.  Humans could only validate results, they could not verify the algorithms.  And that led to trouble.”

S#a paused.  Here we go.

“Okay, let’s think about humans for a minute.”  S#a had practiced this part with B1u, and was it her imagination or was she slipping into that Australian accent?

“People have a wide variety of motivations.  Scientists do.  Physicians do.  Mostly they focus on their patients and the best outcomes.  But what if they don’t?  How do we keep that from happening?”

S#a had their attention for sure.  C@l was staring, curious.  P*y was gazing into the middle distance.  “It’s pretty simple.  We rely on other people.”

“We don’t trust any one person, but we trust groups.  The bigger the group, the more trustworthy.  This is how democracy works.  Economists study these groups and they have a name for this: Collective Intelligence, or CI.  Most of the work has been done on groups of humans, but it can apply to AIs too.”

“B3n and me and our team have been working on ways Collective Intelligence could enable AIs to watch each other.  We don’t have to trust any one AI.  Instead we trust a group of them, watching each other.”

Okay there it is, S#a thought, there’s the big idea.  But people don’t always believe a big idea.  I have to show them.

“I wanted to share some data from the Sacher situation.”  There was dead silence as one more wrist flick revealed the data she and B3n had been looking at that morning, nicely charted.  “As you can see, Sacher was doing good medicine.  They were diagnosing and treating patients effectively.  But that was not all they were doing.”

One more flick, one more panel.  “Here we see the profitability of treatment recommendations charted against the same data.  At a certain point, improvement in patient outcomes was sacrificed for profitability.  We all know this happened.  Now, what can be done?”

Before the room could react, another flick loaded another panel.

“Here are the Sacher data again, but this time, also charted with what other AIs have figured out from the results.  You can see the trust factor going down just as the patient outcomes have stopped improving and the profitability of the lab is picking up.  With CI we could have easily detected the Sacher situation.”

Wow, B3n thought, she did it.  It works.

“Here’s what we think.  What I think.  We should use AI to diagnose and treat LIDS.  We should use CI to monitor the AI.  We tell everyone what we’re doing and why.  And we let them check us as we check the CI as it checks the AI.  And we help a bunch of sick people get better.”

S#a stopped again.  What was everyone thinking now, that she was crazy?

P*y cleared hisr throat.  “Well I like it.  I like the thinking, and I like the data.  And I like doing something instead of thinking of reasons we can’t.”

There was a general rustling as the whole senior team took each other’s temperature.

C@l stood.  “Well I like it too!  Thanks B3n for your overview.  And thanks S#a for bringing this to us.  And thanks to your team for refining the idea and testing it.  Great work.”

“You know, it occurs to me, human progress is made this way.  One person has an idea, but they have to convince a group, and the group convinces others.  S#a has convinced us.  Now let’s be the ones to convince everyone else.”

She paused for effect.  “Maybe we can get AI back on track as the future of medicine.”

***

It has been quite a day, S#a thought, as she pushed open the door of the Space Bar.  It isn’t every day you can restart the future.  “So B1u, what’s happening?  Tell me everything.”

 

hello CUDA: a grid of blocks x threads - GPU series #4

Tuesday,  01/24/23  08:25 PM

Another day, another post about CUDA and GPU acceleration.  Now we're going to build on the detailed example from yesterday, in which we multi-threaded a simple example.  We'll extend this to run a parallel grid with multiple blocks of multiple threads.  (Series starts here, next post here)


Previously we saw that we could easily run many threads in parallel:

Up to 1,024!  But what if we want to run even more?

Turns out GPUs enable many blocks of threads to be run in parallel, like this:

Many (many!) blocks of threads can be invoked, and the GPU will run as many of them as possible in parallel.  (The exact number which will run depends on the GPU hardware.)
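In code, this two-level hierarchy shows up in the kernel launch configuration.  Here's a minimal sketch of the idea (the kernel name and body are illustrative, not from this series):

```cuda
// Each thread handles one array element, identified by its block and thread IDs.
__global__ void scale(float *data, int n)
{
    // Global thread index: which block we're in, times threads per block,
    // plus our position within the block.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)              // guard: the last block may be partly unused
        data[i] *= 2.0f;
}

// Host side: launch a grid of 'blocks' blocks, each of 'threads' threads:
//   scale<<<blocks, threads>>>(data, n);
```

With 1,024 threads per block, launching 10 blocks puts 10,240 threads in flight at once.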

Let's see what this looks like in code; here is hello4.cu:
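(The original listing didn't survive the archive, so here is a minimal reconstruction consistent with the description below.  The names domath, gpublocks, and gputhreads come from the text; the command-line handling, managed memory, and the actual arithmetic are assumptions.)

```cuda
#include <cstdio>
#include <cstdlib>

// Kernel: each thread processes a grid-stride slice of the array.
__global__ void domath(float *x, float *y, int n)
{
    // Global thread index across all blocks, and total thread count.
    int index  = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = blockDim.x * gridDim.x;
    for (int i = index; i < n; i += stride)
        y[i] = x[i] + y[i];
}

int main(int argc, char *argv[])
{
    int gpublocks  = (argc > 1) ? atoi(argv[1]) : 1;      // assumed CLI order
    int gputhreads = (argc > 2) ? atoi(argv[2]) : 1024;
    int arraysize  = (argc > 3) ? atoi(argv[3]) : 12345678;

    // If blocks are given as zero, compute enough to cover the array,
    // one thread per element: arraysize / gputhreads.
    if (gpublocks == 0)
        gpublocks = arraysize / gputhreads;

    float *x, *y;
    cudaMallocManaged(&x, arraysize * sizeof(float));
    cudaMallocManaged(&y, arraysize * sizeof(float));
    for (int i = 0; i < arraysize; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch gpublocks blocks of gputhreads threads each.
    domath<<<gpublocks, gputhreads>>>(x, y, arraysize);
    cudaDeviceSynchronize();

    printf("gpublocks %d gputhreads %d y[0] %f\n", gpublocks, gputhreads, y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Compile with nvcc as in the earlier posts, e.g. `nvcc hello4.cu -o hello4`.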

As before, the changes are highlighted.  We've added a new parameter gpublocks to specify the number of blocks.  If this is given as zero, we compute the blocks as arraysize / gputhreads.

We've specified gpublocks as the first parameter in the triple-angle brackets on the kernel invocation of domath().  Remember that the second parameter is the number of threads per block, so the total number of parallel threads is blocks × threads.

And we've changed the way the index and stride are computed inside the domath() function, so that the array is parcelled out to all the threads in all the blocks.  You'll note this makes use of several built-in variables provided by CUDA: threadIdx and blockDim, and now also blockIdx and gridDim.

So what will happen now?  Let's try running hello4:

Wow.  With 10 blocks of 1,024 threads (10,240 threads overall), the runtime goes down to 2.3s.  And if we compute the maximum number of blocks (by specifying 0 as the parameter), we get 12,056 blocks of 1,024 threads, for a runtime of 0.4s!  That's GPU parallelism in action, right there.

Furthermore, when we specify an additional order of magnitude to make the array size 123,456,789, we run 120,563 blocks of 1,024 threads, and the total runtime of that is 3.7s.  Way way better than CPU only (hello1) which was 50s!

In fact, something interesting about this run: the array allocation took most of the time; the actual computation required only 0.16s.  Which is a good segue to the next discussion, about memory, and we'll tackle that in the next installment.

