Published Date: Dec 2, 2023
Exclusive
AI
Technology
Healthcare
Meet Healthcare Workers’ Next Side Hustle: Making AI Smarter
Kosice seemed the most unlikely place for it. Were our advisors wrong?
When I landed at Kosice International Airport in Slovakia, it was as if nothing much had changed since the communist era, at least at first glance. I was in search of a home for a European AI tech center, and at that moment, Kosice seemed the most unlikely place for it. Were our advisors wrong? As it turned out, Kosice was teeming with tech activity from a curious class of higher-ed healthcare talent flush with foreign investment capital.
Kosice is emblematic of the way tech moves.
Slovakia’s second-largest metropolis is a beautiful city a few miles from the Ukrainian border, which partially explains its abundance of strong tech talent. Even amid a regional war, AI endures. The city is brimming with clever innovators keen on using technology to make healthcare work better – particularly artificial intelligence. But we needed health talent, not more AI engineers.
Kosice is not unlike many global cities outside Silicon Valley that are attracting skilled healthcare workers into a fledgling side-hustle economy.
AI needs this skilled workforce to help navigate the last mile of the industry’s biggest AI problem: the alignment problem.
Alignment is the process of training AI to improve its output, and it isn’t going to happen organically from the internet corpus and ChatGPT – despite all the hype.
The heavy lifting of alignment is going to come from the smartest among the healthcare workforce: those who examine the model’s output, correct it, and add it back to private datasets specializing in highly trained medical disciplines for others to use. We, like many others, long to move AI toward its full potential.
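To make that loop concrete, here is a minimal sketch of what “examine, correct, and add back” can look like. Everything here is a hypothetical illustration – the file names, the record fields, and the console-based review are stand-ins for whatever review tooling a real pipeline would use.

```python
# A minimal sketch of the clinician-in-the-loop workflow described above.
# File names and record fields ("prompt", "model_output", "label") are
# illustrative assumptions, not a real pipeline.
import json

def review_model_outputs(in_path: str, out_path: str) -> None:
    """Walk a queue of model outputs, let a clinician correct each one,
    and append the corrected records to a private fine-tuning dataset."""
    with open(in_path) as src, open(out_path, "a") as dst:
        for line in src:
            record = json.loads(line)  # e.g. {"prompt": ..., "model_output": ...}
            print("PROMPT:", record["prompt"])
            print("MODEL :", record["model_output"])
            corrected = input("Clinician correction (blank = accept): ").strip()
            # The clinician's judgment, not the raw output, becomes the label.
            record["label"] = corrected or record["model_output"]
            dst.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    review_model_outputs("review_queue.jsonl", "corrections.jsonl")
```

The design point is simple: what lands in the private dataset is the expert’s corrected answer, not whatever the model happened to say.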
Are we headed for a brain drain?
For most, the pay is good for AI data trainers, and the workplace environment is even better. It’s no wonder this side hustle – versus picking up more shift hours, the rigors of travel care work, or dog sitting – is such a promising career prospect. If you’re one of those workforce worrywarts, don’t be. Even as demand for the best talent runs high, it’s unlikely that AI-training side hustles will have a meaningful impact on immediate workforce counts.
The brightest in healthcare will contribute immensely to humankind by fine-tuning the most promising technology ever invented.
Fine-tuning healthcare data is going to make healthcare much smarter. Training AI is a big effort. For example, we’ve logged about 5 million hours harnessing the skills of 2,500 behavioral health professionals for our solution, and we’re only just beginning. We estimate it will take thousands of professionals – many taking advantage of a new side-hustle income – to build a reliable corpus capable of diagnostic and predictive capability in even the simplest health applications (behavioral health, in our case).
AI is no substitute for those trained to ask the right question
As the story goes, five doctors ponder how to arrive at a standard for a better diagnosis. One by one, each suggests a question they consider an important patient query. They add the scores, divide by five, and a new index is born. Alas! How did they arrive at this new index? They made it up. But were these the right questions? That is the alignment problem.
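To see how little machinery is involved, here is the whole “index” as a few lines of Python. The questions and the scoring scale are invented for illustration, which is exactly the point: the arithmetic is trivial, and nothing in it validates the questions.

```python
# The five-question "index" from the story, reduced to code.
# The questions and the 0-4 scale are hypothetical illustrations.
QUESTIONS = [
    "Trouble sleeping?",
    "Loss of appetite?",
    "Low energy?",
    "Feeling hopeless?",
    "Withdrawn from others?",
]

def new_index(scores: list[int]) -> float:
    """Add the five item scores and divide by five: a new index is born."""
    assert len(scores) == len(QUESTIONS), "one score per question"
    return sum(scores) / len(scores)

print(new_index([3, 2, 4, 1, 2]))  # 2.4 -- but were these the right questions?
```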
There are countless indexes in healthcare, each of them made up. Few can be broadly replicated, and hardly a day goes by without a research paper, in elaborate detail, making something up. Case in point: last week I watched a researcher argue, in a rigorous debate, that his social determinants of health data were so robust that a near-perfect effect rate of 0.88 was achievable when assessing whether a patient was likely to abuse illicit substances. Impressive sounding, indeed. But the fine print showed the study excluded demographic data. Why? His fear that demographic attributes might contribute to equity bias.
In May 2016, the investigative newsroom ProPublica published the headline, “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks.” The model, they said, used race and social determinants as attributes to predict the likelihood of an offender reoffending. The use of race, they concluded at the time, could result in unfair sentences being handed out.
But by the end of June, after considerable investigation, ProPublica conceded in a notable retraction that the predictive model was, in fact, accurate and that demographic factors had measurable predictive value for reoffense. The model had been shown to ask the right questions, and omitting the protected attribute made it impossible not only to measure bias but also to mitigate it.
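That last point is easy to demonstrate. The sketch below uses fabricated toy records, not the COMPAS data: the disparity ProPublica measured is a per-group false positive rate, and once the group column is dropped, that comparison cannot be computed at all.

```python
# Why bias cannot be measured without the protected attribute:
# group-wise error rates are undefined without the group.
# These records are fabricated toy data, not the COMPAS dataset.
records = [
    # (group, predicted_reoffend, actually_reoffended)
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("B", True,  True),  ("B", False, False), ("B", False, True),
]

def false_positive_rate(group: str) -> float:
    """Share of a group's non-reoffenders the model flagged anyway."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for g in ("A", "B"):
    print(g, false_positive_rate(g))  # A: 0.5, B: 0.0 in this toy data
```

Strip the group column, and both the measurement of this disparity and any attempt to correct for it disappear with it.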
We have a long way to go with the ethical use of data in healthcare. AI augmentation is the output, but the solution lies with whoever is asking the right questions. Will healthcare workers contribute their skills in a new side hustle by asking the right questions we need for common-sense healthcare decisioning?
We’ll be watching.
And in the meantime, we'll be talking edTalk. See you next time. Be sure to subscribe.