mental health
AI therapy app pulled in UK
Slingshot AI, maker of the therapy chatbot Ash, told users in the United Kingdom that the app will no longer be available, citing concerns it runs afoul of regulations.
There's healthy debate in the United States about which mental health apps and AI products are subject to Food and Drug Administration regulation. But in the U.K., there's little wiggle room for products that claim they aren't devices because they're intended for wellness purposes. I explain more in my story.
Read more here
fundraising
OpenEvidence raises big, looks beyond ads
If it feels like we were just talking about OpenEvidence — we were. Just a week ago, the maker of a popular chatbot for clinical evidence promised "medical superintelligence" at its first-ever JPM presentation. Now it's announced $250 million in new funding, bringing its total announced over the last year to $735 million. The company is valued at $12 billion.
One begins to wonder how an advertising business model justifies a number that big. CEO Daniel Nadler told me the valuation is indeed based on a more ambitious bet: an AI that can "solve" medical cases "would have nearly infinite economic value to human civilization."
Read more here
radiology
FDA clears AI tool to detect 14 conditions
Aidoc, a well-funded radiology AI company, on Wednesday announced it received FDA clearance for a tool that can triage 14 critical findings in a single abdominal CT scan: liver injury, spleen injury, bowel obstruction, appendicitis, and more.
As my colleague Katie Palmer reports, the new AI hints at a coming future: more developers are experimenting with large vision models that can identify many different conditions from a single X-ray, MRI, or CT scan. That may require new regulatory approaches from the FDA, which currently evaluates indications one at a time.
While a single product that can scan for many conditions may simplify workflows for radiologists juggling a growing arsenal of tools, it could also amplify risks, including the chance that an AI falsely flags signs of a condition a patient doesn't have.
Read more here