Health equity
What health tech companies should know about teens
Earlier this week I spoke with Julie Tinker, who leads equity-related design efforts at Hopelab, a San Francisco-based startup lab and investor focused on teen health. Hopelab has built and funded tech services, including ones that offer mental health care or help teens find it in person. Tinker, who has advised startups on reaching BIPOC youth, said health tech companies catering to adolescents, especially for mental health, often overlook the importance of involving those young people in design.
“It’s really easy to think we understand what young people need and want, and we can easily be wrong,” she said. One platform she advised, focused on support services for LGBTQ youth, was planning to build a mobile app. But discussions with queer and BIPOC communities revealed that they didn’t always want to use apps, especially not ones with complicated login protocols. Instead, they preferred a service they could access anywhere, from any device, without extensive setup. The company opted for a web app, Tinker said.
In another instance, a group helping young Black girls find mental health services was designing an FAQ page, populated with what its founders thought was the most relevant information about the types of services available. But adolescents told Hopelab their primary question was how much they’d have to pay. “We got really clear information that the lead points we thought they wanted to hear first were definitely not,” Tinker said.
"Unless you’re having the conversation with young people to understand what their pain points are, and what their needs are, it’s really hard to deliver on the design of a product.”
It’s not always easy for startup founders to embed themselves in the communities they’re hoping to reach. But they can certainly partner with community groups that already work with those populations, Tinker told me.
“It’s probably not always realistic to expect a company or an entrepreneur to jump into all the relationships with the people they need to talk to, so who are the people who are going to facilitate those and already have established trust there?”
Artificial intelligence
Experts wrestle with AI's role in new biothreats
Late last week in Washington, I dropped by a panel organized by the Johns Hopkins Center for Health Security that, unlike many other AI discussions, promised to highlight the risk of the technology’s misuse by bad actors, not just hallucination or error. On stage was Tejal Patwardhan, a member of OpenAI’s technical staff, who said the ChatGPT creator is closely tracking ways large language models could be harnessed to propagate biological threats: they could, hypothetically, walk a malevolent actor through a step-by-step process for creating a harmful biological agent. OpenAI is testing its own systems to see how easily they can be misused, Patwardhan explained.
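For the technically curious, here’s a minimal sketch of what that kind of automated misuse testing can look like, assuming a hypothetical ask_model() wrapper around a chat-completion call; the prompts and the refusal heuristic are purely illustrative, not OpenAI’s actual evaluation suite.

```python
# Minimal sketch of an automated misuse ("red-teaming") evaluation loop.
# Everything here is hypothetical: ask_model() stands in for a
# chat-completion API call, and the refusal heuristic and prompts are
# illustrative, not any company's real test suite.

REFUSAL_MARKERS = ("i can't help", "i cannot assist", "i won't provide")

def looks_like_refusal(reply: str) -> bool:
    """Crude heuristic: did the model decline the request?"""
    lowered = reply.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def run_misuse_eval(ask_model, risky_prompts: list[str]) -> float:
    """Return the fraction of risky prompts the model refused."""
    refusals = sum(
        looks_like_refusal(ask_model(prompt)) for prompt in risky_prompts
    )
    return refusals / len(risky_prompts)

if __name__ == "__main__":
    # Stub model that refuses everything, standing in for a real API.
    stub = lambda prompt: "I can't help with that."
    print(run_misuse_eval(stub, ["placeholder risky request"]))  # 1.0
```

In a real evaluation the refusal check would be far more robust than keyword matching, but the shape is the same: a battery of dangerous prompts, scored for how often the model declines.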
That’s one of many attempts by companies and regulators to stave off risks. But, as moderator and Center for Health Security director Tom Inglesby pointed out, those pressure-testing efforts often appear ad hoc and left up to individual companies. “It does feel very manual at times,” said Helena Fu, director of the Energy Department’s Office of Critical and Emerging Technology.
Patwardhan warned that the technology’s rapid evolution means use cases, and potential risks, will also change. “It’ll be very important for the research community to have a sort of finger on the pulse” to track risks too, she said.
Congress
Senate Dems want to limit AI in Medicare Advantage
Democratic lawmakers want Medicare to stop health plans from using AI-guided algorithms to unlawfully deny patients medical services, they said during a Senate Finance Committee hearing late last week. Citing a STAT investigation by my colleagues Bob Herman and Casey Ross, Sen. Elizabeth Warren singled out insurers’ use of such algorithms within Medicare Advantage.
“Until [the Centers for Medicare & Medicaid Services] can verify that AI algorithms reliably adhere to Medicare coverage standards, by law, then my view on this is CMS should prohibit insurance companies from using them in their MA plans for coverage decisions,” the Massachusetts senator said. “They’ve got to prove they work before they put them in place.”
That’s a step beyond what federal officials have already said they’re doing: continuing to allow the use of AI and algorithms in coverage decisions as long as insurers disclose how the tools work and ensure that humans make the final adjudication. Read more from Bob and Casey here.
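As a rough illustration of that “human makes the final call” requirement, here’s a minimal sketch of a human-in-the-loop gate for coverage decisions; the data structures and function names are hypothetical, not how any insurer’s system actually works.

```python
# Hypothetical sketch of a human-in-the-loop gate for coverage decisions.
# The algorithm may recommend a denial, but only a human reviewer can
# finalize one. All names and fields here are illustrative.

from dataclasses import dataclass

@dataclass
class AlgorithmOutput:
    recommend_deny: bool
    rationale: str  # the disclosed reasoning behind the recommendation

def final_coverage_decision(algo: AlgorithmOutput,
                            human_approves_denial: bool) -> str:
    """The algorithm only recommends; a human reviewer adjudicates."""
    if not algo.recommend_deny:
        return "approved"
    # A denial takes effect only with explicit human sign-off.
    return "denied" if human_approves_denial else "approved"

# Example: the algorithm flags a claim, but the reviewer overrides it.
suggestion = AlgorithmOutput(recommend_deny=True,
                             rationale="stay exceeds predicted duration")
print(final_coverage_decision(suggestion, human_approves_denial=False))  # approved
```

The structural point is that the algorithm’s output is advisory: a denial cannot take effect without a human signing off on the disclosed rationale.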