It’s not easy to get depression-detecting AI through the FDA
For the past seven years, the California-based startup Kintsugi has been developing AI designed to detect signs of depression and anxiety from a person’s speech. But after failing to secure FDA clearance in time, the company is shutting down and releasing most of its technology as open-source. Some elements may even find a second life beyond healthcare, like detecting deepfake audio.
Mental health assessments still largely rely on patient questionnaires and clinical interviews, rather than the lab tests or scans common in physical medicine. Instead of focusing on what someone is saying, Kintsugi’s software analyzes how it is being said. The idea isn’t new — speech patterns like pauses, sentence structure, or speed are known indicators of various mental health issues — but Kintsugi says its AI can pick up subtle shifts that may be less obvious to human observers, though it has not publicly detailed exactly which features drive its models’ predictions. In peer-reviewed research, the company reported results broadly in line with established self-report screening tools for depression using short speech samples.
The company pitched the technology as a complement — or potential alternative — to self-reported screening tools like the Patient Health Questionnaire-9, or PHQ-9, a staple of primary care and psychiatry. These tools are supposed to be used alongside formal clinical assessment, and although they are widely validated, screening rates can be low, they depend on patients accurately describing symptoms, and they may not capture the full set of symptoms associated with mental health disorders. Kintsugi argued its voice-based model could provide a more objective signal, expand screening to more patients, and be deployed at scale across health systems, insurers, and employer programs. Doing so, however, would require FDA clearance.
Kintsugi had been seeking FDA clearance through the agency’s “De Novo” pathway, a route meant for novel, low-risk medical devices without an existing equivalent on the market. While intended to streamline approval for new kinds of products, it is still a process that can require years of data collection and regulatory review. Kintsugi’s founder and CEO Grace Chang told The Verge that much of that time was spent teaching the regulator about AI. The framework also fits AI poorly; much of it is designed with more traditional devices in mind — think hip implants, surgical tools, pacemakers — whose design remains largely fixed once approved. For AI systems, that can mean locking a model that would otherwise continue to be optimized and updated over time.
Despite the Trump administration’s hard push to cut red tape and get AI products into the real world as soon as possible, Chang said regulatory experts tell her that “there’s nothing that helps them do that except loud yelling from the top.” The approval process was further slowed by federal government shutdowns. The startup ran out of funding waiting for its final submission.
Efforts to raise additional funds faltered as the company’s runway shortened. Rather than accept “predatory” short-term offers to meet payroll — Chang said one proposal offered around $50,000 a week in exchange for $1 million in equity — the team decided to open-source most of its technology so others might continue the work. Investors were not happy.
Open-sourcing a mental health screening model also raises concerns about misuse. Tools designed to flag signs of depression or anxiety could, in theory, be deployed outside clinical settings, such as by employers or insurers, without the safeguards typically required in healthcare. Obviously that shouldn’t happen, but once released publicly there is little to prevent the technology from being used in ways its creators did not intend.
There are other complications, too. Nicholas Cummins, a senior lecturer in speech analysis and responsible AI in health at King’s College London, told The Verge that open-source releases often lack the detailed “paper trail” regulators expect, including a clear record of how a model was trained, validated, and tested for safety. Without that, he said, bringing a product built on the technology through FDA approval could prove difficult.
More likely, Cummins suggested, companies would treat the model as a starting point and layer their own data and validation processes on top. Even then, he cautioned, voice-based systems remain imperfect and carry a “reasonable” risk of errors, especially for conditions like depression, which manifest differently across individuals, languages, and cultural contexts and depend heavily on the diversity and structure of the speech data used in training.
Chang did not dismiss concerns about potential misuse, but said “it’s less of a concern in practice than it might appear in theory.” The organizations with the greatest incentives to abuse the technology, she argued, are also those that “face the highest barriers to actually deploying it.” In Chang’s view, “the more realistic risk is underuse, not misuse.”
While Kintsugi’s mental health screening technology has been open-sourced, Chang said not all of the company’s technology has been released publicly. In part, she said, that is due to security concerns; chief among the withheld tools is technology that can detect synthetic or manipulated voices.
Chang said the capability emerged when the team experimented with AI-generated speech to strengthen its mental health models. The synthetic audio lacked the vocal signals the model was trained to recognize, revealing that the model could be used to distinguish between human and AI-generated voices. Detecting synthetic audio is a growing challenge given the proliferation of AI slop and fraudulent deepfakes, and one that has yet to be reliably solved. It is also a potentially lucrative opportunity — and, conveniently for Kintsugi, an area that is not subject to FDA oversight.
Chang declined to speculate on her next move or whether Kintsugi’s security-focused technology might resurface, but she said she hopes someone else will build on the company’s work and carry it through the final stages of the FDA process. Without broader changes, Kintsugi’s shutdown is unlikely to be the last example of startup timelines clashing with medical regulation, and Chang said she hopes that reality doesn’t deter other founders from trying.