Psychiatrists are looking to sophisticated computational tools that may be able to disentangle the intricacies of mental illness and improve treatment decisions
Brain science draws legions of eager students to the field and countless millions in dollars, euros and renminbi to fund research. Yet for decades these endeavors have not yielded major improvements in the treatment of patients with psychiatric disorders.
The languid pace of translating research into therapies stems from the inherent difficulties in understanding mental illness. “Psychiatry deals with brains interacting with the world and with other brains, so we’re not just considering a brain’s function but its function in complex situations,” says Quentin Huys of the Swiss Federal Institute of Technology (E.T.H. Zurich) and the University of Zurich, lead author of a review of the emerging field of computational psychiatry, published this month in Nature Neuroscience. Computational psychiatry sets forth the ambitious goal of using sophisticated numerical tools to understand and treat mental illness. [Scientific American is part of Springer Nature.]
Psychiatry currently defines disorders using lists of symptoms. Researchers have been devoting enormous energies to finding biological markers that could make diagnosis more objective, with only halting success. Part of the problem is that there is usually no one-to-one correspondence between biological causes and disorders defined by their symptoms, such as those in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). A specific disorder, like depression or schizophrenia, may result from a range of different underlying causes (biological or otherwise). On the other hand, the same cause might ultimately lead to different disorders in different people, depending on anything from their genetics to their life experiences. One of the goals of computational psychiatry is to draw connections between symptoms and causes, regardless of diagnoses.
The variability that exists within a single disorder means two people can have the same diagnosis but share no symptoms. Furthermore, significant overlap exists between diagnoses: Many symptoms are shared among numerous conditions, and multiple conditions often occur together. “To deal with this complexity we need more powerful tools,” Huys says.
In the age of big data, neuroscientists routinely handle extremely high-dimensional data sets. The data come in many types, spanning neural anatomy and activity as well as cognitive, clinical and genetic measures. The data generated by an fMRI scan alone can consist of many series of values changing over time, in which each numerical series represents the activity of a single unit of brain volume (a voxel). One of the two main branches of computational psychiatry involves applying machine-learning techniques to these large data sets to find patterns without referring to theories about cognitive dysfunction or mental illness.
Initially these “data-driven” efforts focused on developing automatic tools for objective diagnosis. For instance, numerous studies have attempted to use the average structural and functional brain differences seen in magnetic resonance imaging (MRI) scans of people with a given psychiatric diagnosis to distinguish between those with and without the disorder.
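The basic idea of such classifiers can be sketched in a few lines of Python. The data below are entirely synthetic stand-ins for brain measures (real studies work with many thousands of voxel values per scan), and the nearest-mean classifier is a deliberately simple illustration, not the method any particular study used:

```python
# Toy sketch of diagnosis-by-classification: simulated "brain measures"
# for two groups, with new subjects assigned by distance to each group's
# mean pattern. All numbers are synthetic; real studies use richer data.
import numpy as np

rng = np.random.default_rng(42)
n, d = 100, 20                                  # subjects per group, measures each
patients = rng.normal(0.5, 1.0, size=(n, d))    # group mean shifted upward
controls = rng.normal(0.0, 1.0, size=(n, d))

# "Train" on half of each group by computing its mean pattern
mean_p = patients[:50].mean(axis=0)
mean_c = controls[:50].mean(axis=0)

def predict(x):
    """Assign a subject to whichever group's mean pattern is closer."""
    if np.linalg.norm(x - mean_p) < np.linalg.norm(x - mean_c):
        return "patient"
    return "control"

# Evaluate on the held-out half of each group
correct = sum(predict(x) == "patient" for x in patients[50:])
correct += sum(predict(x) == "control" for x in controls[50:])
accuracy = correct / 100
print(accuracy)  # typically well above chance, but below perfect
```

The held-out evaluation matters: judging the classifier on the same subjects used to compute the mean patterns would overstate its accuracy.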
The moderate accuracy obtained in some of these studies indicates the disorders are indeed reflected in the brain, but there are problems to overcome before such tools are clinically useful. For instance, many clinical cases are ambiguous, and it is not clear how useful classification systems, which tend to be developed using clear-cut cases, would be in those situations. Also, as symptoms increase in severity, the number of co-occurring conditions tends to increase, whereas classification systems tend to treat disorders as mutually exclusive. Developing techniques that allow for multiple, co-occurring diagnoses is far more challenging.
Researchers are working on these problems, and performance is likely to improve both as the tools themselves are refined and as more types of data are added. But the difficulty of connecting biology to disorders defined by clusters of symptoms may prove to be a fundamental limit to progress until psychiatry’s classification system undergoes drastic changes.
These problems have led to a shift toward analyses intended to go beyond diagnosis, to make predictions about how an illness will proceed for a given individual—predicting suicide risk, for example, or treatment response. “Looking at a clinical outcome, like risk of relapse or response to a particular treatment, is likely to be more interesting,” says Jonathan Roiser, a professor of neuroscience and mental health at University College London, who was not an author of the review. “And much more clinically useful.”
Every psychiatrist wants to know which treatment will work best for a given patient. A number of studies have found potential biomarkers—increased activity in certain brain regions, for instance—that might be useful for predicting which patients will respond to which treatments. One even tested whether this approach could improve results in a randomized clinical trial. Psychiatrist Charles DeBattista of Stanford University and colleagues compared electroencephalograms (EEGs) collected from depressed patients with a database of EEGs from more than 1,800 patients that included information about response to specific treatments. Using EEG measures to guide decisions about treatment alternatives led to significantly better outcomes than clinical treatment selection.
These data-driven, machine-learning approaches represent one way of tackling a psychiatric disorder but do not reveal much about why symptoms occur, how those symptoms relate to the brain’s problem-solving processes or how the brain implements those processes. The other side of computational psychiatry involves “theory-driven” approaches that attempt to model mental processes in software. Abstract algorithms can mimic decision-making and other cognitive processes without worrying about how such processing occurs in the brain. At the other extreme, biologically realistic models simulate actual neural processing in terms of electrical impulses, chemical messengers, synaptic connections and so on.
The study of decision-making processes in situations involving reward and punishment is known as reinforcement learning. Researchers believe the brain employs two distinct types of process in reinforcement-learning situations. One is a simple, rapid, habitual form that predicts the consequences of actions using expectations based on how often an action has been rewarded in the past. The difference between the predicted reward and the one actually obtained is a “reward prediction error,” which can be used to update expectations. The other is a slower, more deliberative form of goal-directed control, which uses knowledge about the world to think through (often multiple) actions and assess their probable consequences. This approach is more reliable and can rapidly adapt to changes in the environment, but it is also much more effortful and costly.
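The habitual, prediction-error-driven form of learning can be written down in a few lines. Here is a minimal sketch in Python; the learning rate and reward values are illustrative choices, not parameters from any study:

```python
# Minimal sketch of habitual ("model-free") reinforcement learning:
# the expectation for an action is nudged toward each observed reward
# by a fraction of the reward prediction error.
# The learning rate of 0.1 is an arbitrary illustrative value.

def update_expectation(expected, reward, learning_rate=0.1):
    """Move the expectation a small step toward the observed reward."""
    prediction_error = reward - expected        # the reward prediction error
    return expected + learning_rate * prediction_error

# An action that is rewarded repeatedly: the expectation climbs toward 1.0
expected = 0.0
for _ in range(50):
    expected = update_expectation(expected, reward=1.0)
print(round(expected, 3))  # close to 1.0 after 50 rewarded trials
```

A positive prediction error (better than expected) raises the expectation; a negative one lowers it. In the brain, dopamine signaling is thought to carry exactly this kind of error signal, as the next paragraph describes.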
The concept of reward prediction error was developed by researchers working on abstract models of reinforcement learning, but later physiological research found neural circuits that actually seem to calculate these prediction errors using the signaling chemical dopamine. “The relationship between the abstract models and biophysical implementations is well understood in this area,” says computational neuroscientist Nathaniel Daw of Princeton University. “There’s a strong line from neurons and synapses all the way up to behavior.”
This finding may be relevant to psychiatry: Researchers believe changes in reward valuation and other decision-making processes underlie phenomena like anhedonia (inability to enjoy things or feel excitement) in depression and compulsive behaviors seen in conditions like obsessive-compulsive disorder (OCD). A study published this month in eLife from Daw and colleagues, led by psychologist Claire Gillan at New York University, used data-driven techniques to analyze patterns of symptoms in the responses to psychiatric questionnaires collected from nearly 2,000 people via the Internet. They identified a class of compulsive symptoms, including intrusive thoughts, that was common to people reporting symptoms of multiple disorders, including OCD, drug abuse and some eating disorders.
The participants also completed a reinforcement learning task designed to assess their decision-making processes. The researchers found that degree of compulsivity was related to differences in the balance between the two types of reinforcement learning process, favoring the rapid, habitual type over the more deliberative, goal-directed form. “We know a lot more about brain systems—the computations the brain is doing and the mechanisms that support them—than when [psychiatric diagnoses] were invented,” Daw says. “The time is ripe to see if we can connect [these systems] up to these illnesses and thereby connect the illnesses back to the brain.”
Huys and his co-authors suggest the biggest payoff might come from combining the two approaches. There is a wealth of data available to psychiatrists today, but too much data can be as bad as too little. Given enough dimensions in the data, pattern-classification techniques can divide items into any required subgroups, but such solutions are unlikely to “generalize,” or correctly classify new items, making them useless in practice. This problem is known as “overfitting.” The trick to avoiding it is to find just the right information to capture what differs systematically and meaningfully between groups while ignoring differences that are just incidental “noise.”
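Over-fitting is easy to demonstrate. In this sketch (using synthetic data in which the true relationship is a straight line), a high-degree polynomial with one parameter per data point fits the training data almost perfectly by chasing the noise, whereas a simple line captures the systematic signal:

```python
# Sketch of over-fitting: a model with one parameter per data point fits
# the training set almost perfectly, but it is fitting the noise.
# The data are synthetic; the true underlying relationship is linear.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.2, size=10)   # signal + noise
x_test = np.linspace(0.05, 0.95, 10)                      # new data, same signal
y_test = 2.0 * x_test + rng.normal(0.0, 0.2, size=10)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

simple = np.polyfit(x_train, y_train, deg=1)        # matches the true signal
complex_fit = np.polyfit(x_train, y_train, deg=9)   # one parameter per point

print("train error:", mse(simple, x_train, y_train), mse(complex_fit, x_train, y_train))
print("test error: ", mse(simple, x_test, y_test), mse(complex_fit, x_test, y_test))
```

The degree-9 fit wins handily on the training points, but that apparent success does not carry over to new data drawn from the same underlying process, which is exactly the failure to generalize that the review warns about.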
Theory-driven models can help simplify complex, high-dimensional data by reducing them to a few theoretically meaningful quantities that summarize the important variation. Examples might be learning rates from a reinforcement-learning model or synaptic connection strengths from a model of neural activity. If the models accurately depict the processes they mimic, these quantities might be useful as data for classification and prediction. “Modeling plays a kind of dual role,” Roiser says. “It informs theory about where dysfunction might lie but also creates more precise features of the data that might help classification.”
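As a sketch of how that might work, the code below simulates a subject’s choices on a simple two-armed bandit task and then recovers the subject’s learning rate by fitting the same delta-rule model to those choices. The task structure, the softmax choice rule and all parameter values are illustrative assumptions, not details from the studies described here; the point is that the fitted learning rate is the kind of compact, theory-derived feature that could feed a classifier:

```python
# Sketch of the "dual role" idea: fit a simple reinforcement-learning model
# to a subject's trial-by-trial choices, then use the fitted learning rate
# as a single, theoretically meaningful feature for that subject.
# The bandit task and all parameters are illustrative assumptions.
import numpy as np

def simulate_subject(true_alpha, n_trials=200, seed=1):
    """Two-armed bandit: arm 0 pays off 80% of the time, arm 1 only 20%."""
    rng = np.random.default_rng(seed)
    values = np.zeros(2)
    choices, rewards = [], []
    for _ in range(n_trials):
        # softmax choice between the two value estimates (inverse temp. 5)
        p0 = 1.0 / (1.0 + np.exp(-(values[0] - values[1]) * 5.0))
        c = 0 if rng.random() < p0 else 1
        r = float(rng.random() < (0.8 if c == 0 else 0.2))
        values[c] += true_alpha * (r - values[c])   # delta-rule update
        choices.append(c)
        rewards.append(r)
    return np.array(choices), np.array(rewards)

def neg_log_likelihood(alpha, choices, rewards):
    """How poorly a given learning rate explains the observed choices."""
    values = np.zeros(2)
    nll = 0.0
    for c, r in zip(choices, rewards):
        p0 = 1.0 / (1.0 + np.exp(-(values[0] - values[1]) * 5.0))
        p_choice = p0 if c == 0 else 1.0 - p0
        nll -= np.log(max(p_choice, 1e-12))
        values[c] += alpha * (r - values[c])
    return nll

choices, rewards = simulate_subject(true_alpha=0.3)
grid = np.linspace(0.01, 0.99, 99)
fits = [neg_log_likelihood(a, choices, rewards) for a in grid]
fitted_alpha = grid[int(np.argmin(fits))]
print(fitted_alpha)  # one model-derived feature summarizing this subject
```

A simple grid search stands in here for the more careful optimization and model comparison a real analysis would use; the output is a single number per subject instead of hundreds of raw trials.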
One example is a 2013 study by Kay Brodersen of E.T.H. Zurich and colleagues in which they used a model of brain activity to identify subgroups of schizophrenia patients. The team took brain scans conducted while participants performed a working-memory task to construct models of how dynamic activity unfolds in three brain regions known to be involved in working memory (the capacity to hold information in mind over the short term, despite competing demands or distractions). This network consisted of the visual cortex, where information enters, plus the dorsolateral prefrontal cortex and parietal cortex, which have been shown to be important for working memory. The model produced estimates of the connection strengths among these three regions during the working-memory task, which the researchers used as data for a classification system. They were able to distinguish between patients and controls using these quantities better than with more traditional measures, such as average overall activity in the three regions.
The team also identified three distinct groups of patients who differed significantly in terms of this network architecture. These were identified without using any information about patients’ symptoms but corresponded to groups with different levels of negative symptoms (social withdrawal, reduced motivation, etcetera). This result ties in with previous research showing that schizophrenia patients with higher levels of negative symptoms have lower working memory capacity. “It’s a nice example of people using a biophysical model to improve classification, which then had a clinically relevant outcome,” Roiser says.
Computational psychiatry is young enough that tools still in development are not yet ready for use by psychiatrists. “The next steps will be to validate these tools in longitudinal studies and examine how they could inform treatment decisions,” Huys says. “Then, their ability to improve outcomes will have to be tested in clinical trials.” There is already evidence that data-driven approaches could improve treatment decisions whereas theory-driven approaches have not yet been shown to improve results. “Theory-driven approaches are promising in terms of redefining our view of symptoms and providing new ways of bridging the gap between symptoms and neurobiology,” Huys says. “Combining the approaches should be very powerful, but that’s still at an early stage of development.”
As an emerging field, computational psychiatry has the potential to change traditional treatment protocols. “A lot of us feel the time is ripe,” Daw says. “It’s an area with a lot of promise, but it’s more aspiration than payoff at this point.”