Published by
Stanford Medicine


Behavioral Science, Neuroscience, Stanford News

Real-time view of changing minds

There at this morning’s meeting was a large box of donuts which I had absolutely no intention of eating. None. Until I changed my mind.

What happened this morning was probably a little more complex than the simple changes of mind that Stanford Neurosciences Institute director William Newsome studies, what with the delicious smell of chocolate and a quick realization that perhaps a lunchtime run could be squeezed into my day.

Newsome has focused on recording the activity of individual neurons in animals making simple decisions, like indicating which way a dot is moving on a screen. He and his team then statistically analyze the results of many such recordings of individual neurons. These studies have gone a long way toward revealing the activity of neurons in different parts of the brain but can miss some of the fine scale dynamics that take place during the decision-making process. Recently, new probes have been developed that allow scientists to record the activity of many neurons at the same time.

Using such a probe, Newsome and his team recorded groups of neurons in animals making simple decisions, and could track in real time the patterns of how the neurons fired as the animals made a decision and changed their minds. They published their results in Current Biology. A press release from New York University quotes Roozbeh Kiani, co-first author on the paper and a former postdoctoral scholar in Newsome’s lab:

“Looking at one neuron at a time is ‘noisy’: results vary from trial to trial so you cannot get a clear picture of this complex activity. By recording multiple neurons at the same time, you can take out this noise and get a more robust picture of the underlying dynamics.”

The team was able to watch the neurons firing in real time, and detect a pattern indicating which decision the animal was going to make. They could also tell when the animal changed its mind, for example as a result of a stronger signal on the screen or more time to make a decision. What I found interesting is that in most cases when the animals changed their minds it was to correct their initial decision.

What does all this suggest about my donut splurge? Maybe that given enough time I was able to correct my initial decision of self-control to the right one – of deliciousness.

Previously: Co-leader of Obama’s BRAIN Initiative to direct Stanford’s interdisciplinary neuroscience institute

Medical Education, Science, Stanford News

Bio-X Kids Science Day inspires young scientists

What better way to spend a sunny Friday afternoon than by letting a gooey cornstarch slurry ooze between your grubby fingers?

No? Then perhaps investigating the bacteria of your nose (the outside) is more of an end of the week treat. In the case of my kids, attempting a tae kwon do sparring match with a reluctant robot was another great way to enjoy the tenth annual Stanford Bio-X Kids Science Day.

About 200 kids showed up to the Clark Center courtyard June 13 to explore 15 booths of interactive fun. Heideh Fattaey, executive director of operations & programs for Bio-X, said that in the event’s ten years around 2,000 kids have come to learn about science and have fun – and by extension, to discover that learning about science is fun.

Other booths had an array of magnets to investigate, pools of water with a collection of toys for learning about mass and volume, and a demonstration of the 50 cent paper microscope developed by bioengineer Manu Prakash, PhD, and his lab.

Every 20 minutes or so, an explosion from an air-powered, t-shirt-shooting robot interrupted the festivities (finders keepers on the t-shirt).

In the center of the courtyard, undergraduate student Tony Pratkanis stood watch over the PR2 personal robot, not far from a bubble machine that held several kids in thrall. The robot had, on another day, made an independent coffee run for the lab of computer scientist Kenneth Salisbury. On Friday the robot was set to dole out high fives, though that program met its match with my son’s kicking.

Fattaey told me that the day is intended not just to wear out active kids, but to inspire the next generation of scientists who will be picking up biomedical innovation where today’s Bio-X faculty leave off.

Case in point, Fattaey said she talked with a high school student she knew who was going to be doing a summer internship in a Clark Center lab. “He said seeing all the kids have fun brought back memories of when he attended Kids Science Day,” she said.

Previously: Stanford Medicine community gathers for Health Matters event, At Med School 101, teens learn that it’s “so cool to be a doctor”, A day in the lab: Stanford scientists share their stories, what fuels their work, Stanford’s Clark Center, home to Bio-X, turns 10 and Bay Area students get a front-row seat to practicing medicine, scientific research
Photos, of Quinn and Reid Monahan playing with a cornstarch slurry, and of Reid Monahan sparring with the PR2 personal robot, by Amy Adams

Big data, Stanford News, Technology

What computation tells us about how our bodies work

Last week, as the 2014 Big Data in Biomedicine conference came to a close, a related story about the importance of computing across disciplines was posted on the Stanford University homepage. The article describes research making use of the new Stanford Research Computing Center, or SRCC (which we blogged about here). We’re now running excerpts from that piece about the role computation, as well as big data, plays in medical advances.

As you sip your morning cup of coffee, the caffeine makes its way to your cells, slots into a receptor site on the cells’ surface, and triggers a series of reactions that jolt you awake. A similar process takes place when Zantac provides relief for stomach ulcers, or when chemical signals produced in the brain travel cell-to-cell through your nervous system to your heart, telling it to beat.

In each of these instances, a drug or natural chemical is activating a cell’s G-protein coupled receptor (GPCR), the cellular target of roughly half of all known drugs, says Vijay Pande, PhD, a professor of chemistry and, by courtesy, of structural biology and computer science at Stanford. This exchange is a complex one, though. In order for caffeine or any other molecule to influence a cell, it must fit snugly into the receptor site, which consists of 4,000 atoms and transforms between an active and inactive configuration. Current imaging technologies are unable to view that transformation, so Pande has been simulating it using his Folding@Home distributed computer network.

So far, Pande’s group has simulated a few hundred microseconds of the receptor’s transformation. Although that’s an extraordinarily long chunk of time compared to similar techniques, Pande is looking forward to accessing the SRCC to investigate the basic biophysics of GPCR and other proteins. Greater computing power, he says, will allow his team to simulate larger molecules in greater detail, simulate folding sequences for longer periods of time, and visualize multiple molecules as they interact. It might even lead to atom-level simulations of processes at the scale of an entire cell. All of this knowledge could be applied to computationally design novel drugs and therapies.

“Having more computer power can dramatically change every aspect of what we can do in my lab,” says Pande, who is also a Stanford Bio-X affiliate. “Much like having more powerful rockets could radically change NASA, access to greater computing power will let us go way beyond where we can go routinely today.”

Previously: Computing our evolution, Learning how we learn to read, Personal molecular profiling detects diseases earlier, New computing center at Stanford supports big data and Nobel winner Michael Levitt’s work animates biological processes
Photo by Toshiyuki IMIA

Big data, Genetics, Stanford News, Technology

Computing our evolution

Last week, as the 2014 Big Data in Biomedicine conference came to a close, a related story about the importance of computing across disciplines was posted on the Stanford University homepage. The article describes research making use of the new Stanford Research Computing Center, or SRCC (which we blogged about here). We’re now running excerpts from that piece about the role computation, as well as big data, plays in medical advances.

The human genome is essentially a gigantic data set. Deep within each person’s 6 billion data points are minute variations that tell the story of human evolution, and provide clues to how scientists can combat modern-day diseases.

To better understand the causes and consequences of these genetic variations, Jonathan Pritchard, PhD, a professor of genetics and of biology, writes computer programs that can investigate those linkages. “Genetic variation affects how cells work, both in healthy variation and in response to disease, which ultimately regulates organism-level phenotypes,” Pritchard says. “How natural selection acts on phenotypes, that’s what causes evolutionary changes.”

Consider, for example, variation in the gene that codes for lactase, an enzyme that allows mammals to digest milk. Most animals don’t express lactase after they’ve been weaned from their mother’s milk. In populations that have historically revolved around dairy farming, however, Pritchard’s algorithms have shown that there has been strong long-term selection for expressing the genes that allow people to process milk. There has been similarly strong selection on skin-pigmentation variants in non-Africans that allow better synthesis of vitamin D in regions where people are exposed to less sunlight.

The methods used in these types of investigations have the potential to yield powerful medical insights. Studying variations in gene regulation within a population could reveal how and where particular proteins bind to DNA, or which genes are expressed in different cell types – information that could be applied to design novel therapies. These inquiries can generate hundreds of thousands of data sets, which can only be parsed with clever algorithms and machine learning.

Pritchard, who is also a Stanford Bio-X affiliate, is bracing for an even bigger explosion of data; as genome sequencing technologies become less expensive, he expects the number of individual genomes to jump by as much as a hundredfold in the next few years. “There are not a lot of problems that we’re fundamentally unable to handle with computers, but dealing with all of the data and getting results back quickly is a rate limiting step,” Pritchard says. “Having access to SRCC will make our inquiries go easier and more quickly, and we can move on faster to making the next discovery.”

Previously: Learning how we learn to read, Personal molecular profiling detects diseases earlier and New computing center at Stanford supports big data

Big data, Imaging, Stanford News, Technology

Learning how we learn to read

Last week, as the 2014 Big Data in Biomedicine conference came to a close, a related story about the importance of computing across disciplines was posted on the Stanford University homepage. The article describes research making use of the new Stanford Research Computing Center, or SRCC (which we blogged about here). We’re now running excerpts from that piece about the role computation, as well as big data, plays in medical advances.

A love letter, with all of its associated emotions, conveys its message with the same set of squiggly letters as a newspaper, novel, or an instruction manual. How our brains learn to translate a series of lines and curves into language that carries meaning or imparts knowledge is something psychology professor Brian Wandell, PhD, has been trying to understand.

Wandell hopes to tease out differences between the brain scans of kids learning to read normally and those who are struggling, and use that information to find the right support for kids who need help. “As we acquire information about the outcome of different reading interventions we can go back to our database to understand whether there is some particular profile in the child that works better with intervention 1, and a second profile that works better with intervention 2,” said Wandell, who is also the Isaac and Madeline Stein Family Professor and a professor (by courtesy) of electrical engineering.

His team developed a way of scanning kids’ brains with magnetic resonance imaging, then knitting the million collected samples together with complex algorithms that reveal how the nerve fibers connect different parts of the brain. “If you try to do this on your laptop, it will take half a day or more for each child,” he said. Instead, he uses powerful computers to reveal specific brain changes as kids learn to read.

Wandell is associate director of the Stanford Neurosciences Institute, where he is leading the effort to develop a computing strategy – one that involves making use of SRCC rather than including computing space in their planned new building. He said one advantage of having faculty share computing space and systems is to speed scientific progress. “Our hope for the new facility is that it gives us the chance to set the standards for a better environment for sharing computations and data, spreading knowledge rapidly through the community,” he said.

Previously: Personal molecular profiling detects diseases earlier, New computing center at Stanford supports big data, Teaching an old dog new tricks: New faster and more accurate MRI technique quantifies brain matter, Study shows brain scans could help identify dyslexia in children before they start to read and Stanford study furthers understanding of reading disorders
Photo by Liz West

Big data, Genetics, Research, Stanford News, Technology

Personal molecular profiling detects diseases earlier

Today, as the 2014 Big Data in Biomedicine conference continues, a related story about the importance of computing across disciplines was posted on the Stanford University homepage. The article describes research making use of the new Stanford Research Computing Center, or SRCC (which we blogged about here). Over the next few days we’ll run excerpts from that piece about the role computation, as well as big data, plays in medical advances.

Our DNA is sometimes referred to as our body’s blueprint, but it’s really more of a sketch. Sure, it determines a lot of things, but so do the viruses and bacteria swarming our bodies, our encounters with environmental chemicals that lodge in our tissues and the chemical stew that ensues when our immune system responds to disease states.

All of this taken together – our DNA, the chemicals, the antibodies coursing through our veins and so much more – determines our physical state at any point in time. And all that information makes for a lot of data if, like genetics professor Michael Snyder, PhD, you collected it 75 times over the course of four years.

Snyder, who is a member of Stanford Bio-X and the Stanford Cancer Center, is a proponent of what he calls ‘personal omics profiling’, or the study of all that makes up our person, and he’s starting with himself. “What we’re collecting is a detailed molecular portrait of a person throughout time,” he says.

So far, he’s turning out to be a pretty interesting test case. In one round of assessment he learned that he was becoming diabetic and was able to control the condition long before it would have been detected through a periodic medical exam.

If personal omics profiling is going to go mainstream, serious computing will be required to tease out which of the myriad tests Snyder’s team currently runs give meaningful information and should be part of routine screening. Snyder’s sampling alone has already generated half a petabyte of data – roughly enough raw information to fill a dishwasher-size rack of servers.
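As a rough sanity check on that rack-of-servers figure, here is a back-of-envelope sketch. The drive capacity and drives-per-server numbers are my own assumptions for circa-2014 hardware, not figures from the post:

```python
# Back-of-envelope check on "half a petabyte = about a rack of servers".
# DRIVE_TB and DRIVES_PER_SERVER are assumptions, not from the article.
DATA_TB = 500            # half a petabyte, in terabytes
DRIVE_TB = 4             # assumed capacity of one hard drive
DRIVES_PER_SERVER = 24   # assumed dense storage server

drives_needed = DATA_TB / DRIVE_TB                  # 125 drives
servers_needed = drives_needed / DRIVES_PER_SERVER  # a handful of servers

print(f"~{drives_needed:.0f} drives in ~{servers_needed:.0f} storage servers")
```

On those assumptions the data fits in a half-dozen dense storage servers – consistent with a single dishwasher-size rack.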

Right now, that data and the computer power required to understand it reside on campus, but new servers will be located at SRCC. “I think you are going to see a lot more projects like this,” says Snyder. “Computing is becoming increasingly important in medicine.”

Previously: New computing center at Stanford supports big data, Stanford researchers work to translate genetic discoveries into widespread personalized medicine, Stanford geneticist talks tracking biological data points and personalized medicine, How genome testing can help guide preventative medicine and ‘Omics’ profiling coming soon to a doctor’s office near you?
Related: Big data
Photo of Snyder by Saul Bromberger

Big data, Research, Stanford News, Technology

New computing center at Stanford supports big data

When I went out recently to visit Stanford’s new computing center, located at SLAC, I admit I wasn’t sure what to expect. I mean, it’s a building with servers, so, where’s the excitement? Call me easily amused, but one thing that struck me was the view. There you are, in a state-of-the-art computing center, staring at a grassy hillside, oak trees in the distance, and deer wandering by. Not bad.

Also – and stay with me for this one because at first read this might not sound so interesting – the air-driven cooling system is pretty (forgive the pun) cool. When it comes to computing, keeping the servers in their temperature happy zone takes a lot of air-conditioning energy. By taking advantage of the mild Bay Area temperatures, the building will save significant energy. Modeling climate change (one of the projects that will be running at the facility) just got greener.

Although computing space at the Stanford Research Computing Center is available to faculty across campus, one-third is allocated to the School of Medicine, which says something about the growing importance of big data and computation in medical technology. As I wrote in an online story today:

Case in point, the School of Medicine has a joint big data initiative with Oxford University and is hosting an international Big Data in Biomedicine meeting May 21-23. The meeting’s organizer, Euan Ashley, associate professor of cardiovascular medicine, who directs Stanford’s arm of the collaboration, said the initiative to improve health care worldwide benefits from the university’s computation strengths.

Ashley is one of several medical school researchers who will be carrying out computing at the new facility as part of the school’s big data initiative.

Previously: Euan Ashley discusses harnessing big data to drive innovation for a healthier world, Registration opens for Big Data in Biomedicine conference at Stanford, Grant from Li Ka Shing Foundation to fund big data initiative and conference at Stanford, Big laughs at Stanford’s Big Data in Biomedicine Conference and A call to use the “tsunami of biomedical data” to preserve life and enhance health
Photo by Linda A. Cicero / Stanford News Service

Research, Science, Stanford News

Stanford ChEM-H bridges chemistry, engineering and medicine

Name changes can come with some confusion (ask anyone who changed their name after getting married), but they can also bring clarity (ask anyone who didn’t change their name and has to explain why their name is different than their child’s).

Today, one of Stanford’s institutes got a little clarity with a new name that better reflects its vision. What was once the Stanford Institute for Chemical Biology – a joint venture of the schools of Medicine, Engineering and Humanities & Sciences – is now Stanford ChEM-H.

I talked with Chaitan Khosla, PhD, director of ChEM-H, for a Q&A on the name change that was published today. He explained:

The term ChEM-H has two meanings. In one, it is shorthand for an emerging interdisciplinary area of chemistry that this institute will support; that is, using the principles and tools of chemistry to better understand and advance human health. It is also an acronym for the fields that will need to come together for us to be successful (chemistry, engineering and medicine for human health).

Khosla also talked about why now is the right time to be bringing these fields together:

The core value of chemistry remains timeless, even to a high school student. Chemistry is the science that makes new forms of matter and measures its properties at an atomic level. That said, I see the field of chemistry as being at an inflection point analogous to a period of time immediately after the transistor was invented. As mathematicians started to recognize the capabilities of this device, the field of computer science emerged.

In a similar way, the human genome project has created a resource that opens the door to understanding human biology in the language of chemistry. Up to this point, the impact of chemistry on our world has been profound – all the synthetic products we use in our daily lives are a result of chemical ingenuity. This, of course, includes a vast majority of medicines that society has come to rely upon so heavily. I predict that the emerging frontier between chemistry and human biology will challenge future generations of chemists and molecular engineers to elevate their design, synthetic and analytical skills to new heights. In turn, these pursuits will fundamentally alter our understanding of who we are as a species and as individuals.

Khosla has more to say about the language barrier that needs to be overcome between chemistry and biology, how this institute is different from biochemistry, and what he hopes ChEM-H will have accomplished ten years from now.

Neuroscience, Research, Stanford News, Technology

This is your brain on a computer chip

Here are some numbers that blew me away when I heard them last week. Your brain is using just a few watts of power right now as it sees and processes these words, hears and sorts through sounds around you and makes mental notes about grocery lists, or dry cleaning that needs picking up. By contrast, a computer uses about 40,000 times more power and runs about 9,000 times slower just to model a mouse brain – and a human brain is about 1,000 times more complex. Given that, it’s no surprise several groups are hard at work trying to create a computer chip with brain-like efficiency.
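For a sense of scale, here is the arithmetic those figures imply. The scaling factors come from the numbers above; the 3-watt value is my stand-in for “a few watts,” and everything here is order-of-magnitude at best:

```python
# Order-of-magnitude arithmetic behind the brain-vs-computer comparison.
# BRAIN_WATTS is an assumed stand-in for "a few watts"; the ratios are
# the figures quoted in the text.
BRAIN_WATTS = 3          # rough power draw of a working human brain
POWER_FACTOR = 40_000    # computer power vs. brain, for a mouse-scale model
SLOWDOWN = 9_000         # that model also runs ~9,000x slower than real time
HUMAN_VS_MOUSE = 1_000   # a human brain is ~1,000x more complex

# Power a conventional computer needs just for the mouse-scale model:
mouse_model_watts = BRAIN_WATTS * POWER_FACTOR          # ~120 kW
# Naively scaling the same approach up to human complexity:
human_model_watts = mouse_model_watts * HUMAN_VS_MOUSE  # ~120 MW

print(f"mouse model: ~{mouse_model_watts / 1e3:.0f} kW; "
      f"human model: ~{human_model_watts / 1e6:.0f} MW; "
      f"still {SLOWDOWN:,}x slower than real time")
```

On these assumptions, brute-force simulation of a human brain would draw power on the scale of a small power plant while still lagging far behind real time – which is exactly the gap brain-like chips aim to close.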

Stanford bioengineer Kwabena Boahen, PhD, and his graduate student Ben Varkey Benjamin have announced a milestone in this effort: they’ve modeled one million neurons in real time on a circuit board called Neurogrid that contains sixteen chips called Neurocores. Their publication, in the Proceedings of the Institute of Electrical and Electronics Engineers, goes into more detail about exactly how they are using electronic parts to mirror our own intricate collection of cells, as does this story about the work.

What I found most interesting are the possible uses of such a chip. Obviously, it could make our personal electronics smaller, smarter and less power hungry. But the chip can also, for the first time, model how our brain works, and how it fails to work in some diseases. This is something that once required supercomputing capabilities, plus lots of time and power. Now anyone can do it.

The chip also makes possible the dream of interpreting signals from the brain and, in real time, using those signals to drive robotic limbs for paralyzed people. As things are now, a person would be tethered to a computer and a power supply to interpret brain signals, and the limb wouldn’t move in real time. A Neurocore-like chip could conceivably be implanted, interpreting signals and driving robots in real time with minimal power needs. Boahen is working with his Clark Center neighbor and fellow Bio-X affiliate Krishna Shenoy, PhD, who is professor of electrical engineering and neurobiology, on making that dream a reality.

This video by my colleague Kurt Hickman shows where the team is now in working with Neurogrid to drive robot movement.

Photo by Kurt Hickman

Neuroscience, Research, Stanford News

Thoughts light up with new Stanford-designed tool for studying the brain

When I talk to neuroscientists about how they study the brain I get a lesson (usually filled with acronyms) in the various ways scientists go about trying to read minds. Some of the tools they use can detect when general regions of the brain are active, but can’t detect individual nerves. Others record the activity of individual nerves, one nerve at a time, but can’t detect networks of nerves firing together. Still another tool can report the afterglow of a signal that has been sent across networks of neurons.

There hasn’t been any one way of seeing when a nerve fires and which neighbors it connects to.

I wrote recently about a new tool to do just that, developed by bioengineer Michael Lin, MD, PhD, and biologist and applied physicist Mark Schnitzer, PhD. Each has come up with proteins that light up when a nerve sends a signal. The researchers can put their proteins in a group of nerves in one part of the brain, then watch those signals spread across the network of neurons as they interact.

In my story I quote Lin: “You want to know which neurons are firing, how they link together and how they represent information. A good probe to do that has been on the wish list for decades.”

The proteins could be widely used to better understand the brain or develop drugs:

With these tools scientists can study how we learn, remember, navigate or any other activity that requires networks of nerves working together. The tools can also help scientists understand what happens when those processes don’t work properly, as in Alzheimer’s or Parkinson’s diseases, or other disorders of the brain.

The proteins could also be inserted in neurons in a lab dish. Scientists developing drugs, for example, could expose human nerves in a dish to a drug and watch in real time to see if the drug changes the way the nerve fires. If those neurons in the dish represent a disease, like Parkinson’s disease, a scientist could look for drugs that cause those cells to fire more normally.

Now that I’ve written about the invention of this new tool I’m looking forward to hearing more about how scientists start using it to understand our brain or develop drugs.

3D rendered illustration of a nerve cell by Sebastian Kaulitzki/Shutterstock
