Events, Stanford News, Technology

Big Data in Biomedicine technical showcase to feature companies’ innovations related to big data

In an effort to spark collaboration among thought leaders across industry, government and academia, Stanford’s upcoming Big Data in Biomedicine conference is hosting a technical showcase where attendees can browse displays and demos highlighting public and private companies’ innovations related to big data.

Conference organizers are continuing to develop the program, but the current roster of companies committed to participating in the showcase ranges from industry giants to smaller ventures. Among the participants are multinational firms, such as Samsung, SAP and General Electric Co., as well as emerging startups: ClusterK, a cloud computing platform; StationX, a developer of software for scientists and clinicians working with genomics data; and Syapse, which aims to bring molecular profiling into standard medical use.

Part networking opportunity and part show-and-tell, the event is a new addition to this year’s conference and will be held on May 21 under a tent on the lawn of the medical school’s Li Ka Shing Center for Learning and Knowledge. Conference-goers will have a chance to watch demos of biomedical analytical tools and large-scale computing solutions for big data, as well as ontologies that support effective information organization and retrieval.

Registration for the conference is now open on the Big Data in Biomedicine website. The event, which is co-sponsored by Stanford and Oxford University, will be held May 21-23.

Previously: Euan Ashley discusses harnessing big data to drive innovation for a healthier world, Registration opens for Big Data in Biomedicine conference at Stanford, Grant from Li Ka Shing Foundation to fund big data initiative and conference at Stanford and Big laughs at Stanford’s Big Data in Biomedicine Conference
Photo by Saul Bromberger

Bioengineering, In the News, Neuroscience, Stanford News, Technology

New York Times profiles Stanford’s Karl Deisseroth and his work in optogenetics

Rockefeller University neurobiologist Cori Bargmann, PhD, is quoted in today’s New York Times as saying optogenetics is “the most revolutionary thing that has happened in neuroscience in the past couple of decades.” The article is a profile piece of Karl Deisseroth, MD, PhD, the Stanford researcher who helped create the field of optogenetics, and it reveals how a clinical rotation in psychiatry led him to this line of work:

It was eye-opening, he said, “to sit and talk to a person whose reality is different from yours” — to be face to face with the effects of bipolar disorder, “exuberance, charisma, love of life, and yet, how destructive”; of depression, “crushing — it can’t be reasoned with”; of an eating disorder literally killing a young, intelligent person, “as if there’s a conceptual cancer in the brain.”

He saw patient after patient suffering terribly, with no cure in sight. “It was not as if we had the right tools or the right understanding.” But, he said, the fact that such tools were desperately needed made the specialty more interesting to him. He stayed with psychiatry, but adjusted his research course, getting in on the ground floor of a new bioengineering department at Stanford. He is now a professor of both bioengineering and psychiatry.

Previously: A federal push to further brain research, An in-depth look at the career of Stanford’s Karl Deisseroth, “a major name in science”, Lightning strikes twice: Optogenetics pioneer Karl Deisseroth’s newest technique renders tissues transparent, yet structurally intact, The “rock star” work of Stanford’s Karl Deisseroth and Nature Methods names optogenetics its “Method of the Year”
Related: Head lights
Photo in featured-entry box by Linda Cicero/Stanford News Service

Health and Fitness, Technology, Videos

Wireless stick-on patch could make continuous health monitoring more flexible and practical

Stress tests and sleep studies are two examples of situations in which long-term clinical monitoring is necessary. But the bulky wires, sensors and tape used during these studies can inhibit the natural movements of test subjects and potentially skew outcomes. In an effort to solve this issue, researchers at Northwestern University developed a wearable patch that adheres to the skin, easily stretches and moves with the body, gathers physiological data, and can send wireless updates to a cellphone or computer.

A recent post on Futurity offers more details about how the device, which sticks to the skin like a temporary tattoo, was designed:

Researchers turned to soft microfluidic designs to address the challenge of integrating relatively big, bulky chips with the soft, elastic base of the patch. The patch is constructed of a thin elastic envelope filled with fluid. The chip components are suspended on tiny raised support points, bonding them to the underlying patch but allowing the patch to stretch and move.

One of the biggest engineering feats of the patch is the design of the tiny, squiggly wires connecting the electronics components—radios, power inductors, sensors, and more. The serpentine-shaped wires are folded like origami, so that no matter which way the patch bends, twists or stretches, the wires can unfold in any direction to accommodate the motion. Since the wires stretch, the chips don’t have to.

The article goes on to discuss the potential of wearable electronic devices in health care, including the possibility of detecting motions associated with Parkinson’s disease at its onset.

Previously: Ultra-thin flexible device offers non-invasive method of monitoring heart health, blood pressure, New method for developing flexible nanowire electronics could yield ultrasensitive biosensors and Stanford researchers develop a new biosensor chip that could speed drug development

Science, Technology

Survey says: Americans split on the future of science

If you’re feeling sciencey, check out a Pew Research Center survey conducted with Smithsonian Magazine and released yesterday. You’ll see the percentage of Americans surveyed who would welcome or dismiss personal robot servants, driverless cars and other futuristic possibilities, given the choice.

A piece published on the blog re/code reports:

Asked what inventions they most want to take advantage of themselves, participants repeatedly landed on three themes: Medical strides that extend human longevity, flying cars or personal space crafts, and time travel.

On the other hand, nearer realities like robot caregivers, face computers, genetic engineering and drones seem to give a lot of people the heebie-jeebies. Among the findings:

  • 65 percent think it would be a change for the worse if lifelike robots become the primary caregivers for the elderly and people in poor health.
  • 63 percent think it would be a change for the worse if personal and commercial drones are given permission to fly through most U.S. airspace.
  • 53 percent think it would be a change for the worse if most people wear implants or other devices that constantly show them information about the world around them.

What do you think?

Previously: Using personal robots to overstep disability, Helping the public make sense of scientific research and Hey, president-to-be: What are your views on science?
Via Kara Swisher
Photo by Bob Owen

Orthopedics, Technology

“Intelligent” liner may improve prosthetic limb fit and function

When I lived in the triathlete town of San Diego and tagged along for fun with a group who trained, a kind young man always gave me an encouraging word or high-five as he zoomed past me while running or cove swimming. He has a prosthetic leg, and although the device that helps him move around was clearly functional, and even sounded springy on the pavement, I wondered if a small shift in alignment could cause a great deal of discomfort.

This thought came back today as I came across news about an “intelligent” liner for better-fitting prostheses. A prototype of the device, which is being developed by researchers at the University of Southampton, uses sensors to detect pressure and forces at the point of contact between a patient’s stump and the prosthesis. Information on limb loading could lead to a better-fitting and perhaps self-adjusting prosthesis, according to a release, which also notes:

There are 50,000 lower-limb amputees in the UK, most of whom use artificial limbs that are attached to the residual limb through a socket. No two stumps are exactly the same shape and size and even an individual’s stump can change shape over the course of a single day.

Pain, discomfort and ulceration are frequently experienced at the socket interface due to poor fit. This stems from the excessive build-up of pressure within the limb socket (causing high ‘loads’ on the stump).

Synthetic liners, worn like a sock over the stump, provide some cushioning against the hard socket, but at present there is no convenient way to accurately measure the critical loads at this interface in the clinic. Without this information, prosthetists face difficulty in fitting replacement limbs and the outcomes for patients are variable.
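The release doesn’t describe any software for the prototype, so the following is only a generic sketch of the monitoring idea it raises: sampling an array of pressure sensors at the socket interface and flagging readings that exceed a clinician-set load threshold. The sensor layout, units and threshold value are all assumptions for illustration.

```python
# Generic illustration only: flag socket-interface pressure readings that
# exceed a clinician-set threshold. The sensor layout, units and threshold
# value are assumptions, not details of the Southampton prototype.
from dataclasses import dataclass
from typing import List

@dataclass
class PressureSample:
    sensor_id: int
    kilopascals: float

PRESSURE_LIMIT_KPA = 60.0  # hypothetical comfort threshold

def flag_overloaded_sensors(samples: List[PressureSample]) -> List[PressureSample]:
    """Return the samples whose pressure exceeds the configured limit."""
    return [s for s in samples if s.kilopascals > PRESSURE_LIMIT_KPA]

# One frame of readings from a hypothetical four-sensor liner.
frame = [
    PressureSample(0, 42.5),
    PressureSample(1, 71.3),
    PressureSample(2, 55.0),
    PressureSample(3, 64.8),
]

for sample in flag_overloaded_sensors(frame):
    print(f"Sensor {sample.sensor_id}: {sample.kilopascals} kPa exceeds the limit")
```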

According to the non-profit Amputee Coalition, nearly two million people in the United States live with limb loss.

Previously: Stanford graduates partner with clinics in developing countries to test low-cost prosthetic, Biotech start-up builds artful artificial limbs and Two Stanford students’ $20 device to treat clubfoot in developing countries
Photo by U.S. Army

Neuroscience, Research, Technology, Videos

Using Google Glass to improve quality of life for Parkinson’s patients

Researchers at Newcastle University are exploring ways that Google Glass could improve Parkinson’s patients’ quality of life by assisting them in placing phone calls, reminding them to take their medications or giving them behavioral prompts, such as speaking louder. In the video above, Roisin McNaney, a PhD student in the university’s Digital Interaction Group, explains how using Glass could ease patients’ anxiety about encountering a symptom-related problem while in public, raise patients’ confidence and, ultimately, make them more independent.

Previously: Abraham Verghese uses Google Glass to demonstrate how to begin a patient exam, Revealed: The likely role of Parkinson’s protein in the healthy brain and Stanford study identifies molecular mechanism that triggers Parkinson’s
Via Medgadget

In the News, Neuroscience, Technology

Facial expression recognition software could predict student engagement in learning

Test day approaching? Get your game face on. A study of a computer program that recognizes and interprets facial expressions has found that identifying students’ level of engagement while learning may predict their performance in the class. Computer scientists at the University of California, San Diego and Emotient, a San Diego-based company that developed the facial-recognition software used in the study, teamed with psychologists at Virginia Commonwealth and Virginia State universities to look at “when and why students get disengaged,” study lead author Jacob Whitehill, PhD, a researcher in UC San Diego’s Qualcomm Institute and Emotient co-founder, said in a release.

The authors write in the study, which was published in an early online version in the journal IEEE Transactions on Affective Computing:

In this paper we explore approaches for automatic recognition of engagement from students’ facial expressions. We studied whether human observers can reliably judge engagement from the face; analyzed the signals observers use to make these judgments; and automated the process using machine learning.

“Automatic engagement detection provides an opportunity for educators to adjust their curriculum for higher impact, either in real time or in subsequent lessons,” Whitehill said in the release. “Automatic engagement detection could be a valuable asset for developing adaptive educational games, improving intelligent tutoring systems and tailoring massive open online courses, or MOOCs.”
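The post doesn’t spell out the study’s modeling pipeline, but the general recipe it describes, human-rated engagement used to train a classifier over facial-expression features, can be sketched roughly as below. The feature set (facial action-unit intensities), the four-level label scale and the logistic-regression model are illustrative assumptions, and the data here are synthetic.

```python
# Rough sketch of engagement recognition as supervised learning: map per-frame
# facial-expression features to human-annotated engagement labels. Feature set,
# label scale and model choice are illustrative assumptions; data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

X = rng.random((500, 12))         # 500 frames x 12 hypothetical action-unit intensities
y = rng.integers(1, 5, size=500)  # annotator labels: 1 (not engaged) .. 4 (very engaged)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```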

Previously: Looks of fear and disgust help us to see threats, study shows, Providing medical, educational and technological tools in Zimbabwe and Whiz Kids: Teaching anatomy with augmented reality
Photo by Jesús Gorriti

Aging, Stanford News, Technology

Stanford Center on Longevity announces dementia-care design challenge winners

Winners have been announced for the Stanford Center on Longevity’s first Design Challenge, which launched last fall. As previously written about on Scope, 52 teams representing 31 universities in 15 countries submitted entries, all of them centered on improving the daily lives of people with dementia as well as their families and caregivers.

Stanford News reports:

There were seven finalists, including one student team from Stanford.

Sha Yao from the Academy of Art University in San Francisco won the $10,000 first place prize for her project, “Eatwell,” which involved the design of tableware specifically for people with Alzheimer’s.

For example, blue was chosen as the color of the insides of bowls because dementia sufferers can become confused when food and bowl have similar colors, according to Smith. As spills are common when bowls are tipped to get the final bits out, Yao designed a slanted bottom that eliminates the need to tip. The cups have low centers of gravity and are difficult to knock over.

The piece describes runner-up prize winners and the center’s new design contest, themed “enabling personal mobility across the life span.”

Previously: Finalists announced for Stanford Center on Longevity’s Design Challenge and Soliciting young minds to help older adults

Cancer, Genetics, Research, Stanford News, Technology

Gene panel screens for dozens of cancer-associated mutations, say Stanford researchers

Stanford scientists have shown that it’s possible to simultaneously screen for dozens of cancer-associated mutations from a single blood sample using a multiple-gene panel. The research is published today in the Journal of Clinical Oncology (subscription required).

As I describe in my release:

Gene panels allow researchers to learn the sequences of several genes simultaneously from a single blood sample. It stands to reason that screening for mutations in just a few select genes is quicker, easier and cheaper than whole-genome sequencing. The technique usually focuses on fewer than 100 of the approximately 21,000 human genes. But until now, few studies have investigated whether homing in on a pre-determined panel of suspects can actually help people.

The researchers, medical oncologists and geneticists James Ford, MD, and Allison Kurian, MD, used a customized 42-gene panel to investigate the presence of cancer-associated mutations in 198 women with a family or personal history of breast or other cancers. The women had been referred to Stanford’s Clinical Cancer Genetics Program between 2002 and 2012 to undergo screening for mutations in their BRCA1 or BRCA2 genes. They found that the panel was a useful way to quickly screen for and identify other cancer-associated mutations in women who did not have a BRCA1/2 mutation. From our release:

Of the 198 women, 57 carried BRCA1/2 mutations. Ford and Kurian found that 14 of the 141 women without a BRCA1/2 mutation had clinically actionable mutations in one of the 42 genes assessed by the panel. (An actionable mutation is a genetic variation correlated strongly enough to an increase in risk that clinicians would recommend a change in routine care — such as increased screening — for carriers.)

Eleven of the 14 women were reachable by telephone, and 10 accepted a follow-up appointment with a genetic counselor and an oncologist to discuss the new findings. The family members of one woman, who had died since giving her blood sample, also accepted counseling. Six participants were advised to schedule annual breast MRIs, and six were advised to have regular screens for gastrointestinal cancers; many patients received more than one new recommendation.

One woman, with a history of both breast and endometrial cancer, learned she had a mutation that causes Lynch syndrome, a condition that increases the risk of many types of cancers. As a result, she had her ovaries removed and underwent a colonoscopy, which identified an early precancerous polyp for removal.
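To make the arithmetic in the excerpt explicit, here is a trivial sketch using only the figures quoted above:

```python
# Figures quoted in the release above.
total_women = 198
brca_carriers = 57
actionable_in_non_carriers = 14

non_carriers = total_women - brca_carriers  # 141 women without a BRCA1/2 mutation
yield_rate = actionable_in_non_carriers / non_carriers

print(f"Women without a BRCA1/2 mutation: {non_carriers}")
print(f"Panel yield in that group: {yield_rate:.1%}")  # roughly 10 percent
```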

The study shows that gene panels can be a useful tool that can change clinical recommendations for individual patients. It also indicates that patients are willing and eager to receive such information. As Ford explains in the release:

Gene panels offer a middle ground between sequencing just a single gene like BRCA1 that we are certain is involved in disease risk, and sequencing every gene in the genome. It’s a focused approach that should allow us to capture the most relevant information.

Previously: Whole genome sequencing: the known knowns and the unknown unknowns, Assessing the challenges and opportunities when bringing whole-genome sequencing to the bedside and Blood will tell: In Stanford study tiny bits of circulating tumor DNA betray hidden cancers

Global Health, Infectious Disease, Technology

Health workers use crowdsourced maps to respond to Ebola outbreak in Guinea

Médecins Sans Frontières and other international aid organizations are furiously working to contain an outbreak of Ebola in Guinea and nearby African countries. Latest reports estimate that the virus has infected 157 people and killed 101 in Guinea alone.

A New Scientist story published today explains how health workers from Médecins Sans Frontières were initially at a disadvantage when they arrived in Guinea to combat the deadly virus because they had only topographic charts to use in pinpointing the source of the disease. Desperately in need of maps that would be useful in understanding population distribution, the organization turned to the Humanitarian OpenStreetMap Team (HOT), which coordinated a crowdsourcing effort to produce the first digital map of Guéckédou, a city of around 250,000 people in southern Guinea. Hal Hodson writes:

As of 31 March, online maps of Guéckédou were virtually non-existent, says Sylvie de Laborderie of cartONG, a mapping NGO that is working with MSF to coordinate the effort with HOT. “The map showed two roads maybe – nothing, nothing.”

Within 12 hours of contacting the online group, Guéckédou’s digital maps had exploded into life. Nearly 200 volunteers from around the world added 100,000 buildings based on satellite imagery of the area, including other nearby population centres. “It was amazing, incredible. I have no words to describe it. In less than 20 hours they mapped three cities,” says de Laborderie.

Mathieu Soupart, who leads technical support for MSF operations, says his organisation started using the maps right away to pinpoint where infected people were coming from and work out how the virus, which had killed 95 people in Guinea when New Scientist went to press, is spreading. “Having very detailed maps with most of the buildings is very important, especially when working door to door, house by house,” he says. The maps also let MSF chase down rumours of infection in surrounding hamlets, allowing them to find their way through unfamiliar terrain.
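The article doesn’t say how MSF pulled the crowdsourced data into its own tools. As one hedged illustration, the building footprints that volunteers added to OpenStreetMap can be retrieved programmatically through the public Overpass API; the bounding box around Guéckédou below is approximate and chosen purely for demonstration.

```python
# Illustrative sketch: fetch OpenStreetMap building footprints for an approximate
# bounding box around Guéckédou, Guinea, via the public Overpass API.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

# Overpass QL: every way tagged building=* inside (south, west, north, east).
query = """
[out:json][timeout:60];
way["building"](8.52,-10.17,8.60,-10.09);
out geom;
"""

response = requests.post(OVERPASS_URL, data={"data": query})
response.raise_for_status()
buildings = response.json()["elements"]

print(f"Buildings currently mapped in this bounding box: {len(buildings)}")
```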

Previously: Using crowdsourcing to diagnose malaria and On crowdsourced relief efforts in Haiti
