Infographic – Top Tech Acquisition Timeline

Top tech industry acquisitions (CB Insights)

13 July 2019. The world’s five leading technology enterprises are acquiring more companies, and spending more on them, than ever before. The companies, known collectively as FAMGA — Facebook, Amazon, Microsoft, Google, and Apple — continue to add new properties to their portfolios, with the size and timing of the purchases documented in this weekend’s infographic, compiled by our friends at CB Insights.

The chart shows acquisitions of $1 billion or more in the past 20 years. All told, FAMGA made 25 of these major purchases, with Microsoft accounting for 10 of that total. Microsoft’s purchase of LinkedIn for $26.2 billion in 2016 is the largest acquisition, followed by Facebook’s takeover of WhatsApp for $22 billion in 2014.

Microsoft also started off the acquisition parade in 1999 with its purchase of Visio for $1.4 billion. The latest $1 billion-plus takeover is Google’s acquisition of Looker, an analytics enterprise, for $2.6 billion. Of the five tech companies, Apple has so far made the fewest of these purchases, with only one: Beats Electronics in 2014 for $3 billion.

*     *     *

Anti-Smoking Ads Not Reaching Most in U.S.

Lit cigarette (Ralf Kunze, Pixabay)

12 July 2019. A nationwide survey shows a majority of American adults, including half of those who smoke, did not see court-ordered anti-smoking TV or newspaper advertisements. Findings from the survey are reported by researchers from University of Texas – M.D. Anderson Cancer Center in Houston, in today’s issue of the journal JAMA Network Open.

Cigarette smoking continues to be a leading U.S. public health problem. The Centers for Disease Control and Prevention says smoking leads to some 480,000 deaths a year, or 1 in every 5 deaths in the U.S., including 1 in 3 deaths from cancer and 9 of 10 lung cancer deaths. Cigarettes, says CDC, harm nearly every organ in the body, cause many diseases, and reduce the overall health of smokers. And the agency says more than 10 times as many Americans have died prematurely from cigarette smoking as have died in all of the wars fought by the U.S. in its history.

In 2006, a U.S. district court ruled that tobacco companies violated the Racketeer Influenced and Corrupt Organizations Act, or RICO, and ordered the companies to run “corrective messages” for the public at large. In these advertisements, the companies are required to describe the harmful health effects of cigarette smoking and of indirect or second-hand smoke, the addictive nature of cigarettes, the negligible benefits of smoking low-tar or “light” cigarettes, and tobacco company responsibility for the purposeful design of a harmful product and for deceptive marketing practices. The ads appeared on prime-time television and in major newspapers in the U.S., beginning in November 2017.

An M.D. Anderson team led by biostatistics professor Sanjay Shete sought to determine the extent to which these messages are reaching the American public at large, as well as key segments of the public who may benefit most from the ads. The team drew its data from the 2018 Health Information National Trends Survey, a nationwide poll conducted by mail by the National Cancer Institute in the first half of 2018. The survey asks about a family’s health in general and about cancer-related topics, as well as about sources of health-related information.

Shete and colleagues analyzed responses from 3,484 adult respondents, including 450 current smokers. The results show only about 4 in 10 American adults (41%) say they read or viewed these advertisements in the previous 6 months. Among smokers, only half (51%) report having seen the ads. In addition, demographic groups most at risk of starting smoking were less likely than the public at large to see the messages. These include younger adults, age 18 to 34, as well as those with lower incomes and a high school education. Only 34 to 38 percent of these groups say they viewed or read the ads.

“When compared to other nationally funded anti-smoking campaigns, the reach and penetration of these industry-sponsored ads were suboptimal,” says Shete in an M.D. Anderson statement. “Our hope, as cancer prevention researchers, is for more people to see these ads and to avoid tobacco or consider quitting.”

The authors say reported exposure to the ads increased as the year 2018 progressed, from 41 percent overall in February to 47 percent in May. As a result, the researchers recommend increasing the duration of the campaign, as well as expanding ad placement to more youth-oriented media.

*     *     *

Crispr Enhanced for RNA Editing

Feng Zhang (McGovern Institute, MIT)

12 July 2019. Researchers expanded the ability of the genetic editing technique known as Crispr to alter the make-up of RNA, adding more ways to treat diseases. A team from the McGovern Institute for Brain Research at Massachusetts Institute of Technology and the Broad Institute, affiliated with MIT and Harvard University, describes the techniques in yesterday’s issue of the journal Science (paid subscription required).

Crispr — clustered, regularly interspaced short palindromic repeats — is a genome editing technique derived from bacterial defense systems, using ribonucleic acid, or RNA, to identify precise locations in DNA for editing. While most work with Crispr up to now involves editing DNA sequences, the technique can also be applied to editing RNA itself. Researchers from the labs of molecular biologist David Cox and geneticist-engineer Feng Zhang at the Broad and McGovern Institutes are seeking more ways Crispr can edit RNA, for research and eventually as therapeutics.

In a November 2017 Science article, many of the same authors, including Cox and Zhang, showed how RNA edits can work. They used Crispr with a simpler and more precise editing enzyme called Cas13 to alter RNA chemistry, changing individual bases in the RNA code. In the earlier paper, the researchers changed specific instances of the base adenosine to inosine.

In their new paper, Cox, Zhang, and colleagues enhanced their RNA editing techniques, making it possible to edit other bases in RNA. In this case, the team added cytosine as a target, changing it into uridine. As with the earlier adenosine-to-inosine changes, these additional cytosine-to-uridine alterations change RNA instructions for protein production in cells.
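To make the effect of these single-base edits concrete, here is a minimal Python sketch, written for this post rather than taken from the paper, showing how an A-to-I change (inosine is read as guanosine by the cell’s machinery) and a C-to-U change each alter the amino acid a codon specifies. The codon table is a small subset of the standard genetic code, and the example codons are illustrative, not the specific sites edited in the study.

    # Illustrative sketch (not from the paper): how single-base RNA edits change
    # the protein instructions in a codon. Inosine (I) is read as guanosine (G)
    # by the cell, so an A-to-I edit behaves like A-to-G; Rescue-style edits
    # convert C to U. The codon table is a small subset of the standard code.

    CODONS = {
        "UAG": "STOP", "UGG": "Trp",
        "UCU": "Ser",  "UUU": "Phe",
        "ACU": "Thr",  "AUU": "Ile",
    }

    def edit_base(rna, position, new_base):
        """Return a copy of the RNA string with one base replaced (0-indexed)."""
        bases = list(rna)
        bases[position] = new_base
        return "".join(bases)

    def describe_edit(codon, position, new_base):
        edited = edit_base(codon, position, new_base)
        return f"{codon} ({CODONS[codon]}) -> {edited} ({CODONS[edited]})"

    # A-to-I editing (read as G): a premature stop codon becomes tryptophan.
    print(describe_edit("UAG", 1, "G"))   # UAG (STOP) -> UGG (Trp)
    # C-to-U editing: a serine codon becomes phenylalanine, removing a
    # potential phosphorylation site, the kind of change Rescue enables.
    print(describe_edit("UCU", 1, "U"))   # UCU (Ser) -> UUU (Phe)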

The new RNA modifications are particularly important, say the authors, because they make possible many more changes to the protein instructions in cells, including changes that affect phosphorylation, the addition of phosphate groups to proteins. Phosphorylation is a key mechanism in cell signaling, and being able to control it adds a powerful therapeutic tool. “To treat the diversity of genetic changes that cause disease, we need an array of precise technologies to choose from,” says Zhang in a joint institute statement. “By developing this new enzyme and combining it with the programmability and precision of Crispr, we were able to fill a critical gap in the toolbox.”

The researchers demonstrated the RNA editing technique they call Rescue — RNA editing for specific C to U exchange — on 24 disease-related mutations in 9 genes. Among these tests, the team used the Rescue technique to change RNA instructions for the protein beta-catenin, which is involved in a number of cell functions, including cell adhesion and gene transcription. The team’s RNA edits made it possible to activate more beta-catenin production and subsequent cell growth on demand, with implications for healing wounds.

The tests also demonstrated another key property of RNA editing not found in DNA edits, its reversibility. The beta-catenin increases were temporary, and could be turned off to prevent uncontrolled cell growth. In addition, the researchers refined Rescue’s targeting to reduce the chance for off-target edits.

Zhang is co-founder of the start-up company Beam Therapeutics in Cambridge, Massachusetts, which is developing precision-medicine treatments by editing individual DNA and RNA bases, similar to the edits described in the paper. As reported by Science & Enterprise in May 2018, Beam Therapeutics licenses Crispr technology from the Broad Institute, as well as from Editas Medicine, another company co-founded by Zhang.

*     *     *

Outcomes Scientists May Miss in a Paperless Office

– Contributed content –

Test tubes in a lab (Martin Lopez, Pexels)

12 July 2019. The benefits of going paperless have been coming to the fore in every business recently, and the scientific field is no different. Whether you run a health care center or a research lab, the chances are that you’re at least considering doing away with paper. Going paperless can cut costs and improve efficiency, after all, both of which are vital in every scientific area. But, before you jump in, you should consider the downsides you may not be expecting from this move.

Admittedly, none of these downsides is significant enough to mean going paperless isn’t the best move. But, as with anything, preparing for the pitfalls is the best way to avoid them. Without further ado, then, let us consider the paperless downsides you might not be prepared for.

Limited space for storage

Taking filing digital can lead to storage issues. Of course, these are problems many scientists face when using traditional filing methods, but you’ve probably developed solutions such as extended storage spaces or outside filing units.

When you transfer to digital, though, you may assume you’re getting limitless space. Sadly, that isn’t the case. Even MacBooks may struggle to store files from an entire pharmacy, let alone a research lab. And, if you overload hard drives, you could end up with slow systems and storage scattered all over the place.

The good news is that cloud storage can pretty much eliminate this problem, with expanded storage capacity all in one place. If you don’t fancy putting your files in an elusive cloud, though, there are other options, such as installing a program that regularly cleans your Mac to keep space free, or even adding a second internal hard drive. Either way, preparation is key to making sure you can file everything with ease.

Outside risks to confidential files

Whether you’re storing client information or new research findings, file protection is paramount. Reducing security breaches is actually one of the major arguments for going paperless and stopping the wrong people from getting their hands on your records.

The trouble is that going digital exposes those files to risks you can’t see, which could make keeping information safe even harder if you don’t prepare ahead of time. You only need to consider the fallout from the 2018 NHS leaks to realize how bad that could be, especially if you store any patient information. Luckily, there are easy ways around this, but you’ll need to put them in place before paperless is even a possibility. For one, you’ll want to make sure any cloud storage you use is on a closed network with security measures you can trust. On your systems, you should also install antivirus and anti-malware software, and update it regularly, to keep your new filing system as safe as it can be for a long time to come.

Once you’ve taken care of risks like these, though, there’s really no reason why paperless can’t power your scientific enterprise.

*     *     *

Bacteria Recruited to Produce Graphene

Anne Meyer (J. Adam Fenster, University of Rochester)

11 July 2019. Labs in the U.S. and the Netherlands developed techniques to sustainably produce high-quality graphene with more capabilities, using a strain of bacteria. Researchers from University of Rochester in New York and Delft University of Technology describe their process in the 4 July issue of the journal ChemistryOpen.

A team led by Rochester biologist and materials scientist Anne Meyer and quantum-nanomaterials professor Herre van der Zant in Delft is seeking more sustainable and scalable methods to produce graphene, a material with many desirable qualities for a range of industries. The material is very light, strong, chemically stable, and only one atom thick, with its carbon atoms arrayed in a hexagonal pattern. Graphene can conduct both heat and electricity, with many applications in electronics, energy, and health care. In 2010, two researchers at University of Manchester in the U.K. received the Nobel Prize in physics for their discoveries on graphene.

But producing graphene safely in high volumes is difficult. Current production methods using chemical vapor deposition can produce single layers of graphene. While this technique yields a high-quality material, it is a slow process requiring a controlled environment to grow the graphene, with continuing problems separating the graphene from its growth surface. Another technique exfoliates, or shreds, layers of graphite, the material found in pencils, into graphene oxide, then chemically reduces the graphene oxide to pure graphene. While this process is faster and more scalable than chemical vapor deposition, it requires highly toxic and unstable (i.e., explosive) hydrazine to reduce the graphene oxide to graphene.

The Rochester-Delft team investigated an alternative process that replaces hydrazine with a safer and more sustainable method while still yielding high-quality graphene. Their process employs a bacterium known as Shewanella oneidensis, found in deep-sea environments as well as in soil. Shewanella oneidensis has an unusual characteristic for a microbe, namely an appetite for heavy metals. These bacteria transfer electrons to metals outside their cells, chemically reducing them, and are thus studied as a method for environmental clean-up of chemical spills.

In this project, Meyer, van der Zant, and colleagues used Shewanella oneidensis in a biological process, instead of the chemical hydrazine, to reduce flakes of graphene oxide to pure graphene. The results show their microbial process can produce both flakes and bulk graphene with physical properties at least comparable to those of chemically produced graphene. In addition, microbe-produced graphene outperforms chemically produced graphene in some respects: the bacterial graphene is thinner and more stable, which allows for longer storage.

Moreover, the researchers found bacterial graphene has specialized qualities not found in chemically-produced varieties, including an affinity for biological molecules. This property makes bacterial graphene suitable for field-effect transistors, or FETs, electronic components found in biosensors and medical devices. “When biological molecules bind to the device,” says Meyer in a University of Rochester statement, “they change the conductance of the surface, sending a signal that the molecule is present.”

Meyer notes, “To make a good FET biosensor you want a material that is highly conductive but can also be modified to bind to specific molecules,” and adds that bacterial graphene has left-over carbon-oxygen bonds that can bind to target molecules. Bacterial graphene can also be formulated into conductive inks for printing graphene circuits on electronic components, fabrics, or paper.
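As a rough illustration of the sensing principle Meyer describes, the short Python sketch below, our own and not code from the Rochester lab, flags a binding event when a simulated conductance trace shifts noticeably from its baseline; the threshold and readings are invented for the example.

    # Hypothetical illustration of the sensing principle described above: a
    # graphene FET biosensor reports a binding event when surface conductance
    # shifts noticeably from its baseline. Thresholds and readings are invented
    # for the example; a real device would be calibrated to its target molecule.

    from statistics import mean

    def detect_binding(readings, baseline_window=5, threshold=0.05):
        """Return the first (index, value) whose relative change from the
        baseline conductance exceeds `threshold` (0.05 = 5 percent)."""
        baseline = mean(readings[:baseline_window])
        for i, value in enumerate(readings[baseline_window:], start=baseline_window):
            if abs(value - baseline) / baseline > threshold:
                return i, value
        return None

    # Simulated conductance trace (arbitrary units): stable, then a shift
    # when the target molecule binds to the graphene surface.
    trace = [1.00, 1.01, 0.99, 1.00, 1.01, 1.00, 1.08, 1.12, 1.13]
    hit = detect_binding(trace)
    print("binding detected at sample", hit[0] if hit else None)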

*     *     *

Analysis – Most New Drugs Not Adding Patient Benefits

Tablet with charts (Pexels.com)

11 July 2019. An analysis of drugs approved for use in Germany since 2011 shows no evidence most treatments provide new or added benefits to patients compared to standard care. The findings are reported by a team from Germany’s Institute for Quality and Efficiency in Health Care, in yesterday’s issue of the journal BMJ.

The Institute for Quality and Efficiency in Health Care is Germany’s health technology assessment agency, which reviews the benefits of new drugs for the country’s patients; most of those drugs are approved by the European Medicines Agency, a body of the European Union. A 2011 law for reforming pharmaceutical markets in Germany calls for the Institute to evaluate the benefits of newly approved medications over the standard of care for the diseases treated.

Drug developers are required to submit evidence indicating added benefits of new treatments compared to standard care. In its assessments, the Institute, based in Cologne, indicates whether a drug provides minor, considerable, or major added benefits to patients. Orphan drugs are exempt from this evaluation.

The researchers, led by Beate Wieseler, who heads the Institute’s drug assessment department, reviewed 216 new drugs entering the German market between 2011 and 2017. Of the 216 new treatments, 125 or 58 percent showed no evidence of added benefits compared to the standard of care. For another 17 approved drugs (8%), the evidence was not quantifiable or showed even less benefit to patients.

A quarter (25%) of the new approved drugs, 54 of 216, showed major or considerable added benefits, while 20 of the new drugs (9%) reported minor added benefits. New treatments for cancer, cardiovascular, and infectious diseases were most likely to display added benefits compared to standard care, although for infectious diseases, much of that evidence was not quantifiable. Drugs to treat diabetes and psychiatric or neurological disorders were more likely to show no added benefits.
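For readers checking the arithmetic, the short Python snippet below simply tallies the assessment categories reported in the BMJ analysis and reproduces the percentages quoted above.

    # Tally of the assessment categories for the 216 drugs entering the German
    # market in 2011-2017, using the counts reported in the BMJ analysis.
    assessments = {
        "no added benefit": 125,
        "not quantifiable or lesser benefit": 17,
        "major or considerable added benefit": 54,
        "minor added benefit": 20,
    }

    total = sum(assessments.values())   # 216 drugs in all
    for category, count in assessments.items():
        print(f"{category}: {count} ({count / total:.0%})")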

In some cases, says Wieseler in an Institute statement, evidence from studies comparing new drugs to the standard of care is just not available, and in a few instances suitable data are available, but show no evidence of benefits. And in still other cases data are available, but the comparison treatment, notes Wieseler, “is unsuitable, for example, because it is not approved for the patients investigated. In this situation, there is no information that could support the decision by patients and physicians for one of the available treatment alternatives.”

The team attributes much of this situation to pressure on regulatory authorities to speed up the approval process for new drugs. One justification given for accelerated regulatory reviews is the requirement for post-marketing studies, further evaluations of new drugs after initial approval is given. The researchers point to reviews showing only about half of these post-marketing studies are completed within the required time, with some taking as long as five or six years. In other cases, new drugs may show evidence of added benefits to patients, but address targets or use mechanisms of action similar to those of other new drugs.

The researchers acknowledge many of the genome-driven new cancer treatments that address specific genetic variations responsible for some cancers serve only a minority of patients, although that’s the nature of precision medicine. The authors recommend a more proactive approach by stakeholders, including payers (e.g. insurance companies and health authorities) requiring more solid evidence of outcomes that benefit patients to qualify for reimbursements. And for the longer term, the researchers suggest more early direction from health authorities on new drugs, encouraging development of therapies meeting high priority needs rather than leaving those decisions to drug makers.

*     *     *

Companies Responsible for Their Indoor Air Quality

– Sponsored content –

Green laser beams (SD-Pictures, Pixabay)

11 July 2019. Companies in manufacturing or distribution must keep a close watch on the quality of the air inhaled by their staff while on the job. Substances made at their work sites, or stored for any length of time, may emit volatile compounds into the air, which these businesses need to monitor.

Two federal agencies in the U.S., the Centers for Disease Control and Prevention (CDC) and the National Institute for Occupational Safety and Health (NIOSH), provide guidelines for workplace safety and health. Those guidelines include indoor environmental quality. Some air-quality concerns are easy to track, such as dampness or humidity. The guidelines note, however, that a key concern of employees is exposure to gaseous pollutants, yet it’s more difficult to pinpoint the cause of these air quality problems, since symptoms may disappear once employees leave the workplace.

CDC and NIOSH list some of the symptoms indicating harmful chemicals in the air: itchy or watering eyes, skin irritation or rashes, nose and throat irritation, nausea, headache, dizziness, and fatigue. As a result, indoor air quality needs to be monitored while staff are on the job, with results reported instantaneously if possible, to alert employees and others in the facility and prevent these symptoms from developing.

A technology for reliable, real-time measurement of indoor air contaminants is a form of spectrometry that uses light to analyze airborne molecules. Spectrometry sends laser beams through a chemical compound to activate its molecules, with the laser aimed at a sensor. Before the beam reaches the sensor, a grating separates the light into its component wavelengths, and the resulting collection of colors provides a unique signature for each compound. These signatures are then captured as data that the system can store, display, or process further.
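As a rough sketch of the signature-matching step this describes, the Python example below, with invented reference values rather than real spectra, compares a measured spectrum against stored signatures and reports the closest match above a similarity cutoff.

    # Hypothetical sketch of signature matching: a measured spectrum (intensity
    # per wavelength bin) is compared against stored reference signatures, and
    # the closest match above a similarity cutoff is reported. The reference
    # values and compound names below are invented for illustration.

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    REFERENCE_SIGNATURES = {
        "compound A": [0.1, 0.8, 0.3, 0.0, 0.2],
        "compound B": [0.7, 0.1, 0.1, 0.6, 0.0],
    }

    def identify(measured, cutoff=0.95):
        scores = {name: cosine_similarity(measured, sig)
                  for name, sig in REFERENCE_SIGNATURES.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] >= cutoff else None

    sample = [0.12, 0.79, 0.28, 0.02, 0.21]   # noisy reading of compound A
    print(identify(sample) or "no confident match")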

Blue Industry and Science has advanced laser spectrometry to rapidly analyze air quality more efficiently with what the company calls virtual lasers. This technology makes it possible to activate multiple lasers to detect, identify, measure, and report on multiple compounds simultaneously. This kind of laser spectrometry can provide early warning of potentially harmful compounds in the air to prevent worker or visitor illnesses.

*     *     *

Cold-Chain Drug Delivery Tested Via Drone

Medical drone illustration (Volans-i Inc.)

10 July 2019. A group of companies and organizations designed and tested a long-distance drone capable of delivering drugs needing constant cold temperatures. The consortium led by the humanitarian organization Direct Relief included participation from drone maker and delivery company Volans-i Inc., drug maker Merck, temperature-controlled packaging company SoftBox, and network company AT&T.

The project team is seeking better ways to deliver drugs, particularly vaccines, needing continuous refrigeration from manufacture through administration to the patient — the so-called cold chain — in difficult conditions. The optimum temperature for many therapies and vaccines is between 2 and 8 degrees Celsius, or about 36 to 46 degrees Fahrenheit, which must be maintained until the products are given to recipients. If that controlled temperature is not maintained throughout the supply chain, sensitive drug or vaccine cargoes are at risk of spoilage.

Cold-chain delivery is particularly difficult in remote regions or disaster areas, where roads are cut off or communities are isolated. “Experience and research consistently show,” says Andrew Schroeder, director of research and analysis at Direct Relief in Santa Barbara, California, in a joint statement, “that those most at risk in disasters live in communities which are likely to be cut off from essential health care due to disruption of transportation and communications. Drone delivery is one of the most promising answers to this problem.”

The team earlier tested drones, also known as unmanned aerial vehicles or UAVs, over shorter distances and within sight of the testing crew in Switzerland and Puerto Rico. In this test, an electric-powered autonomous drone made by Volans-i, in San Francisco, successfully delivered a simulated cold-chain drug package between islands in the Bahamas, crossing open water, and well beyond the line of sight of the dispatch team.

The project team says it maintained temperatures as low as -70 degrees C during the entire flight, a temperature required for some drugs. The payloads were carried in a Skypod, a thermally insulated package designed for drones by cold-chain packaging systems maker Softbox in Long Crendon, U.K., and monitored continuously during the flight through a cloud connection provided by AT&T. The Skypod, says Softbox, has sensors linked to the cloud through AT&T’s Internet-of-things devices.
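To illustrate the kind of check such a connected monitor performs, here is a minimal Python sketch, not Softbox’s or AT&T’s actual software, that compares logged temperature readings against the 2 to 8 degree C range cited earlier and flags any excursion; the readings are invented.

    # Illustrative cold-chain check: compare each logged temperature reading
    # against the required range and raise an alert on any excursion. The 2-8 C
    # range is the one cited above for many vaccines; the readings are invented.

    REQUIRED_RANGE_C = (2.0, 8.0)

    def check_readings(readings, required=REQUIRED_RANGE_C):
        """Return a list of (timestamp, temperature) excursions outside the range."""
        low, high = required
        return [(t, temp) for t, temp in readings if not (low <= temp <= high)]

    flight_log = [("10:00", 4.5), ("10:10", 5.1), ("10:20", 8.6), ("10:30", 6.2)]
    for timestamp, temp in check_readings(flight_log):
        print(f"ALERT: {temp} C at {timestamp} is outside the 2-8 C range")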

The project team next plans to extend the tests to Latin America and Africa. “More remains to be done to operationalize medical cargo drones in emergencies,” notes Schroeder. “But successful tests like this one demonstrate that remarkable new humanitarian capabilities are emerging quickly.”

*     *     *

Start-Up Lands Grant to Clean-Up Space Debris

Dragsail illustration (David Spencer, Purdue University)

10 July 2019. A company founded earlier this year is receiving a small-business grant from NASA for a technology to remove small satellites and other debris from space. The agency awarded Vestigo Aerospace LLC, a spin-off enterprise from Purdue University in West Lafayette, Indiana, $125,000 to demonstrate the feasibility of the company’s dragsail system for retrieving nanosatellites and other small objects from earth orbit.

Despite the popular idea of an infinite outer space, the amount of useful territory in low earth orbit is rapidly filling up. To a certain extent, NASA’s Small Satellite Technology program is responsible for the proliferation of small research systems developed by academic and commercial labs and launched in standardized CubeSat containers. A recent market research study forecasts up to 2,800 of these small devices will be launched over the next five years, while plans are proceeding for entire constellations of new satellites, numbering in the thousands, to deliver high-speed Internet service.

Vestigo Aerospace proposes a system to de-orbit, or retrieve from space, small satellites and other objects. The device, called a dragsail, is a pyramid-shaped passive sail for bringing down spacecraft weighing up to 400 kilograms (882 lbs.). Dragsails would be deployed at the end of the small satellites’ missions, increasing drag to take them out of earth orbit, rather than leaving them to circle for years or decades. The devices are designed to be carried in a CubeSat, then operate for 11 days to retrieve the target objects.
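As a back-of-the-envelope illustration, not Vestigo’s engineering model, the Python sketch below uses the standard drag equation, force equals one-half times air density times drag coefficient times area times velocity squared, to show how a deployed sail multiplies the drag deceleration a small satellite experiences; the density, velocity, mass, and areas are rough assumed values for low earth orbit.

    # Rough sketch of why a deployed dragsail hastens de-orbit: atmospheric drag
    # F = 0.5 * rho * Cd * A * v**2 scales with frontal area A, so a large sail
    # multiplies the deceleration. Values below are illustrative assumptions,
    # not mission figures.

    def drag_deceleration(area_m2, mass_kg, rho=1e-11, cd=2.2, velocity=7700.0):
        """Drag force divided by mass, in m/s^2 (rho in kg/m^3, v in m/s)."""
        force = 0.5 * rho * cd * area_m2 * velocity**2
        return force / mass_kg

    satellite_mass = 10.0   # kg, a small CubeSat-class spacecraft
    bare_area = 0.01        # m^2 frontal area without a sail
    sail_area = 10.0        # m^2 with a dragsail deployed
    print("without sail:", drag_deceleration(bare_area, satellite_mass), "m/s^2")
    print("with sail:   ", drag_deceleration(sail_area, satellite_mass), "m/s^2")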

The project will look as well into the safe return of retrieved objects to earth. Purdue aeronautics and astronautics professor David Spencer, director of the university’s Space Flights Projects Laboratory and founder of Vestigo Aerospace, says in a university statement, “The team will also investigate the use of dragsails for targeted reentry of space objects, to reduce the uncertainty in atmospheric reentry corridors and debris impact zones.” The Purdue lab is partnering with Vestigo on the project.

The NASA grant, awarded under the agency’s Small Business Innovation Research program, calls for Vestigo Aerospace to design a model and conduct tests to prove the concept of dragsails for small object retrieval in space. The tests need to show basic functionality of the dragsail model in the lab and that its components work together. If the researchers can prove the dragsail concept in this first phase, a follow-on project would develop a working prototype.

“Through the six-month study,” adds Spencer, “we will advance dragsail technology for the de-orbit of small satellites and launch vehicle stages. The safe disposal of space objects upon mission completion is necessary to preserve the utility of high-value orbits.”

*     *     *

Voice Indicators to Monitor Dementia Progression

Brain circuits illustration (HypnoArt, Pixabay)

9 July 2019. A company developing diagnostics for cognitive disorders from changes in speech patterns is partnering on vocal biomarkers to track progression of a form of dementia. Financial and intellectual property details of the agreement between Winterlight Labs Inc. in Toronto, Ontario, Canada and the biotechnology enterprise Alector in South San Francisco, California were not disclosed.

Winterlight Labs is identifying indicators of disease progression for frontotemporal dementia, or FTD, a progressive neurodegenerative disease affecting the frontal and temporal lobes of the brain. FTD is marked by a gradual decline in behavior or language, similar to other dementias, but usually does not affect memory. People with FTD often find it difficult to plan or organize activities, engage in social interactions, behave appropriately in professional situations, or care for themselves. According to the Association for Frontotemporal Degeneration (frontotemporal degeneration is another name for the disease), the progression of symptoms in behavior or language can vary from 2 to 20 years, and according to Alector the disease affects 50,000 to 60,000 people in the U.S.

Winterlight is a four-year-old company that employs computational linguistics, machine learning, and neuroscience to assess cognitive health, such as memory and reasoning. The company’s main product is a tablet-based system that analyzes hundreds of short segments of a person’s speech to discover underlying indicators of cognitive decline. The technology is designed for use in clinical trials, as well as in senior care facilities, to detect the earliest indicators of cognitive decline, often before noticeable symptoms occur and when interventions may have more beneficial effects.
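As a toy illustration of the kind of speech-derived measures such a system might compute, and not Winterlight’s actual pipeline, the Python sketch below extracts two simple features from a transcribed speech segment: vocabulary diversity and mean word length, both of which tend to drop when a speaker relies on simpler words and grammar.

    # Toy illustration (not Winterlight's pipeline) of speech-derived features:
    # vocabulary diversity (distinct words / total words) and mean word length,
    # computed from a transcribed speech segment.

    import re

    def speech_features(transcript):
        words = re.findall(r"[a-zA-Z']+", transcript.lower())
        if not words:
            return {"type_token_ratio": 0.0, "mean_word_length": 0.0}
        return {
            "type_token_ratio": len(set(words)) / len(words),
            "mean_word_length": sum(len(w) for w in words) / len(words),
        }

    sample = "I went to the store and then I went home and then I sat down"
    print(speech_features(sample))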

Alector is a biotechnology company developing treatments that invoke the immune system against neurodegenerative diseases like FTD. The company’s treatments are designed to repair genetic mutations responsible for malfunctions in the immune system that allow neurological disorders to take hold, thus enabling the immune system to counteract progression of the disorder.

One of the company’s lead therapy candidates is code-named AL001, a targeted synthetic antibody designed to increase production of a protein called progranulin, which regulates immune activity in the brain and is associated with FTD, Alzheimer’s disease, and Parkinson’s disease. In April, the company reported on an early-stage clinical trial of AL001 showing the treatment achieved a dose-dependent increase in progranulin in healthy volunteers, without causing serious drug-related adverse effects.

Winterlight Labs and Alector are collaborating on two projects. The first study is using Winterlight’s speech-based biomarkers to follow people with FTD for one year. During that time, participants’ cognitive conditions are assessed through the company’s tablet app, to lessen the need for clinic visits.

Early data from the study, says Winterlight, show people with primary progressive aphasia, a variation of FTD, express themselves with simpler words and grammar than people of similar ages in a comparison group without the disorder. Data from this first project are expected to help design end-points or treatment efficacy targets for a mid-stage clinical trial of Alector’s AL001.

*     *     *