
London: Deep Learning in Healthcare


At RE•WORK Summits, speakers are invited to present advances from the world’s leading innovators, showcase the opportunities in emerging industry trends, and discuss the impact on business and society, all centered around the theme of Artificial Intelligence and Deep Learning. Many data scientists, machine learning scientists, and entrepreneurs attend these summits to network and learn about the technologies that will shape the future. As an analyst from Synced, I was invited to the RE•WORK Deep Learning in Healthcare Summit in London.

The topic of this particular summit was Deep Learning in Healthcare, and there were a total of 29 speakers. This is a compact report on the summit, introducing each speaker and the content of their presentation, along with some analysis of select topics.

Medicine, by definition, is an information science: it requires the capacity to actively acquire individualized and context-specific data, and then to iteratively evaluate, assimilate and refine this information against a vast body of medical knowledge in order to arrive at a small solution space. In practice, doctors work hard to find an implementable treatment policy that corresponds to the patient data and that solution space. In this regard, Deep Learning has great potential in the medical domain. To realize it, however, we must first combine DL and medicine in a clinically relevant manner.

Speakers at this summit presented their own ideas on this question, with a focus on the processing and recognition of medical data and images. Luca Bertinetto from the University of Oxford presented Fully-Convolutional Siamese Networks for object tracking. Viktor from SkinScanner and Anastasia from Beauty.AI both proposed using skin images to diagnose skin diseases and generate treatment plans. Fangde Liu from ICL presented a method to make personal devices capable of applying DL to medical images. Johanna from Oxford University and Daniel from Ada both discussed using data collected by remote equipment. What follows are my notes, which I would like to share with you.

Topic 1: Deep learning and healthcare in practice

Towards the Development of Clinically Relevant Applications of Deep Learning in Healthcare

Speaker: Michael Kuo, UCLA

Dr. Kuo received his Medical Degree from Baylor College of Medicine, and did his clinical training in Diagnostic Radiology at Stanford University, where he also completed a clinical fellowship in Cardiovascular and Interventional Radiology. He served as Assistant Professor in the Department of Radiology at the University of California-San Diego from 2003-2009. In 2009, he moved to the University of California-Los Angeles, where he is an Associate Professor in the Departments of Radiology, Pathology and Bioengineering, while also serving as Director for both the Radiogenomics and Radiology-Pathology Programs. Dr. Kuo is an international leader in the field of Radiogenomics, where he has published seminal foundational papers. In radiogenomics, his principal area of research, his group applies integrative computational and biological approaches to derive actionable clinical insights and tools centered around patient stratification and therapeutic response prediction, leveraging large multi-scale relational data sets including clinical outcomes, clinical imaging, and tissue, cellular and sub-cellular biological data.


In this talk, Michael presented the many challenges faced by Deep Learning in medicine, and described how radiogenomics can help overcome several of these limitations. He thinks the limited amount of data and the complexity of the images are the two main problems today, along with the lack of a “gold standard”. He also shared some of his group’s experience in cancer treatment, and talked about the possibility of using medical images to predict the response to certain therapies. Lastly, he shared some thoughts on the opportunities in combining deep learning and healthcare.

Challenges for Delivering Machine Learning in Health

Speaker: Neil Lawrence, University of Sheffield

Neil Lawrence is a Professor of Machine Learning at the University of Sheffield, currently on leave of absence at Amazon, Cambridge. His main technical research interest is machine learning through probabilistic models. He focuses on both the algorithmic side of these models and their applications. He has a particular interest in applications in personalized healthcare and in developing countries. Neil is well known for his work with Gaussian processes, and has proposed Gaussian process variants for many of the successful deep learning architectures. He is also an advocate of the ideas behind “Open Data Science” and is active in public awareness and community organizations (see https://www.theguardian.com/profile/neil-lawrence). He has been both program chair and general chair for the NIPS Conference.


Neil started off talking about big data and its effects. He thinks big data brings new challenges and opportunities for both personalized health and mental health. (Regarding mental health, Valentin from Ieso Digital Health also presented some ideas on March 1st.) He then focused on three challenges for machine learning in health:

1. Paradoxes of the Data Society: Breadth vs Depth

2. Quantifying the Value of Data: accessibility, validity, usability

3. Privacy, loss of control, marginalization: society is becoming harder to monitor, but the individual is becoming easier to monitor.

For his final thought, he suggested that data science offers a great deal of promise for personalized health, and that it is incumbent on us to navigate the challenges and pitfalls. Personally, I liked the final sentence on his slide: “Many solutions rely on education and awareness”.

Topic 2: AI in drug discovery and development

Deep Learning-based Diagnostic Inferencing and Clinical Paraphrasing

Speaker: Oladimeji Farri, Philips Research

Oladimeji (Dimeji) Farri received his PhD in Health Informatics from the University of Minnesota, and MBBS (Medicine and Surgery) from the University of Ibadan, Nigeria, in 2012 and 2005 respectively. He is currently a Senior Research Scientist at Philips Research – North America (PRNA) in Cambridge, Massachusetts, where he leads the Artificial Intelligence Lab. His interests are in clinical NLP, text analysis, and question answering/dialog systems to address medical dilemmas experienced by patients, consumers, and healthcare providers. His recent work includes the use of deep learning to offer solutions for clinical decision support and patient engagement.


This presentation highlighted some of the work at Philips Research NA’s AI Lab, which focuses on neural network-based diagnostic inferencing and clinical paraphrasing. Oladimeji stated that the information overload of medical knowledge means it is becoming harder for doctors or patients to find an efficient way to reach a reliable diagnosis. He showed two methods, a Knowledge Graph Approach and a Condensed Memory Network for Diagnostic Inferencing, and noted that their next step is to combine the two. Later, he talked about a method called “Neural Clinical Paraphrase Generation” that can paraphrase what doctors say to patients, since patients’ anxiety and confidence depend heavily on how much they understand of what their doctors tell them. Finally, he showed us a technique called Adverse Drug Event Detection in Tweets with Semi-Supervised CNN, which surfaces potential adverse drug events from real-time social media streams.
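The semi-supervised training procedure and the real tweet corpus were not shown in the talk, so the snippet below is only a minimal sketch of the supervised CNN backbone that such a tweet classifier rests on; the vocabulary size, filter widths, and labels are all assumptions for illustration.

```python
# Minimal sketch of a CNN tweet classifier for adverse-drug-event detection.
# Only the supervised backbone is shown; the semi-supervised scheme from the
# talk and the real vocabulary/labels are assumptions.
import torch
import torch.nn as nn

class TweetCNN(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100, num_filters=64,
                 kernel_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes])
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)      # (batch, embed_dim, seq_len)
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(feats, dim=1))  # ADE vs. no-ADE logits

model = TweetCNN()
dummy_batch = torch.randint(0, 20000, (8, 40))         # 8 tweets, 40 tokens each
print(model(dummy_batch).shape)                        # torch.Size([8, 2])
```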

Application of Deep Neural Networks to Biomarker Development

Speaker: Polina Mamoshina, Insilico Medicine

Polina Mamoshina is a senior research scientist at Insilico Medicine, a Baltimore-based bioinformatics and deep learning company focused on reinventing drug discovery and biomarker development; she is also part of the computational biology team of Oxford University’s Computer Science Department. Polina graduated from the Department of Genetics at Moscow State University. She was one of the winners of GeneHack, a Russian nationwide 48-hour hackathon on bioinformatics attended by hundreds of young bioinformaticians, held at the Moscow Institute of Physics and Technology. Polina is involved in multiple deep learning projects at Insilico Medicine’s Pharmaceutical Artificial Intelligence division, working on the drug discovery engine and developing biochemistry, transcriptome, and cell-free nucleic acid-based biomarkers of aging and disease. She recently co-authored seven academic papers in peer-reviewed journals.

Through intelligent analysis of high-throughput screening experiments and large repositories of biomedical data, applications of deep neural networks combined with domain expertise can help optimize the biomarker development process.


This presentation covered aspects of creating multi-modal biomarkers of human age, trained on human blood biochemistry and transcriptomics data. Polina showed us the classification accuracy of their neural network over a set of drug profiles, and also talked about applications of Generative Adversarial Networks (GANs) to image synthesis.
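The presentation did not include implementation details, but the general shape of a blood-biochemistry age predictor can be sketched as a small feed-forward regressor; the number of markers, the architecture, and the toy data below are assumptions, not Insilico Medicine’s actual model.

```python
# Minimal sketch of the kind of model behind "biomarkers of human age": a small
# feed-forward network that predicts chronological age from a blood
# biochemistry panel. Feature count, layer sizes, and data are assumptions.
import torch
import torch.nn as nn

n_markers = 40                              # e.g. albumin, glucose, ... (assumed)
model = nn.Sequential(
    nn.Linear(n_markers, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),                       # predicted age in years
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy stand-ins for blood panels and the donors' known ages.
panels = torch.randn(512, n_markers)
ages = torch.rand(512, 1) * 60 + 20

for _ in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(panels), ages)
    loss.backward()
    optimizer.step()
# The gap between predicted and chronological age can then be read as an
# aging biomarker for an individual sample.
```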

Topic 3: Deep learning in medical imaging

Deep Learning in Medical Imaging – Successes and Challenges

Speaker: Ben Glocker, Imperial College London

Ben Glocker is a Lecturer in Medical Image Computing at Imperial College London’s Department of Computing. He holds a PhD from TU Munich, was a post-doc at Microsoft Research Cambridge, and was a research fellow at the University of Cambridge. He received several awards for his work on medical image analysis, including the Francois Erbsman Prize, the Werner von Siemens Excellence Award, and an honorary mention for the Cor Baayen Award. Ben is the deputy head of the BioMedIA group, and his research focuses on applying machine learning techniques for advanced biomedical image computing and medical computer vision.


In his presentation, Ben talked about some of the problems in applying deep learning in clinical practice. He used brain lesions and cardiac analysis as examples to introduce these challenges (and his note on the slide showed that these are just “some” of the challenges):

  1. Learning the right features
  2. How do we know when the machine gets it wrong?
  3. Can we predict failure, and can we make the machine robust to changes in clinical data?

He then talked about some of their work on these problems, and argued that we could try to get machines to perform better than humans by applying ground truth. These challenges still exist, and there are still many problems waiting to be solved in the application of deep learning in healthcare.

Quantitative MRI-Driven Deep Learning

Speaker: Kyung Hyun Sung, UCLA

Dr. Sung received his M.S. and Ph.D. degrees in Electrical Engineering from the University of Southern California, Los Angeles in 2005 and 2008, respectively. From 2008 to 2012, he completed his postdoctoral training at Stanford in the Department of Radiology, and joined the University of California, Los Angeles (UCLA) Department of Radiological Sciences in 2012 as an Assistant Professor. His research interest is developing fast and reliable MRI methods that can provide improved diagnostic contrast and useful information. In particular, his group (http://mrrl.ucla.edu/meet-our-team/sung-lab/) is currently focused on developing advanced quantitative MRI techniques for early diagnosis, treatment guidance, and therapeutic response assessment for oncologic and cardiac applications.


First, Kyung Hyun Sung introduced the background of cancer and some examples of prostate cancer. Then, he described the state of current MRI technology and its challenges, like data heterogeneity and geometric distortion. Afterwards, he presented deep learning methods to effectively distinguish between indolent and clinically significant prostatic carcinoma using multi-parametric MRI (mp-MRI). The main contributions include:

  • Pre-trained convolutional neural network (CNN) models to avoid massive learning requirements
  • Applying the proposed DL framework to the computerized analysis of prostate multi-parametric MRI for improved cancer classification.

Finally, he showed us a two-stage hybrid DL model, which demonstrated the feasibility of outperforming conventional mp-MRI scoring systems.
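No code was shown, but the first bullet point above describes a common pattern: reuse a CNN pre-trained on natural images as a fixed feature extractor and train only a small classifier on the limited MRI data. A minimal sketch of that pattern, with the dataset, labels, and choice of ResNet-18 being my assumptions, might look like this:

```python
# Sketch of the "pre-trained CNN + lightweight classifier" pattern, applied to
# a toy binary task (indolent vs. clinically significant lesion patches).
# Data, labels, and the ResNet-18 backbone are assumptions for illustration.
import torch
import torchvision.models as models

backbone = models.resnet18(pretrained=True)
backbone.fc = torch.nn.Identity()          # keep the 512-d penultimate features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False                # frozen: no large-scale training needed

classifier = torch.nn.Linear(512, 2)       # small head trained on limited MRI data
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# Toy stand-ins for mp-MRI patches rendered as 3-channel images, plus labels.
patches = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,))

with torch.no_grad():
    feats = backbone(patches)              # (16, 512)
loss = loss_fn(classifier(feats), labels)
loss.backward()
optimizer.step()
print(float(loss))
```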

Panel: How to Overcome Challenges Faced in Medical Imaging Databases

Laurens Hogeweg, Deep Learning Engineer, CosMoniO

Jorge Cardoso, Lecturer, UCL

Reza Khorshidi, Chief Scientist, AIG

Moderator: Anastasia Georgievskaya, Research Scientist, Beauty.AI


In this panel, the speakers talked about potential problems with medical imaging databases. After adopting deep learning, a database may need to add many new label types, such as images or notes from doctors and patients, and adding these labels and data may affect the normal functioning of the database. To address this, scientists need to combine different kinds of data and figure out how labels should be applied to the data. They also talked about whether hospitals and other institutions should make data openly available for others to use in experiments. Making data openly available may raise privacy concerns, but withholding data may hold back the development of Deep Learning applications in healthcare. Perhaps transfer learning can help with this problem, as Gilles mentioned in his presentation.

Topic 4: Deep learning for diagnostics

Artificial Intelligence and Optical Coherence Tomography – Reinventing the Eye Exam?

Speaker: Pearse Keane, Moorfields Eye Hospital

Pearse A. Keane, MD, FRCOphth, is a consultant ophthalmologist at Moorfields Eye Hospital, London and an NIHR Clinician Scientist, based at the Institute of Ophthalmology, University College London (UCL). Dr Keane specialises in applied ophthalmic research, with a particular interest in ocular imaging. He joined Moorfields in 2010; prior to this, he carried out retinal imaging research at the Doheny Eye Institute in Los Angeles. He is originally from Ireland and received his medical degree from University College Dublin (UCD).


Pearse introduced some fields driven by technology, then talked about the bionic eye and the development of technologies for examining the eye. He then discussed the motivation behind applying deep learning to ophthalmology, the process required to establish a research collaboration between the NHS and companies like Google DeepMind, and the goals of their research. Finally, he expressed his hope of reinventing ophthalmology by applying deep learning.

Topic 5: Wearables in healthcare

10,000 Steps; So What? Are Wearable Technologies the Future of Clinical Trials?

Speaker: Johanna Ernst, University of Oxford

Johanna Ernst is a DPhil student at the University of Oxford working in affiliation with the Institute for Biomedical Engineering and the George Institute for Global Health, where she is involved with the center’s Program on Deep Medicine. As part of her research, Johanna explores the use of wearable technologies for heart failure risk stratification. She previously worked as a visiting researcher at Misfit Inc., a world-leading wearable technology developer, where she investigated the use of commercially available physical activity monitors for clinical trial monitoring.


Johanna started off by introducing the many wearable devices around us today, and described the challenges facing the pharmaceutical industry: patent expiration, R&D cost, the one-size-fits-all model, added value, faster evaluation, and personalization. She showed us some of her research on how to address these challenges. From the conversation I had with her after the talk, many wearable devices share one common problem: users will abandon them unless they know what they can do with the data collected.

Topic 6: Deep learning in neuroscience

Micro EMG: Imaging the Inner Structure of the Human Muscle Guided by a Deep Learning Approach to Muscle Fiber Localization

Speaker: Bashar Awwad Shiekh Hasan, University of Newcastle

Dr. Awwad Shiekh Hasan is a senior research associate in computational neuroscience at Newcastle University. His research is focused on the use of computational modelling to expand our understanding of the fundamental neural mechanisms of cognition and perception, and how that understanding can be translated into action. He has worked in several interdisciplinary areas including Brain-Computer Interfaces, neural imaging, and most recently the development of medical devices. He holds a British patent and has published extensively in leading scientific outlets in neuroscience and machine learning.


Bashar started with an introduction to motor unit structure, then discussed their latest development, a multi-channel electromyography needle. Using flexible electrode technology, 64 electrodes are placed in a custom-designed pattern to maximize the information available for localizing muscle fibres in humans. After the motor units are isolated, an unsupervised stacked denoising auto-encoder is employed to further decompose each motor unit into its constituent muscle fibres, localizing fibres with 100 micrometre accuracy for over 50 fibres simultaneously. This has the potential to revolutionize neurophysiology and the diagnosis of neuromuscular disease.
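The exact network from the talk was not shown in detail, so the following is only a minimal sketch of the unsupervised building block mentioned above, a denoising autoencoder trained to reconstruct multi-channel EMG samples from corrupted inputs; the 64-channel input and layer sizes are assumptions.

```python
# Minimal denoising autoencoder sketch of the unsupervised building block used
# for decomposing motor-unit activity; the 64-channel input size and hidden
# width are assumptions for illustration.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_inputs=64, n_hidden=32, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_inputs)

    def forward(self, x):
        corrupted = x + self.noise_std * torch.randn_like(x)  # corrupt the input
        return self.decoder(self.encoder(corrupted))          # reconstruct clean x

dae = DenoisingAutoencoder()
optimizer = torch.optim.Adam(dae.parameters(), lr=1e-3)
emg = torch.randn(256, 64)                 # toy stand-in for 64-electrode samples
for _ in range(100):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(dae(emg), emg)
    loss.backward()
    optimizer.step()
# "Stacking" repeats this step: a second autoencoder is trained on the hidden
# codes produced by dae.encoder(emg), and so on.
```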

Topic 7: Transfer learning in healthcare data

Collaborative Artificial Intelligence for Healthcare Data

Speaker: Gilles Wainrib, Owkin

Gilles Wainrib is Chief Scientific Officer and co-founder at Owkin, where he leads the data science team. He holds a PhD in applied mathematics from Ecole Polytechnique and was a former researcher at Stanford University and Ecole Normale Supérieure in Paris, working on machine learning algorithms and their applications in biology and medicine. He is the author of 30+ scientific publications in mathematics, physics, biology, medicine and computer science.


In this talk, Gilles first introduced their company, then turned to transfer learning and its application to medical data. Transfer learning can use data more effectively than previous techniques, and can work around the fact that some institutions do not want to share data. He showed us three transfer learning approaches, warm restart, multi-task, and regularization, and stressed the importance of transfer learning. Finally, he presented a new platform for medical image recognition based on deep transfer learning and collaborative AI.
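Gilles did not show code, so the snippet below only illustrates two of the three approaches he named: "warm restart" (initialize the target-site model from weights trained on a source site) and "regularization" (penalize drifting far from those weights while fine-tuning on a smaller target dataset). The model, data, and penalty weight are assumptions for illustration.

```python
# Illustrative sketch of "warm restart" plus "regularization" transfer
# learning: reuse source-trained weights, then fine-tune on a small target
# dataset while staying close to the source weights. Everything here is a toy.
import copy
import torch
import torch.nn as nn

source_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
# ... assume source_model was trained on a large source-site dataset here ...

target_model = copy.deepcopy(source_model)        # warm restart: reuse weights
source_params = [p.detach().clone() for p in source_model.parameters()]

optimizer = torch.optim.Adam(target_model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
x_target = torch.randn(32, 128)                   # small target-site dataset (toy)
y_target = torch.randint(0, 2, (32,))

for _ in range(50):
    optimizer.zero_grad()
    task_loss = loss_fn(target_model(x_target), y_target)
    # Regularization approach: penalize deviation from the source weights.
    reg = sum(((p - p0) ** 2).sum()
              for p, p0 in zip(target_model.parameters(), source_params))
    (task_loss + 1e-3 * reg).backward()
    optimizer.step()
```

The multi-task variant he mentioned would instead share one trunk between sites and give each site its own output head.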

Topic 8: Precision and personalised medicine

Clinical Relevance of Deep Learning to Facilitate the Diagnosis of Cancer Tissue Biomarkers

Speaker: Michel Vandenberghe, AstraZeneca

Michel Vandenberghe works at AstraZeneca, developing deep learning algorithms to analyse immunohistochemistry biomarkers and evaluating the potential uses of deep learning to support biomarker development and clinical decision making. Prior to that, he gained a PhD in Computer Science at University Pierre and Marie Curie and a Doctorate in Pharmacy at the University Paris Sud XI.


Michel started the presentation by discussing the state of pathology-driven clinical decision making in cancer, and three challenges facing pathology practice with the emergence of personalized healthcare:

  • Increasing number of biomarkers
  • Requirement for quantitative estimations
  • Studies demonstrate significant inter- and intra-pathologist variability.

In the past, humans could only outperform computers at pattern recognition, while falling short on objectivity, reproducibility and quantitation, but new technology has greatly improved computer recognition. Michel showed us a computational approach based on convolutional neural networks that automatically scores HER2, an immunohistochemistry biomarker that defines patient eligibility for anti-HER2 targeted therapies in breast cancer. He said their results show that convolutional neural networks substantially agree with pathologist-based diagnosis. Furthermore, they found that convolutional neural networks highlighted cases at risk of misdiagnosis, providing preliminary evidence for the clinical utility of deep learning-aided diagnosis. However, the current approach has not yet been validated in a multi-center setting.

Startup Session

SurgicalAI: Can we make Surgeries Autonomous?

Speaker: Fangde Liu, Research Associate, Imperial College London

Dr Fangde Liu is a Research Associate at Imperial College London, currently head of Imaging Informatics at the Data Science Institute. His work focuses on bringing autonomy technology to daily clinical practice, such as surgical robots and pharmacovigilance systems. He is the architect of the surgical navigation system in EDEN2020, the largest surgical robotics project in the EU, and is currently managing several medical imaging big data projects for cardiac disease quantification and neurology pharmacovigilance. He is an expert on GPU and parallel computing, and has contributed to many technologies for medical image processing and surgery planning using GPUs. SurgicalAI is a new startup providing autonomous surgery planning and patient-specific medical device design with GPU cloud.


Fangde introduced the present-day circumstances of surgery, his thoughts on how to break the barriers to technology deployment, and their work using AI to build patient-specific medical devices that make surgery safer, easier and more efficient. He believes AI has a big opportunity in surgery (precision and speed) and in the feasibility of autonomous surgery. For surgery, AI should provide tools, not solutions.

Helping Clinicians Cure Cancer Using Artificial Intelligence and Big Image Datasets?

Speaker: Václav Potesil, Optellum

Vaclav is a co-founder of Optellum, a startup formed by a team of AI, medical imaging and clinical experts who met at the University of Oxford. Optellum’s vision is to enable earlier and better cancer diagnosis and treatment by using Machine Learning to unlock new insights in huge image databases. Vaclav holds an Oxford PhD in Computer Vision (lung cancer therapy planning) in collaboration with Siemens Molecular Imaging and Mirada Medical. He developed and launched pioneering medical robotics devices as Global Product Manager at Hocoma, the global market leader in neuro-rehabilitation exoskeletons. He has worked in 10 countries and speaks 7 languages.


Václav Potesil introduced some basic background on lung cancer and the current state of data collection. He shared their experience in turning a proof of concept into an intelligent decision support system that can be deployed in the clinic. From his talk we learned that a huge amount of data is produced every day; in the past it was hard to make use of it, but deep learning gives us a new way to do so. Finally, he talked about some of the challenges they face when trying to use deep learning to help cure cancer.

Disrupting Dermatology with Deep Learning

Speaker: Viktor Kazakov, SkinScanner

Viktor is the co-founder of SkinScanner, a London-based startup specializing in using Deep Learning algorithms for image classification of skin conditions. SkinScanner’s ambition is to disrupt dermatology by making the early diagnosis of skin conditions quicker, cheaper and more accurate. Viktor holds a Master’s Degree from SciencesPo, Paris and is a member of the CFA Institute. Viktor is a full stack developer with several years of med-tech experience, having launched a number of mobile and desktop-based applications in the healthcare space.


Viktor started with a video introduction of their company. He then showed us present-day dermatology challenges and the technologies they used as solutions. They are considering acquiring more data from other institutions such as Google, and trying out other models such as ResNet or GoogLeNet. He also talked about the further steps they may take, such as NLP models, the CE Mark, legal disclaimers of liability, and regulatory approvals.

Creating Training Sets Quickly and Easily For Computer Vision Applications for the Healthcare Sector

Speaker: Natalia Simanovsky, CVEDIA

A graduate of the London School of Economics, Natalia’s professional experience includes over 10 years of writing and research for a variety of global clients including: think tanks in the US, Canada and Israel; intergovernmental organizations including the United Nations Coordination for Humanitarian Affairs; PR and advertising firms; financial services firms; and startups focusing on hi-tech. Having been invited to join CVEDIA, she is on a steep learning curve and is humbled to work alongside a team of incredibly forward-thinking, technical geniuses.


Natalia discussed the current challenges facing many researchers and scientists in managing their image datasets. She told us CVEDIA has developed a series of productivity tools designed to simplify the data collection and preparation process, and explained the ways in which CVEDIA is helping data scientists simplify data management.

Enhancing Sight with Machine Learning and Augmented Reality

Speaker: Stephen Hicks, OxSight

Luca Bertinetto, University of Oxford

Dr Stephen Hicks is a Lecturer in Neuroscience and Visual Prosthetics at the University of Oxford, and founder of OxSight Ltd, a startup developing augmented reality systems to enhance daily vision for partially sighted people. Stephen holds a PhD from the University of Sydney and was the recipient of a number of awards including the Royal Society Award for Innovation in 2013 and the Google Global Impact Challenge Award in 2015.

Luca obtained a joint MSc in Computer Engineering from the Polytechnic University of Turin and Telecom Paris Tech. He is currently in the third year of his PhD program with the Torr Vision group at the University of Oxford. The focus of his doctorate is learning representations from video when very little supervision is present, the so-called one-shot learning scenario. He is interested in applying these techniques to the problem of arbitrary object tracking, which is a key component of many AI-equipped video processing systems.



This presentation was divided into two parts; the first was given by Stephen Hicks. He explained how important it is to enhance people’s vision, and showed us how their product works. The second part was given by Luca Bertinetto. Luca presented Fully-Convolutional Siamese Networks for Object Tracking, which are faster than conventional methods because they work by measuring how similar two image patches are rather than by recognizing or classifying them. In the conversation we had afterwards, he told us his group is currently running experiments on multi-object tracking, and the results may be published within the next two months.
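The published SiamFC implementation is more elaborate, but the core idea Luca described can be sketched in a few lines: embed the exemplar patch and the larger search region with the same fully-convolutional network, then cross-correlate the two embeddings so that every location in the search region receives a similarity score. The tiny embedding network below is an assumption for illustration.

```python
# Sketch of fully-convolutional Siamese similarity for tracking: embed the
# exemplar (template) and the search image with the SAME network, then
# cross-correlate the embeddings to get a map of similarity scores.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(                     # shared weights = "Siamese"
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
)

exemplar = torch.randn(1, 3, 64, 64)       # patch around the target in frame 1
search = torch.randn(1, 3, 128, 128)       # larger region in the current frame

z = embed(exemplar)                        # (1, 32, 64, 64)
x = embed(search)                          # (1, 32, 128, 128)

# Cross-correlation: use the exemplar embedding as a convolution kernel.
score_map = F.conv2d(x, z)                 # (1, 1, 65, 65) similarity map
peak = score_map.flatten().argmax()        # peak location ~ new target position
print(score_map.shape, int(peak))
```

Because no class-specific recognition is involved, the same network tracks arbitrary objects given only the first-frame patch, which is what makes the method fast.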

Topic: Applications of deep learning in healthcare

Deep Learning for Analyzing Perception of Human Appearance in Healthcare and Beauty

Speaker: Anastasia Georgievskaya, Beauty.AI

Anastasia Georgievskaya is the co-founder and a research scientist at Youth Laboratories, a company developing tools to study aging and discover effective anti-aging interventions using advances in machine vision and artificial intelligence. She helped organize the first beauty competition judged by a robot, Beauty.AI, and developed an app called RYNKL for tracking age-related facial changes and testing the effectiveness of various treatments. Anastasia has a degree in bioengineering and bioinformatics from Moscow State University. She won numerous math and bioinformatics competitions and has volunteered at some of the most prestigious companies in aging research, like Insilico Medicine.


Anastasia started off by introducing their product: letting a robot judge people’s appearance. She showed us Beauty.AI, which was used to select, from everyone who took part in their competition, the person who would receive the highest score. When we talked with her after the presentation, she mentioned another of their apps, RYNKL, which can be used to judge whether our skin looks good. The presentation described strategies for evaluating human appearance for machine-human interaction, and revealed the risks and dangers of deep-learned biomarkers.

Deep Learning in Health – It’s Not All Diagnostics

Speaker: Nils Hammerla, Babylon Health

Nils Hammerla leads machine learning at Babylon, the UK’s leading digital healthcare service. Its purpose is to democratize healthcare by putting an accessible and affordable health service into the hands of every person on earth. To achieve this, the company is bringing together one of the largest teams of scientists, clinicians, mathematicians and engineers to focus on combining the ever-growing computing power of machines with the best human medical expertise to create a comprehensive, immediate and personalized health service, then making it universally available. Nils holds a PhD in Computer Science from Newcastle University and has published extensively on the application of machine learning to a variety of challenges in healthcare, including automated assessment in Parkinson’s disease, autism, rehabilitation and sports.


Nils presented some of the things that used to require a doctor but can now be done by deep learning, such as understanding people’s language, and talking with people to figure out their ailments. He showed us some approaches to implementing these (https://openreview.net/pdf?id=r1Aab85gg) and spoke about their future directions.

The AI Will See You Now: Will Your Doctor Be Replaced by an Algorithm?

Speaker: Daniel Nathrath, Ada Health

Daniel has lived and worked in Germany, Denmark, the UK and the USA as Founder, Managing Director and General Counsel at several internet startups. He also spent some years as a Consultant at the Boston Consulting Group. He trained as a lawyer in Germany and the USA, where he was a Fulbright Scholar, and earned his MBA from the University of Chicago.


Daniel started his presentation with a question: will your doctor be replaced by an algorithm? He showed us their app, Ada, a very interesting and useful app for popularizing medical knowledge and helping patients self-diagnose or doctors diagnose patients. They define Ada as both a doctor’s assistant and a personal assistant. Daniel thinks doctors in the future should be trained to use these new technologies. We talked with their AI engineer, who told us they use a Bayesian network to implement the classification task. Their product may be launched in China this year, which would benefit the general Chinese population.
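Ada’s actual Bayesian network is proprietary and far larger, so the toy example below only shows the flavour of the idea: combine condition priors with symptom likelihoods to rank candidate diagnoses. All conditions, symptoms and numbers are invented for illustration, and a full Bayesian network would also model dependencies between symptoms, which this naive version ignores.

```python
# Toy illustration of probabilistic symptom-based diagnosis in the spirit of a
# Bayesian network over symptoms and conditions. This is only the naive-Bayes
# special case with invented numbers, not Ada's model.
priors = {"common_cold": 0.30, "influenza": 0.10, "allergy": 0.10}
# P(symptom present | condition), invented for illustration.
likelihoods = {
    "common_cold": {"fever": 0.2, "cough": 0.7, "sneezing": 0.8},
    "influenza":   {"fever": 0.9, "cough": 0.8, "sneezing": 0.3},
    "allergy":     {"fever": 0.05, "cough": 0.3, "sneezing": 0.9},
}

def rank_conditions(observed_symptoms):
    """Return conditions ranked by posterior P(condition | symptoms)."""
    scores = {}
    for cond, prior in priors.items():
        p = prior
        for s in observed_symptoms:
            p *= likelihoods[cond].get(s, 0.01)
        scores[cond] = p
    total = sum(scores.values())
    return sorted(((c, p / total) for c, p in scores.items()),
                  key=lambda cp: cp[1], reverse=True)

print(rank_conditions({"fever", "cough"}))   # influenza ranks highest here
```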

Applications of AI and Machine Learning in Healthcare: Focus on Comorbidities

Speaker: Marzieh Nabi, Xerox PARC

Marzieh is a scientist by profession and an entrepreneur at heart. Her research lies at the intersection of systems science, AI, and machine learning and their wide range of applications, from energy to transportation, aerospace, multi-agent and autonomous systems, and more recently healthcare. She graduated with a PhD in Aeronautics and Astronautics and an M.Sc. in Mathematics from the University of Washington at the end of 2012, focusing on mathematical modelling, probabilistic analysis, distributed control and optimization, networked dynamic systems, and cyber-physical systems. She also obtained an executive MBA from Stanford’s Graduate School of Business (Ignite, Summer of 2015). Marzieh holds an AIR (Analyst in Residence) position at HealthTech Capital, an investing firm focusing on healthcare-related startups. She is also an Associate at Sand Hill Angels, helping with business analysis, technical analysis, and due diligence.


In this talk, Marzieh started with the general problem of multiple chronic conditions. She then talked about some of the interesting research questions they worked on at PARC, and showed some of the technical and business challenges she sees today. Finally, she discussed the importance of explainable machine learning and the difficulties in creating it.

Artificial Neural Networks Giving Back – Applications of Deep Learning to Mental Health Therapy Provision

Speaker: Valentin Tablan, Ieso Digital Health

Valentin is a principal scientist at Ieso, and heads their AI initiatives. He has worked on Natural Language Processing, Knowledge Representation, and Artificial Intelligence, spanning both symbolic methods, and machine learning including deep learning. Prior to joining Ieso, he was the lead scientist on the question answering service that powers Amazon’s Alexa smart assistant. Valentin has a PhD from the University of Sheffield, UK, where he also worked as a senior researcher on the popular ‘GATE’ open-source framework for text mining. He has authored more than 70 academic publications in journals and peer-reviewed conferences.


Valentin started with the state of mental health around us and introduced IAPT (Improving Access to Psychological Therapies) in the UK, a programme to improve access to psychological therapies for people with depression and anxiety disorders. Ieso Digital Health offers an online talking therapy service. They improved their outcomes and saved money by using Artificial Intelligence to implement digital triage.

Panel: What Trends and Opportunities Can be Expected for the Future of Healthcare?

Alex Matri, Digital Health Manager, Nuffield Health

Claire Novorol, Founder, Doctorpreneur

Aureli Soria-Frisch, R&D Manager, Starlab

Moderator: Rowland Manthorpe, Associate Editor, WIRED


In this panel, the speakers talked about the future of deep learning in healthcare and gave their own opinions on the question. Aureli thinks robots will not replace human doctors in the future. Alex thought the outcome of Deep Learning in healthcare depends on how well humans can use machines. Claire held a similar view to Daniel: the way doctors are trained will have to change, with more professional training on modern technologies and an effort to understand them.

Reflections

Many excellent ideas were presented at this summit. In particular, combining DL with biomarkers is a valuable idea to come out of it. There has been little progress on many brain disorders, such as autism, for years, because it is hard to find an appropriate biomarker. Seeking a breakthrough through machine learning is a worthwhile attempt, and any progress would benefit humanity as a whole. On other topics, wearable equipment is also a new area that could improve people’s well-being or help people with disabilities. Although many problems remain, more and more researchers will start applying DL to medicine, and hopefully we can reap the many benefits of deep learning in healthcare.

Reference:

https://www.re-work.co/

 


Analysts: Junyi Li and Yuka Liu | Localized by Synced Global Team: Xiang Chen

 
