Analysis of Deep Learning in Healthcare – What you need to know


Over the last decades, many scientists have tried to create software that thinks and processes data as human beings do, based on the observation of behavior. The idea began with analyzing how neurons transmit electrical information to one another through the release of neurotransmitters into the space known as the synaptic cleft, and attempting to replicate this form of transmission in computers, which gave rise to the term Deep Learning.

Deep Learning, in other words, describes the steps a computer takes to discover things on its own. It is a technology inspired by the functioning of the human brain, in which artificial neural networks analyze large data sets to automatically discover underlying patterns, without human intervention.

Deep Learning is a subfield of Machine Learning, which is itself an aspect of Artificial Intelligence. Deep Learning, along with Big Data (BD), the Internet of Things (IoT), and Machine Learning, has been applied frequently in the health area. It can therefore be considered part of electronic health (e-Health), which, according to the World Health Organization, is the use of information and communication technologies for the benefit of health. The World Health Organization also states that e-Health solutions were created to make life easier for their users.

The term “deep” is derived from the many hidden layers in the structure of an Artificial Neural Network (ANN). The ANN algorithm models the functionality of a biological brain and its nerve cells: an axon (output), dendrites (inputs), a node (summation), a nucleus (activation function), and synapses (weights). The activation function in the artificial neuron plays the role of the nucleus of a biological neuron, while the input signals and their respective weights model the dendrites and synapses, respectively.
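The biological analogy above can be sketched as a single artificial neuron. This is a minimal illustration, not any particular library's implementation; the sigmoid activation and the example numbers are assumptions chosen for clarity.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs (the 'node'),
    passed through a sigmoid activation (the 'nucleus')."""
    # Dendrites and synapses: each input signal is scaled by its weight.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation function: squashes the sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-total))

# Example: three input signals and their synaptic weights.
output = artificial_neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.0)
print(round(output, 3))  # → 0.525
```

Stacking many layers of such neurons, where each layer's outputs feed the next layer's inputs, is what makes a network "deep".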

Deep Learning methods are powerful tools that complement traditional Machine Learning and allow computers to learn from data in order to build smarter applications. These approaches have already been used in many applications, especially computer vision and natural language processing. As described above, Deep Learning is a subfield of Machine Learning, so understanding the purpose of Machine Learning helps clarify the idea behind it. Machine Learning aims to allow a system to learn from past or present data and use that knowledge to make predictions or decisions about unknown future events; its general focus is the representation of input data and the generalization of learned patterns for use on future, unseen data.
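The idea of learning from past data and generalizing to unseen inputs can be shown with the simplest possible model. This is a hypothetical sketch using ordinary least-squares fitting of a line; the data values are made up for illustration.

```python
def fit_line(xs, ys):
    """Learn slope and intercept of y = a*x + b from observed data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Least-squares estimates: the 'learned pattern' in the data.
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# "Past" observations the system learns from.
xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1]
a, b = fit_line(xs, ys)
# Generalization: a prediction for an input the model has never seen.
print(a * 5 + b)
```

Deep Learning follows the same learn-then-generalize pattern, but with far more parameters and layered, non-linear models instead of a straight line.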

Perhaps the best-known Machine Learning software in health care is Watson Health (WH) from International Business Machines (IBM). Watson Health is fed with everything that has ever been written, in any language, about cancer diagnoses and their respective treatments, and it continues to ingest new data as it is published. When presented with a specific cancer patient, WH recommends the treatment most likely to cure that individual patient's cancer, considering his or her genome, history, images, and pathology, along with all known information about the treatment of that cancer; the more information WH has about the patient, the more accurate it will be. Basically, IBM's vision for AI is that it will support the physician's decision making instead of replacing it, doing what it does best: managing large amounts of data and presenting only the information relevant to the physician.

Machine Learning and Deep Learning are aspects of Artificial Intelligence, which is defined as the study of intelligent agents: devices that perceive their environment and take actions to maximize their chance of success at some goal. There are many examples of AI in our lives; Apple's Siri is one, and Amazon's Alexa is another. Natural language processing, a form of artificial intelligence, is used to translate languages in Google Translate. Approximately 30 billion dollars were invested in AI research and development in 2017 by companies such as Google and Microsoft.

AI has been around for some time. The term was coined by John McCarthy at the Dartmouth College conference in 1956; however, it took some time for the technology to fulfill the promise of AI and the hopes of computer science experts. Moore's Law, formulated in 1965 by Intel co-founder Gordon E. Moore, predicted that the number of transistors in integrated circuits would double approximately every two years.
