
Artificial intelligence: the future of smart

AI is reshaping our lives—and revolutionizing science

Features

Date: April 27, 2020
Source: Weizmann Magazine Vol. 17

In every branch of science, investigators are buckling under the weight of too much of a good thing: information. From particle physics to genetics, and from cell biology to environmental chemistry, new technologies are generating massive data sets, making the interpretation of experimental findings a significant challenge.  Luckily, the emerging discipline of artificial intelligence is generating mathematical “thought partners” that can sift through this mountain of data, and reveal discoveries that would almost certainly be overlooked by the human mind working alone.

What is artificial intelligence, anyway?

Artificial intelligence, or AI, is the simulation of human intelligence processes by machines. When Marvin Minsky, the founder of MIT’s AI laboratory, advised Stanley Kubrick on the film 2001: A Space Odyssey—which featured an intelligent computer, HAL 9000—artificial intelligence was still the stuff of science fiction. But bit by bit (and byte by byte), research advances have propelled AI into the mainstream. From IBM’s supercomputer Deep Blue, which faced off against human chess champion Garry Kasparov in the late 90s, to the robotic vehicles NASA landed on Mars in 2004, to today’s voice-activated “personal assistants” such as Apple’s Siri, Google Assistant, or Amazon’s Alexa, society has entered into an evolving human-machine partnership for which the terms of the contract are still being written.

 
Today’s “Big Data” revolution makes basic science related to artificial intelligence a matter of critical importance. A new $100 million flagship project at the Weizmann Institute, the Artificial Intelligence Enterprise for Scientific Discovery, will develop AI tools and ensure their integration into a range of scientific areas, while providing the massive computing power necessary to store, process, and analyze the data that will lead to the next big discoveries.

AI is a broad concept used to describe the solution, by a computer, of tasks for which human intelligence was previously considered a prerequisite. In the simplest cases, intelligent-like behavior can be achieved by simple, well-defined computer programs. But more challenging tasks require AI systems to gather complex data, reveal patterns, and make independent decisions.

Much like the human brain, successful AI systems comprise sensors, experiences, and the ability to process remembered data to make decisions. In AI, sensors are used to gather complex data, for instance visual images or, in biomedical applications, genetic and molecular data. Next, the system must be exposed to large data sets—for example, in facial recognition or cancer prediction—for which a solution is already known. Finally, AI must be powered by an algorithm—programming code that enables the computer to discover patterns and make decisions based on the sensors and past experiences. Given these components, and if the tasks for which the AI system is designed have been defined correctly, it will do something quite astonishing: it will learn.
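
To make these ingredients concrete, here is a minimal supervised-learning sketch in Python, written with the open-source scikit-learn library. The dataset and model are generic stand-ins rather than any system described in this article; the point is simply the three components above at work.

```python
# A minimal sketch of the recipe described above: measured data ("sensors"),
# examples with known answers ("experience"), and an algorithm that learns.
# The dataset and model are generic stand-ins, not any system in this article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# "Sensors": numeric measurements describing each example.
X, y = load_breast_cancer(return_X_y=True)

# "Experience": a large set of examples for which the answer is already known.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The algorithm: a model that discovers patterns linking measurements to outcomes.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Having "learned", the model can now decide about examples it has never seen.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```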

Ever-more powerful computers are expected to expand AI capabilities. New algorithmic approaches, such as “deep neural network” techniques inspired by the human brain, will join with emerging multi-sensor systems capable of accumulating data sets of unprecedented size. These developments point toward an exciting future, in which AI will push past the limits of the imagination, and realize its potential as a “thought partner” for its human creators.

AI at Weizmann

Home to some of the world’s most prominent experts in computer science and neurobiology, the Weizmann Institute is developing AI-based methods for everything from drug discovery and personalized medicine to climate modeling and environmental protection. The application of AI to fields historically considered non-computational—such as archaeology and education—is demonstrating the power of such systems to “flag” significant patterns that, because of their enormous complexity, would be overlooked by even the most brilliant human scientist.

Among the many Weizmann investigators who use AI, three of them—Prof. Amos Tanay, Prof. Ilan Koren, and Prof. Eilam Gross—demonstrate how emerging AI tools are helping the scientific community achieve world-changing discoveries.

    

Prof. Amos Tanay and Prof. Ilan Koren

AI and health

What could we learn if we had detailed health data, relating to an entire population over decades, at our fingertips? This is what Prof. Amos Tanay—who holds appointments in both the Department of Biological Regulation and the Department of Computer Science and Applied Mathematics—aims to find out.

The advent of Electronic Health Records as a replacement for traditional, handwritten medical charts has helped standardize how patient data is recorded, making it easier to share and use clinically important information. Prof. Tanay is taking this to the next level by developing software for data access and manipulation that makes it possible to “scan” population-wide data.

He is the director of a unique collaboration between the Weizmann Institute of Science and Clalit Health Services, Israel’s largest HMO. The Weizmann/Clalit project (and its Bench-to-Bedside Program, directed by Prof. Gabi Barbash, a physician and former Director General of Israel’s Ministry of Health) makes available for scientific research more than 20 years of data, comprising computerized records of lab tests, treatments, and results for over four million Israeli citizens. This data repository, based on anonymized patient records, is being analyzed using AI machine learning protocols, as well as insights from the emerging research field of data science.

The ability of AI to recognize patterns within huge data sets has helped Prof. Tanay and his colleagues identify previously unrecognized factors that play a role in human health. For example, in collaboration with Dr. Liran Shlush of the Department of Immunology, Prof. Tanay established an AI-based strategy for the early diagnosis of Acute Myeloid Leukemia (AML).  Based on the Clalit medical records, deep sequencing of the genes recurrently mutated in AML, and machine learning, the scientists identified a distinct gene mutation profile that accurately predicted which patients would live to a healthy old age, without developing the disease—a model that could potentially be used to identify pre-AML risk many years prior to disease onset.
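
The published model is considerably more sophisticated, but its machine-learning core follows a familiar pattern: describe each person by a profile of measured features and train a classifier on people whose outcomes are already known. A toy sketch using entirely synthetic data, hypothetical mutation indicators rather than the Clalit records, might look like this:

```python
# Toy sketch only: synthetic "mutation profiles", not the Clalit data or the
# published pre-AML model. Each row is a person, each column a gene, and the
# label records whether disease later developed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_people, n_genes = 1000, 20
X = rng.integers(0, 2, size=(n_people, n_genes)).astype(float)  # 1 = mutation detected

# Fabricate outcomes in which three of the genes carry most of the risk.
risk = X[:, :3].sum(axis=1) * 0.8 - 1.0
y = (rng.random(n_people) < 1.0 / (1.0 + np.exp(-risk))).astype(int)

# A classifier learns which mutation patterns separate the two groups.
clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print("cross-validated AUC on synthetic data:", round(auc, 3))
```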

In another recent AI achievement, Prof. Eran Segal—who holds appointments in both the Department of Molecular Cell Biology and the Department of Computer Science and Applied Mathematics—designed an algorithm that can predict the risk of gestational diabetes even before pregnancy. This advance—based on machine learning algorithms that revealed clinically significant patterns in the Clalit data—may one day allow doctors to prevent gestational diabetes in specific patients by prescribing lifestyle interventions for reducing high blood sugar.

If Prof. Tanay has his way, the next AI-based health discoveries may emerge from closer collaboration between scientists and physicians. The Tanay team is now putting the finishing touches on a new interface that would allow doctors—with no training in AI or machine learning—to query the Clalit database, test out their hypotheses, and provide better, more personalized care for their patients.

AI and climate research    

A new AI-based strategy, built on the work of three researchers including Prof. Ilan Koren of the Department of Earth and Planetary Sciences, uses machine learning to achieve an unprecedented understanding of how cloud formation mediates the Earth’s energy balance and water cycle, and influences climate.

Every climate model must take clouds into account, but such data is usually gathered by satellites that capture low-resolution images that miss many small clouds, and reveal only cloud systems’ most basic properties. Prof. Koren, on the other hand, is developing a completely different approach to cloud analysis that, with the help of AI, will generate a wealth of new climate data.

Prof. Koren’s strategy, called Cloud Tomography, uses medically inspired CT algorithms to enable a coordinated fleet of 10 tiny satellites, each the size of a shoebox, to gather images of clouds’ external and internal 3D structures, as well as the size and concentration of water droplets within them. A scientific space mission—called CloudCT—will target small cloud fields that are often missed by remote-sensing technologies and, it is hoped, will resolve some of the unknowns surrounding climate prediction.

After the satellites are launched into orbit, they will adopt the formation of a continuously moving and networked satellite “swarm” spread over hundreds of kilometers. The satellites will gather images from various points within cloud fields simultaneously, and transmit images to the ground, allowing scientists to derive 3D information about how such clouds influence, and respond to, changing environmental conditions.
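
The mathematics behind CloudCT is far richer than can be shown here, since real clouds scatter light in three dimensions, but the basic tomographic idea can be illustrated with a toy example: observe a hidden field only through summed projections along several directions, then recover it by solving the resulting linear system. Everything below is synthetic and purely illustrative.

```python
# Toy illustration of the tomographic idea behind CloudCT: recover a hidden 2D
# "density" field from summed projections taken along several viewing
# directions. The real reconstruction is 3D and scattering-aware; this
# least-squares sketch just shows why many simultaneous viewpoints help.
import numpy as np
from scipy.ndimage import rotate

truth = np.zeros((32, 32))
truth[10:22, 12:20] = 1.0                       # a simple block-shaped "cloud"

def project(field, angle_deg):
    """One parallel-beam view: rotate the field, then sum along columns."""
    return rotate(field, angle_deg, reshape=False, order=1).sum(axis=0)

angles = [0, 30, 60, 90, 120, 150]              # six viewing directions
measurements = np.concatenate([project(truth, a) for a in angles])

# Build the linear operator one pixel at a time, then invert by least squares.
A = np.zeros((measurements.size, truth.size))
for j in range(truth.size):
    basis = np.zeros(truth.size)
    basis[j] = 1.0
    A[:, j] = np.concatenate([project(basis.reshape(truth.shape), a) for a in angles])

recon, *_ = np.linalg.lstsq(A, measurements, rcond=None)
print("mean reconstruction error:", np.abs(recon.reshape(truth.shape) - truth).mean())
```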

With the help of machine learning, Prof. Koren and his colleagues will be able to identify very complex interactions, with a special emphasis on the smaller cloud structures that temper climate and can also be very sensitive to climate change. The new system is expected to improve the accuracy of current climate models.  

AI and the nature of the universe

Machine learning methods are designed to explore and independently analyze large data sets. And if you’re looking for large data sets, then particle physics—a discipline in which scientists examine the behavior of subatomic particles at very high energies—is a great place to start.

Weizmann Institute scientists are prominent leaders in ATLAS, a detector that is part of CERN’s Large Hadron Collider (LHC)—the world’s most powerful particle accelerator. At the LHC, beams of sub-atomic particles are accelerated to nearly the speed of light and steered by thousands of powerful magnets into head-on collisions. Over a billion such collisions occur in the ATLAS detector every second, generating vast quantities of data analyzed at 130 computing centers around the globe.

Complex data produced by particle physics experiments strains the data storage capacity of the world’s most powerful computers. AI can help by generating real-time results from detected events. AI architectures might also be trained to identify and save events that don’t match expectations—rather than rejecting them—something that might alert scientists to phenomena that hold the key to the next major breakthrough.
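
One rough way to picture the “keep the surprises” idea is an anomaly detector that scores each event by how unlike the bulk of the data it is and flags only the strangest for closer inspection. The sketch below uses synthetic numbers and an off-the-shelf method as a stand-in for whatever architecture an experiment would actually deploy.

```python
# Rough sketch of the "keep the surprises" idea: score every event by how
# unlike the bulk of the data it is and save only the strangest ones.
# The events are synthetic vectors and IsolationForest is an off-the-shelf
# stand-in, not the architecture an LHC experiment would actually deploy.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
typical = rng.normal(0.0, 1.0, size=(100_000, 8))   # bulk of ordinary events
oddballs = rng.normal(4.0, 1.0, size=(50, 8))       # a handful of unexpected ones
events = np.vstack([typical, oddballs])

detector = IsolationForest(random_state=0).fit(events)
scores = detector.score_samples(events)             # lower score = more anomalous

keep = np.argsort(scores)[:100]                     # flag the 100 strangest events
print("events flagged for offline inspection:", keep.size)
```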

Prof. Eilam Gross is a member of the Department of Particle Physics and Astrophysics who has devoted much of his scientific career to the ATLAS project. Prof. Gross was the overall team coordinator for the international group of scientists responsible for the LHC’s most celebrated accomplishment—the discovery of the Higgs boson. He is now hard at work on a new challenge: helping to design algorithms that will improve monitoring and data analysis in the coming upgrade of the ATLAS experiment. These algorithms are based on deep learning—a machine learning method based on artificial neural networks—and will enable faster and more efficient data analysis. This will make it possible to characterize rare sub-atomic events that have been neglected because of the enormous density—even by ATLAS standards!—of the data involved.

    

Prof. Eilam Gross and Prof. Yaron Lipman

A world-renowned expert on the meeting point between AI and particle physics, Prof. Gross moved the field forward through a collaboration with Prof. Yaron Lipman from the Department of Computer Science and Applied Mathematics. Together, the scientists developed a novel method using geometric deep learning to improve detector performance by “tagging” particles of interest.

In another project, Prof. Gross used a type of machine learning called convolutional neural networks to predict how much of a certain type of energy would be “deposited” in components of the ATLAS detector. This advance makes it easier to separate inconsequential background noise from significant experimental findings. He is also using machine learning protocols to identify malfunctions in the detector itself.
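
To give a flavor of what a convolutional neural network looks like in this kind of regression task, here is a minimal sketch in Python using the PyTorch library; the data and architecture are invented for illustration and are not the ATLAS model.

```python
# Illustrative only: a tiny convolutional network that maps a 2D grid of
# detector-like readings to a single predicted energy value. The synthetic
# images and the architecture are invented for this sketch; they are not the
# ATLAS data or the actual model described above.
import torch
import torch.nn as nn

class EnergyRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)                # output: one energy value

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Fake batch: 64 single-channel 32x32 "calorimeter images" whose average
# brightness plays the role of the energy to be predicted.
images = torch.rand(64, 1, 32, 32)
energy = images.mean(dim=(1, 2, 3)).unsqueeze(1)

model = EnergyRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):                                # a short training run, for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(images), energy)
    loss.backward()
    optimizer.step()
print("final training loss:", loss.item())
```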

Rapid AI progress has Prof. Gross and his colleagues dreaming of what could become possible in the near future. Rather than using machine learning to find patterns in high-density data in order to answer existing questions, tomorrow’s AI platforms may be able to ask their own questions independently, and even run the experiments.  And if tomorrow’s AI systems ask their own questions, will they experience a “Eureka moment” in their circuits when they discover the answer?  Only time will tell. 

 

Prof. Eilam Gross is supported by Nella and Leon Benoziyo Center for High Energy Physics.

Prof. Ilan Koren is supported by de Botton Center for Marine Science, Scott Eric Jordan, Dr. Scholl Foundation Center for Water and Climate Research, Sussman Family Center for the Study of Environmental Sciences, Bernard & Norton Wolf Family Foundation.

Prof. Yaron Lipman is supported by the European Research Council.

Prof. Amos Tanay is supported by Barry and Janet Lang, Ilana and Pascal Mantoux Institute for Bioinformatics, Edmond de Rothschild Foundations, Steven B. Rubenstein Research Fund for Leukemia and Other Blood Disorders, Estate of Alice Schwarz-Gardos, Wolfson Family Charitable Trust.