
Artificial Intelligence

Are we outsmarting ourselves?

Date: March 12, 2018
Source: Weizmann Magazine Vol. 13

Explosive developments in artificial intelligence (AI) research are reaching a tipping point. Similar to how, in the 19th century, the steam engine augmented what could be achieved through muscle power alone, in the very near future it is expected that artificial intelligence will augment capabilities—such as contextual analysis and values-based decision-making—traditionally associated with the human brain.

But who will chart the course of the coming AI revolution? According to Prof. Shimon Ullman, an Israel Prize laureate and member of the Weizmann Institute’s Department of Computer Science and Applied Mathematics, it is critically important for academia—as opposed to industry—to lead the way.

“Corporate giants like Facebook, Amazon, and Google currently invest many billions of dollars annually in artificial intelligence-related R&D, hoping to cash in on AI products’ profitable potential,” says Prof. Ullman, the designated Director of the Weizmann Institute’s new Institute for Artificial Intelligence Research, now in formation. “Basic research, free of commercial interests, is essential for establishing AI’s technical limits, as well as its potential.”

Far from “settled” science—and in fact, some find the entire concept distinctly unsettling—artificial intelligence presents a complex social challenge. Stephen Hawking, the Cambridge University astrophysicist who is one of the world’s pre-eminent scientists, has warned that the creation of powerful artificial intelligence will be “either the best or the worst thing ever to happen to humanity.”

From the fear that robot-powered industries will create human unemployment, to the risk that decision-making algorithms could compromise data privacy or even human rights, artificial intelligence could conceivably affect our ability to exercise judgment and promote shared values. Academic stewardship is needed to ensure that AI progress will lead us toward utopia (imagine no more house cleaning!) rather than a dystopian world in which we humans are booted from the driver’s seat and lose control of our collective future.

“What are the human-like capabilities that can be programmed into machines?” asks Prof. Ullman, whose own scientific work bridges the gap between computerized systems and neuroscience. “Once such machines are built, how will they interact with humans? Can we program for creativity and ingenuity? And how will these technologies impact society as a whole? Weizmann Institute scientists together with experts recruited from industry will establish ‘ground rules’ needed to guide artificial intelligence responsibly into the future. This will ensure Israel’s status as a high-tech powerhouse for generations to come.”

Expanding the limits of sight

The term artificial intelligence was first coined in the 1950s to describe the programming of autonomous capabilities into machines. But such machines can only serve as partners to humankind if they can perceive the world around them and, based on their programming, use the data they gather to make autonomous decisions. The Weizmann Institute has emerged as a leader in this young field, with the work of Prof. Ullman providing a case in point.

Comparing the way in which humans and computer vision systems process visual data, Prof. Ullman discovered a sharp cut-off point between recognition and non-recognition of images that is structurally hard-wired into the human brain—something that, he says, “has no parallel in our current technologies.” He also found that, unlike the typical “bottom-up” algorithms by which computerized systems first gather low-level data in order to identify a complex object, the human brain processes simple and more complex visual data simultaneously to establish object recognition.

“By incorporating this two-way analysis into automated systems, it may be possible to narrow the performance gap between humans and the AI systems they build,” he says.

AI also promises change in the medical arena. In some medical images, such as MRI, CT, or ultrasound scans, useful information is obscured by irrelevant visual data. Computer scientist Prof. Ronen Basri is developing an algorithm that filters out this “noise” and zeroes in on objects of interest, in both two-dimensional and three-dimensional structures. In this and other ways, he is establishing the theoretical limits of what a computer vision system can distinguish, work that suggests how future AI machines might achieve the best possible visual performance.
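What “filtering out noise” means can be illustrated with the most basic of filters. The sketch below is not Prof. Basri’s algorithm, just a generic median filter in Python (the function name is our own) showing, in miniature, how structure can be separated from irrelevant visual data:

```python
import numpy as np

def median_denoise(img, k=3):
    """A generic median filter: replace each pixel with the median of
    its k-by-k neighborhood, suppressing isolated 'noise' pixels while
    preserving larger structures."""
    r = k // 2
    pad = np.pad(img, r, mode="edge")   # replicate border pixels
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(pad[y:y + k, x:x + k])
    return out
```

A single bright speck in an otherwise uniform region is outvoted by its neighbors and disappears, while edges shared by many pixels survive.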

Prof. Basri’s work is already having a practical impact on scientific research. A computer vision system based on his approach and developed by staff scientist Dr. Meirav Galun scans neurons both in culture and in live animal models, providing an automated and highly accurate accounting of how the axons of nerve cells re-grow after injury. The system, called WIS-Neuromath, can quantify the overall growth of complex assemblies of multiple neurons. Downloaded hundreds of times and contributing to the work of neuroscience teams around the world, in addition to researchers at the Institute, the WIS-Neuromath system provides data that may someday contribute to clinical strategies for nerve repair.

In the future, Prof. Basri hopes to solve other dynamic problems in computer vision, like getting computers to visually identify actions such as walking, running, and jumping, something that will make future AI systems more adept at identifying the needs of the humans they were created to serve.

Seeing the invisible

Inducing automated systems to make intelligent inferences based on limited data is the focus of another Weizmann scientist, Prof. Michal Irani. An expert in video information analysis, Prof. Irani has discovered how repeating visual patterns—hidden from the human eye but easily identified by her original algorithms—can allow computers to fill in the blanks of an incomplete picture.

“One of the central goals of AI research is to achieve unsupervised learning: to create computer systems that can learn from what they see for the first time, rather than comparing what they see to huge numbers of examples previously fed into the system,” she says. “We have found that, by exploiting the natural computational redundancy in visual data—tiny patterns that repeat consistently, both in 2D images and in video—we can give computers everything they need to perform very complex visual inference tasks, even if the system has not been trained using prior examples.”

Prof. Irani’s work forms the basis of an AI-based approach that overcomes the physical and optical limitations of today’s most advanced sensors, providing new visual capabilities. It also allows computers to do something that sounds almost magical: to “see” the invisible.

“The internal redundancy of visual data gives computers the ability to identify patterns within an image, and, based on statistical analysis, fill in missing information,” she says. “Part of this unsupervised internal-learning approach, for which we have been granted a series of patents, is now built into Adobe Photoshop, where it forms the basis of ‘Content-Aware Fill’—a tool that makes intelligent inferences about how to fill in corrupted or totally missing parts of images.”
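The internal-redundancy idea can be sketched in a few lines. The toy Python function below is not Prof. Irani’s patented method; the name and parameters are illustrative. It fills each missing pixel by searching the same image for the patch whose known pixels best match the patch surrounding the hole, then copying that patch’s center value:

```python
import numpy as np

def fill_hole(img, mask, p=5):
    """Toy internal-redundancy inpainting: for each masked (missing)
    pixel, find the p-by-p patch elsewhere in the image whose known
    pixels best match the patch around the hole, and copy its center.
    Assumes holes lie at least p//2 pixels from the image border."""
    h, w = img.shape
    r = p // 2
    out = img.copy()
    for y, x in zip(*np.where(mask)):
        tgt = img[y - r:y + r + 1, x - r:x + r + 1]
        known = ~mask[y - r:y + r + 1, x - r:x + r + 1]
        best_err, best_val = np.inf, img[y, x]
        for cy in range(r, h - r):
            for cx in range(r, w - r):
                # candidate patch must not overlap the hole itself
                if mask[cy - r:cy + r + 1, cx - r:cx + r + 1].any():
                    continue
                cand = img[cy - r:cy + r + 1, cx - r:cx + r + 1]
                err = ((cand - tgt) ** 2)[known].sum()
                if err < best_err:
                    best_err, best_val = err, cand[r, r]
        out[y, x] = best_val
    return out
```

On an image with repeating texture, some other patch in the image matches the hole’s surroundings almost exactly, so the copied center is a statistically plausible reconstruction; production systems search across scales and blend many candidates.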

Beyond what meets the eye

Fundamental findings in computer vision are applicable to other high-priority AI goals—from speech recognition, to automated translation, to the operation of networked fleets of autonomous cars. Dr. Ohad Shamir is an expert in machine learning—an approach to artificial intelligence that gives systems the ability to learn and improve from experience automatically, without being explicitly programmed. He says that the future of AI depends on scientists achieving a more complete understanding of the system architectures that underlie today’s ever-more-successful AI approaches.

Deep learning, for instance, is a type of machine learning based on neural networks—a complex, biologically inspired programming paradigm that enables a computer to learn independently from observational data. “The neural network approach has led to dramatic improvements in computer vision and other AI tasks,” he says, “but such tasks are carried out in a ‘black box’ of computation that we, as human programmers, cannot penetrate. That means that when something goes wrong, it’s hard to pinpoint why. A basic research challenge is to reverse-engineer successful machine learning, something that would give us the means to replicate the powerful AI algorithms that the computers have figured out on their own.”
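The “black box” point shows up in even the smallest neural network. The Python sketch below (illustrative only; all names are our own) trains a tiny network to compute XOR by gradient descent. Afterward the network answers correctly, yet inspecting the learned weight matrices W1 and W2 reveals nothing a human programmer can directly read off:

```python
import numpy as np

# A tiny neural network learning XOR, a toy stand-in for deep learning.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])  # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                # plain full-batch gradient descent
    h = sig(X @ W1 + b1)              # hidden activations
    p = sig(h @ W2 + b2)              # network predictions
    dp = (p - y) * p * (1 - p)        # gradient at the output layer
    dh = (dp @ W2.T) * h * (1 - h)    # gradient backpropagated to hidden
    W2 -= h.T @ dp; b2 -= dp.sum(0)
    W1 -= X.T @ dh; b1 -= dh.sum(0)
```

The trained parameters are just arrays of real numbers with no legible logic in them, which is precisely why reverse-engineering successful machine learning is posed here as a basic research challenge.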

Deep learning’s success can be traced to rapid expansion in computing power. “Today’s very large computer networks have a mathematical advantage, because they offer many possibilities for learning optimization. And as our networked AI systems get larger, they will become even easier to train.”

How does the brain perceive and generate movement?

It’s one thing to build systems capable of intelligently completing tasks within a computer-simulated environment. It’s quite another to create autonomous, intelligent machines that can move among us in the physical world. This is the research focus of AI and robotics pioneer Prof. Tamar Flash, whose work has implications for Parkinson's and other diseases involving loss of motor control.

Joining the computer science faculty soon after completing her doctorate at MIT in the mid-1980s, Prof. Flash went on to build a unique research facility on the Weizmann campus, where data drawn from the precise observation of motion reveal how the brain perceives and generates movement. She then translates the mathematical underpinnings of these processes into algorithms capable of generating movement in robots.

From identifying ‘tricks’ that enable a robotic hand to reach out, grasp, and draw, to exploring emotion perception and virtual reality, Prof. Flash’s most recent work focuses on human subjects. But she has also gone further afield, modeling the many-splendored movement of a particularly flexible biological system: the octopus.

“With so many degrees of freedom, the octopus is a useful model for the ‘soft’ robots being designed for everything from search and rescue operations, to rehabilitation clinics,” she says, adding that—flexible yet strong—some soft robots can lift a thousand times their own weight.

“In the world of autonomous machines, AI is the software, and robotics is the hardware,” she says. “By translating brain-based motion control into computer-based language, we are contributing to the ability of robots to gather information about their environment, and eventually, independently, plan their next move.”

A new recruit

Much of the world’s top AI talent, including programmers and research scientists, is concentrated in the industrial research centers that make data giants like Google and Amazon such an important part of our day-to-day lives. The Weizmann Institute is now actively recruiting such scientists and technical experts, who will work alongside the Institute’s academic faculty to define and establish the AI protocols of the future.

One recent recruit is Dr. Daniel Harari, an expert in computer vision who worked at some of Israel’s leading high-tech companies, and who, after completing his PhD at the Weizmann Institute, continued his training at MIT’s Center for Brains, Minds and Machines before returning to Weizmann as a research scientist. He was recently promoted into a role in which he will help design and advance the planned Institute for Artificial Intelligence, as the first Neustein AI Fellow.

A computer vision expert who is also fascinated by cognitive development, Dr. Harari has studied how infants first learn visual concepts, such as the identification of objects in their visual field, without external guidance. His findings may be incorporated in future AI systems designed to learn independently, based on data gathered from the surrounding environment.

Prof. Shimon Ullman is supported by the Estate of Albert Delighter, the Estate of Mary Catherine Glick, the Estate of David Sofaer, the Estate of Elizabeth Wachsman, the Estate of Sylvia Wubnig, and the European Research Council. He is the incumbent of the Ruth and Samy Cohn Professorial Chair of Computer Sciences. 

Prof. Ronen Basri is supported by The J&R Center for Scientific Research. He is the incumbent of the Elaine and Bram Goldsmith Professorial Chair of Applied Mathematics.

Prof. Michal Irani is the incumbent of the Swiss Society Professorial Chair in Computer Science and Applied Mathematics.

Prof. Tamar Flash is the incumbent of the Dr. Hymie Moross Professorial Chair. 

Dr. Daniel Harari is supported by Robin Chemers Neustein. 

Prof. Shimon Ullman

Prof. Ronen Basri

Prof. Michal Irani

Dr. Ohad Shamir

Dr. Daniel Harari