Smart Machines Driving the Fourth Industrial Revolution

Steam engines, electricity and assembly lines provided the momentum for the first two industrial revolutions over the course of a couple of centuries. The third, ushered in from the 1960s through the 1990s by emerging digital technologies - chips, mainframes, PCs and the Internet - has paved the way in mere decades for the fourth industrial revolution.

The foundations for this new era, with technology at its core, are already part of most people’s everyday lives via the Internet. In addition, cheap, powerful sensors feed massive flows of data to ever-more-powerful computers, which use artificial intelligence (AI) and machine learning to power technology that was science fiction 30 years ago.

Machine learning, a type of AI, allows computer systems to “learn” from data without additional programming. The more data such machines can digest, the smarter they get.
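The idea can be illustrated with a deliberately simplified sketch. The “model” below merely memorizes examples it has seen (real machine learning generalizes statistically rather than memorizing), but it shows the core pattern: the same system, fed more data, gets more answers right.

```python
# Toy illustration: a "learner" that improves as it digests more data.
# Here the model simply memorizes word -> transcription pairs; real
# systems learn statistical patterns that generalize to unseen input.

def train(pairs):
    return dict(pairs)  # the "model" is just the learned mapping

def transcribe(model, word):
    return model.get(word, "?")  # unknown words come out wrong

def accuracy(model, test_set):
    correct = sum(1 for w, t in test_set if transcribe(model, w) == t)
    return correct / len(test_set)

data = [("hi", "hi"), ("ok", "ok"), ("cat", "cat"), ("dog", "dog")]
test = data  # evaluate over the full vocabulary for the demo

small_model = train(data[:1])  # trained on a single example
big_model = train(data)        # trained on all four

print(accuracy(small_model, test))  # 0.25
print(accuracy(big_model, test))    # 1.0
```

The same dynamic, at vastly greater scale and with statistical generalization instead of lookup, is what lets speech-recognition systems improve as they are exposed to more language.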

For example, speech-recognition or speech-to-text programs that use machine learning become more accurate the more language they are exposed to. Microsoft announced last year that its transcription software was now more accurate than teams of human transcribers. And thanks to the terabytes of images available in health records and sophisticated machine learning techniques known as “deep learning,” computers can now identify certain eye diseases from retinal scans as accurately as doctors.

However, the current era is not just an evolution of computer systems, argues Klaus Schwab, founder of the World Economic Forum and author of The Fourth Industrial Revolution.

“Its scope is much wider,” Dr. Schwab writes. “Occurring simultaneously are waves of further breakthroughs in areas ranging from gene sequencing to nanotechnology… It is the fusion of these technologies and their interaction across the physical, digital and biological domains that make the fourth industrial revolution fundamentally different from previous revolutions.”

The potential benefits of all this are easy to imagine – cures for most ailments, ultra-efficient factories, personalized education for all. 

Catching up with science fiction

Machines already monitor themselves in today’s factories with the goal of reducing downtime to a minimum. Cutting-edge production lines use sensors, the terabytes of data those sensors collect, and AI to recognize when there is a risk of breakdown, isolate the problem and determine the best time to replace or repair a part. A human operator, or in some cases a robot, will then make the fix.

In the old days it was up to human engineers to predict the lifecycle of all of a machine’s parts, and develop a maintenance schedule based on a combination of calculation and educated guesses. Now, in theory and increasingly in practice, maintenance is done as and when needed and not tied to a schedule. The more machines learn about a production facility, the more efficient the facility’s processes become. 
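A minimal sketch of this condition-based approach, with entirely hypothetical thresholds and sensor values: instead of following a fixed calendar, the system flags a part only when its recent readings drift away from a baseline learned from history.

```python
# Minimal sketch of condition-based maintenance. The baseline, alert
# ratio and vibration values are all invented for illustration; in a
# real plant these would be learned from historical sensor data.

from statistics import mean

BASELINE = 1.0     # normal vibration level (assumed)
ALERT_RATIO = 1.5  # alert when recent average is 50% above baseline
WINDOW = 3         # number of recent readings to average

def needs_maintenance(readings):
    """Flag the part when the rolling average of the last WINDOW
    readings exceeds the alert threshold."""
    if len(readings) < WINDOW:
        return False
    return mean(readings[-WINDOW:]) > BASELINE * ALERT_RATIO

healthy = [1.0, 1.1, 0.9, 1.0, 1.05]
degrading = [1.0, 1.2, 1.6, 1.8, 2.1]  # vibration creeping upward

print(needs_maintenance(healthy))    # False
print(needs_maintenance(degrading))  # True
```

Real predictive-maintenance systems replace the fixed threshold with models trained on failure histories, but the shift is the same: maintenance triggered by the machine’s actual condition, not by a schedule.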

By 2050, this type of automation will be pervasive. It will be agile to the point of proactivity, and it will lead to unimagined efficiencies. Real-time information about the market a factory serves will be fed into the factory, allowing AI algorithms to reconfigure production and tune it to best meet demand.

Patient monitoring 24/7 

What will things look like from a patient’s point of view? Will automation, AI, ubiquitous sensors and real-time data make life easier or creepier? What about the relationships between patients and healthcare providers?

While this may all lead to a harmonious, ultra-efficient future, people and governments rightly worry about the loss of jobs and privacy, and alarmists project panic and visions of Terminator-like systems running amok. Indeed, a 2016 report from the World Economic Forum on the fourth industrial revolution focused on the need for legislation that would protect the universal values of human dignity, common good and stewardship during this time of rapid change. 

However, much depends on how attitudes and regulations adapt to this period of change. There is no doubt that health-related sensors will be everywhere and increasingly powerful. Today, a watch can call for help if you fall or tell if you are at risk of a heart attack. Tomorrow, your shirt collar may report the nature of the cold virus you have based on a sneeze.

In 2050, as Sanofi’s CEO, Olivier Brandicourt, points out, we won’t be calling the doctor. Powerful computers, fed by our personal data and data about the environment we live in, will tell the doctor whether we should be alerted. 

Medicine, like manufacturing, will use AI and data to make better predictions. If a system determines that your genetic makeup and environmental factors mean that you are at risk of hospitalization from a particularly nasty flu, the system may preemptively alert your doctor and have them call you in for some “maintenance.”
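In spirit, such a system combines individual risk factors into a score and acts when the score crosses a cutoff. The sketch below is purely illustrative: the factors, weights and threshold are invented, and real clinical risk models are statistically derived and validated.

```python
# Hypothetical sketch of preemptive risk flagging. Every factor,
# weight and the alert cutoff here is invented for illustration.

RISK_WEIGHTS = {
    "genetic_marker": 0.4,  # assumed predisposition signal
    "age_over_65": 0.3,
    "flu_exposure": 0.3,    # e.g. a local outbreak detected by sensors
}
ALERT_THRESHOLD = 0.6

def flu_risk_score(patient):
    # Sum the weights of the factors present for this patient.
    return sum(w for factor, w in RISK_WEIGHTS.items() if patient.get(factor))

def should_alert_doctor(patient):
    return flu_risk_score(patient) >= ALERT_THRESHOLD

at_risk = {"genetic_marker": True, "flu_exposure": True}
low_risk = {"age_over_65": True}

print(should_alert_doctor(at_risk))   # True  (score 0.4 + 0.3)
print(should_alert_doctor(low_risk))  # False (score 0.3)
```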

Data collection and processing in healthcare could become so advanced that personalized treatments will be tested in simulated trials. Today’s multistage drug trials could be replaced with virtual trials involving simulations of different types of people and various dosages and formulations. 
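A toy version of such a virtual trial might look like this. The dose-response formula, patient sensitivities and candidate doses are all made up for illustration; a real simulation would rest on validated pharmacological models and far richer patient profiles.

```python
# Toy "virtual trial": simulate a response for different patient
# profiles and dosages, then pick the dose with the best average
# simulated outcome. The model below is entirely invented.

def simulated_response(sensitivity, dose):
    # Benefit grows with dose but with diminishing returns.
    return sensitivity * dose - 0.1 * dose ** 2

# Each number stands in for a simulated patient profile.
patient_sensitivities = [0.5, 0.8, 1.0]
candidate_doses = [1, 2, 3, 4, 5]

def average_response(dose):
    responses = [simulated_response(s, dose) for s in patient_sensitivities]
    return sum(responses) / len(responses)

best_dose = max(candidate_doses, key=average_response)
print(best_dose)  # 4
```

The appeal of the approach is that sweeping over thousands of simulated patient types and formulations costs compute time rather than years of staged clinical testing, though simulated results would still need real-world confirmation.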

Patients may become the biggest consumers of their own data. Today, people use their smart watches and phones to maximize athletic performance or figure out healthier nutrition or sleep patterns. Tomorrow it may be possible to factor in your genetic makeup or even the genes in your microbiome, the microorganisms that live on and inside the human body. 

In addition to all of the data and computing power to help with diagnoses and treatments, AI scribes with voice recognition technology will assist doctors during patient visits, freeing them from their computers and allowing more face-to-face time with patients.  

There are pitfalls of course. Data privacy and ethics considerations have to be debated thoroughly and solutions hammered out in detail and in concert among companies, governments and citizens. And education will have to become more flexible, allowing people to take on multidisciplinary roles and to continually refresh their knowledge as needed throughout their careers.

Sophisticated technology has its upsides and downsides, but, as Dr. Atul Gawande points out in an article on technology in the examining room, “We ultimately need systems that make the right care simpler for both patients and professionals, not more complicated. And they must do so in ways that strengthen our human connections, instead of weakening them.”