AI and the Promise of Human-Machine Partnerships in Research and Development

By: Jennifer Riggins, Thu May 6 2021

The opportunity of artificial intelligence isn’t in replacing human beings. AI works best as a sort of digital assistant, complementing and enhancing human potential and productivity. AI can help us automate repetitive tasks to unlock creativity, reproduce results to increase verifiable accuracy, perform advanced mathematics and information processing to accelerate time to result, and understand interconnectivity while staying compliant across massive amounts of data. AI can even be used to recognize and fill in the gaps that still exist within our own data. And of course, when applied correctly, it is a major time and cost saver.

Innovative and cross-disciplinary research and development teams are among the first to partner with algorithms to shorten the time from hypothesis and experiment to result. R&D teams across healthcare and finance are leading these human-machine partnerships. We share some of their success stories today.

Health intelligence in response to pandemics

The last year taught us a lot, including about the spread, tracking, and treatment of diseases. As Arjun Panesar writes in Machine Learning and AI for Healthcare, one of the greatest opportunities is for algorithms to manage the complexity of massive health-related data. He describes the development of what he calls “health intelligence,” which improves not only patient health but population health, and facilitates significant care-payer cost savings.

Global healthcare providers showcase the use of big data and AI to fight against both chronic and novel diseases, with it particularly playing an essential role in every aspect of the COVID-19 pandemic response, including:

  • accelerating medical research on drugs, vaccines and treatments
  • detecting and diagnosing the virus
  • predicting its evolution, including variants and next hotspots
  • slowing spread through contact tracing and surveillance

Each of these is achieved by pairing AI-driven results with human-led research and medical professionals on the ground, accelerating the response to this seemingly ever-changing pandemic.

One of the most exciting achievements has been in vaccine development, which has gone from an average of ten years to six months, thanks to a mix of global urgency and AI advancements. Machine learning systems and computational analyses have allowed researchers to quickly understand this coronavirus and its structure, to determine which of the eight different kinds of vaccines is most likely to be effective, and to predict which components will create a lasting immune response.

But as variants evolve, even months is not fast enough. Earlier this year, an engineering team at the University of Southern California announced its development of an AI framework that can create new vaccine candidates in a matter of seconds. This framework is the result of the collaboration of scientists around the world who have been compiling data about the coronavirus, among other diseases, into a bioinformatics database called the Immune Epitope Database (IEDB).

Kate Broderick, senior vice president of R&D at Inovio Pharmaceuticals, which is one of the 34 groups with a COVID-19 vaccine in human trials, told IEEE that “When it was uploaded by the Chinese authorities on January 10, our scientists immediately entered the sequence into our algorithm, and, within three hours, they had a fully designed and optimized DNA medicine vaccine.”

Of course, despite that mind-blowingly rapid design, Inovio's vaccine is not among the first six being deployed globally, which goes to show that economic, logistical, and human behavioral influences factor into innovation just as much as AI-driven speed.

This crisis-driven rapid AI innovation has also sped up research for more viable long-term HIV and influenza vaccines.

And it’s not just researchers within Big Pharma driving health intelligence. One of the greatest advances is automated machine learning tooling that empowers citizen scientists. This pandemic has highlighted how mobile technology can give everyone AI power. The COVID Symptom Study has more than 4 million volunteers reporting a plethora of symptoms daily, which led to the early discovery of loss of smell and taste as a key symptom. These truly public-private-machine partnerships will take a permanent place in the future of R&D.

The opportunity of data mining to unlock interconnectivity

While it’s not yet driving profit, AI will only continue to drive innovation in the healthcare field. Earlier this year, the BenevolentAI platform hinted at preliminary results of its partnership with biopharma company AstraZeneca to apply AI-driven drug development to chronic kidney disease and idiopathic pulmonary fibrosis. By leveraging AI, AstraZeneca isn’t looking to remove humans from the process, but rather to let its scientists create the hypotheses and then test them by applying deep learning to vast quantities of interconnected data.

Text and data mining overall allows for the automated selection and analysis of massive quantities of data to find previously hidden patterns and relationships that the human brain couldn’t begin to process that quickly. Mining is a massive opportunity for researchers to gain a more holistic understanding of disease and development.
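The idea behind literature mining can be sketched in miniature: count how often terms co-occur across documents to surface candidate relationships no single reader would spot at scale. The corpus and terms below are invented for illustration; real systems operate over millions of papers with far more sophisticated statistics.

```python
from collections import Counter
from itertools import combinations

# Toy corpus of research-abstract keywords (hypothetical examples).
abstracts = [
    "fibrosis kidney inflammation marker",
    "kidney inflammation biomarker trial",
    "fibrosis inflammation pathway target",
]

# Count how often each pair of terms appears in the same abstract --
# a minimal stand-in for literature-scale association mining.
pair_counts = Counter()
for doc in abstracts:
    terms = sorted(set(doc.split()))
    for a, b in combinations(terms, 2):
        pair_counts[(a, b)] += 1

# The most frequent pair hints at a candidate hidden relationship.
top_pair, top_count = pair_counts.most_common(1)[0]
```

In practice the "documents" are full-text papers, patents, and clinical records, and the pair counts are replaced by statistical association measures, but the principle of machine-surfaced connections feeding human hypothesis generation is the same.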

A promising application of image mining is in fetal medicine. Machine learning algorithms are driving research like the Human Placenta Project, which ran an investigation observing the oxygen flow to identical twins in utero to understand why one twin was smaller than the other.

A principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, Professor Polina Golland applied algorithmic approaches to correct for very small motions because, as she said, “Inside the uterus, well, you can’t tell the mother not to breathe. And you can’t tell the baby not to kick.”

An MRI image can consist of hundreds of 2-D cross-sections that combine to make a 3-D image, which can itself be just one of hundreds of images per scan. Golland’s team applied AI algorithms to understand the flow of oxygen and subsequent organ development over time. These rapid results were then validated by researchers who spent weeks applying the traditional method of hand-drawing image boundaries.
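The data layout described above can be sketched with arrays: 2-D cross-sections stack into a 3-D volume, and repeated volumes over time form the 4-D scan that motion-correction and segmentation algorithms work on. The array sizes here are tiny, random placeholders, not real MRI data.

```python
import numpy as np

# Hypothetical stand-in: each MRI "slice" is a small 2-D array.
rng = np.random.default_rng(0)
slices = [rng.random((4, 4)) for _ in range(6)]  # 6 cross-sections

# Stacking the slices along a new axis yields the 3-D volume.
volume = np.stack(slices, axis=0)          # shape (6, 4, 4)

# A time series of such volumes forms a 4-D scan: (t, z, y, x).
scan = np.stack([volume] * 3, axis=0)      # shape (3, 6, 4, 4)
```

Real scans have hundreds of much larger slices per volume, and the hard algorithmic work, aligning volumes despite maternal breathing and fetal movement, happens on top of this structure.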

This application of biological shape and function modeling of developing organs can be applied across all fetal medicine, driving rapid innovation in the development of statistical models and eventual treatments.

The ability for trained AI models to help human researchers better understand early triggers of diseases or to spot abnormalities on scans sooner and faster is an undeniably exciting result of AI-driven research.

AI drives alternative data and better financial decision-making

Another early adopter of AI strategy has been the banking and insurance sectors. After all, on average, the finance industry spends the highest percentage of its total revenue on IT projects.

Financial services already feature widespread use of algorithms in fraud detection automation, risk evaluation, credit scoring, voice and facial recognition, and customer support “staffed” 24/7 by chatbots. For a few years now, JPMorgan has been applying AI algorithms to tedious tasks like commercial contract reviews, cutting down the work of lawyers and loan officers from 360,000 hours a year to just a few seconds. And during the Great Recession, whole new banks backed by “robo-advisors” emerged to provide financial advice and investment management with little-to-no human intervention at all.

Financial inclusion is an enabler of eight of the 17 United Nations Sustainable Development Goals. Traditionally, banking relies on existing data to decide whom to give a loan or mortgage to, but for the 1.7 billion adults worldwide who remain unbanked, that becomes an impossible barrier. The majority don’t even have identification or proof of address. One way to overcome that is AI-backed analysis of alternative data. If you’ve never had a credit card, it’s impossible to build a credit history, but you may have had a mobile phone contract or flat rental that you’ve never missed a payment on in 12 years. If banks are going to meet financial inclusion goals, their risk measurement has to be retrained with this more inclusive alternative data.
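A minimal sketch of what alternative-data scoring might look like: a logistic score over payment-history features instead of a traditional credit file. The function, features, and weights here are entirely illustrative and uncalibrated, not any bank's actual model.

```python
import math

def alt_data_score(on_time_rent_months, on_time_mobile_months, missed_payments):
    # Hypothetical logistic score: more on-time history raises the
    # score, missed payments pull it down. Weights are invented.
    z = (0.05 * on_time_rent_months
         + 0.03 * on_time_mobile_months
         - 0.8 * missed_payments
         - 2.0)
    return 1 / (1 + math.exp(-z))

# Twelve years (144 months) of clean rent and mobile payments
# produce a strong score even with no credit-card history at all.
score = alt_data_score(144, 144, 0)
```

A production model would learn its weights from repayment outcomes rather than hand-picking them, but the shape is the same: features a thin-file applicant actually has, mapped to a probability of repayment.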

A different kind of alternative data is already widely used at hedge funds and investment firms to retrain algorithms not only on public data but on other indicators like social media influence, references in the news, or even satellite imagery. Dow Jones writes of how a hedge fund built a sentiment algorithm to determine reputations and valuations of organizations going through mergers and acquisitions. This algorithm performed better than any other method in deciding who to include in the fund’s portfolio.
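A lexicon-based scorer gives the flavor of such a sentiment algorithm, though real systems use trained language models rather than word lists. The word sets and headlines below are invented for illustration.

```python
# Illustrative sketch of lexicon-based sentiment over news mentions.
POSITIVE = {"growth", "record", "approval", "strong"}
NEGATIVE = {"lawsuit", "recall", "losses", "probe"}

def sentiment(headlines):
    # Average net count of positive vs negative words per headline.
    score = 0
    for h in headlines:
        words = h.lower().split()
        score += sum(w in POSITIVE for w in words)
        score -= sum(w in NEGATIVE for w in words)
    return score / max(len(headlines), 1)

merger_news = ["Record growth ahead of merger", "Regulator probe widens"]
avg = sentiment(merger_news)  # net +2 positive, -1 negative over 2 items
```

The fund-grade version replaces word counting with learned models and feeds the resulting signal into portfolio selection alongside fundamentals.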

There continues to be an interesting friction between these quantitative hedge funds that rely on algorithmic strategies and the fundamental hedge funds that rely on human-backed intelligence. The most successful funds incorporate both.

Artificial intelligence opens up the opportunity for better and more tailor-made financial services, cost reduction, and the development of new business models that serve broader sections of the population. Still, there’s a reasonable fear, as with mortgage applications, that without an ethical approach to ensure algorithms are not learning from historically unfair and discriminatory policies, AI will just help unfair history repeat itself. There has to be a broader concentration on AI ethics in financial services.

As AI thought leader Dr. Anthony J Rhem writes, “the process and the output of any algorithm has to be explainable.”

Alternative data can also be applied to improving data training models. Biases arise in machine learning and AI for many reasons, including intrinsic biases and lack of diversity on research teams, but one reason is a simple lack of balanced data. When bringing banking to a new population, there may not be enough existing samples to train credit risk algorithms on. Or there may not be enough examples of actually detected fraud. Synthetic data models train on the behavior patterns of real data and then generate new artificial data, offering a potentially unlimited supply of realistic samples, which in turn can train more balanced decision-making algorithms.
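The simplest possible version of this idea: fit per-feature statistics on a handful of real examples, then sample new synthetic records from those distributions to rebalance a training set. The records below are invented, and real synthetic-data tools model correlations between features rather than sampling each one independently.

```python
import random
import statistics

# Hypothetical real fraud examples (far too few to train on directly).
real_fraud = [
    {"amount": 950.0, "hour": 3},
    {"amount": 880.0, "hour": 2},
    {"amount": 1020.0, "hour": 4},
]

def fit(records, key):
    # Per-feature mean and standard deviation from the real data.
    values = [r[key] for r in records]
    return statistics.mean(values), statistics.stdev(values)

random.seed(0)
params = {k: fit(real_fraud, k) for k in ("amount", "hour")}

# Sample as many synthetic records as the training set needs.
synthetic = [
    {k: random.gauss(mu, sigma) for k, (mu, sigma) in params.items()}
    for _ in range(100)
]
```

Because no record in `synthetic` corresponds to a real customer, such data can be shared with partners without the privacy exposure the following paragraphs describe.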

Sharing data for analysis is an operational requirement for financial institutions. Since synthetic data uses no actual customer data, it also eliminates privacy risks while allowing the information to be shared with third-party partners to solve real-world problems. Organizations like Nationwide Building Society and Accenture are applying synthetic data to test out third-party machine learning-backed fintechs to help customers make better financial decisions.

And, by anonymizing customer data, it opens the potential of data pools that allow for research and development at an international scale to solve cross-border problems like money laundering — without any risk to privacy.


The potential of artificial intelligence in both healthcare and banking is nearly limitless. By adding AI as an assistant to your research and development processes, you can work faster and build a more holistic view of how seemingly disparate data sources intersect. We are reaching the point where we can solve problems that affect all of humanity.

But this cannot occur in a vacuum. When you are applying algorithms to make choices about individuals’ health or wealth, it is your responsibility to understand and be able to explain what data and technology went into that decision making process. R&D teams will need to get better about translating their work across organizations, sectors, and even borders.


Jennifer Riggins is a tech storyteller, journalist, writer, podcast host and community event organizer, helping to share the stories where culture and technology collide and to translate the impact of the tech we are building. She has been a working writer since 2003. Currently in London, Jennifer is the tech culture correspondent for The New Stack, co-organizer of the Aginext community event series, co-host of the podcast What We Talk About When We Talk About Tech, and provides branding, SEO and content consulting for high-tech scale-ups. Follow her on Twitter @jkriggins or connect on LinkedIn.