How machine learning could save clinical trial operations millions of dollars
As a large clinical trial service provider, WCG has considerable influence over the market paths of many drugs and medical devices. But as a collection of more than 30 formerly independent companies, WCG has struggled to get consistent data to support these services. That is where Tamr's data mastering solution comes in.
As a clinical services organization, WCG supports pharmaceutical companies and device manufacturers across all aspects of clinical trials, from human resources and IT to patient engagement and ethics review. It provides critical services to pharmaceutical giants such as Merck and Roche, as well as to thousands of small and mid-sized pharmaceutical startups and research groups seeking regulatory approval for new drugs and devices.
The only service the company does not provide is running the trials themselves. "We don't do that," said Art Morales, the company's chief technology officer and data officer.
Over the past decade, WCG has built a profitable niche in the clinical trials industry through the acquisition of 35 companies. Each company, some of them more than 50 years old, specializes in a particular aspect of the clinical trial process. These companies developed their own custom software applications to automate their various business processes, which provides a very valuable source of intellectual property.
Having different systems makes sense from the perspective of each individual business, but it poses a challenge for WCG, which wants to maintain a consistent view of operations across all of its subsidiaries.
The company initially attempted to resolve the data inconsistencies manually. A team of five to ten people worked for two years to root out typos, duplicate entries, and other data errors in the disparate systems used by the 35 subsidiaries. The cleaned, standardized data was stored in WCG's cloud data warehouse, where it could be analyzed using a variety of powerful analytics engines.
"One of the big questions we have is, how do you determine that a 'node' is the same 'node' in different organizations?" Morales explained. "In some systems there may be an address, or there may not be an address, or the address may be spelled incorrectly. Some data may simply be missing. There's really a lot of uncertainty."
Because of this uncertainty, decisions had to be made one at a time, which made manually mastering the data tedious and time-consuming. The company spent millions of dollars on the effort, yet inconsistencies remained in the data.
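To see why this kind of matching is so hard to do by hand, consider a minimal Python sketch of fuzzy entity matching. The field names, weights, and records below are invented for illustration; this shows the general technique, not WCG's or Tamr's actual logic.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized edit-based similarity between two strings (0.0 to 1.0)."""
    if not a or not b:
        return 0.0  # a missing field tells us nothing either way
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def site_match_score(rec_a: dict, rec_b: dict) -> float:
    """Blend name and address similarity, tolerating a missing address."""
    name_sim = similarity(rec_a.get("name", ""), rec_b.get("name", ""))
    addr_sim = similarity(rec_a.get("address", ""), rec_b.get("address", ""))
    # If either address is missing, fall back on the name alone.
    return 0.6 * name_sim + 0.4 * addr_sim if addr_sim else name_sim

# The same entity as it might appear in two subsidiaries' systems.
a = {"name": "Mercy Clinical Research Ctr", "address": "12 Main St, Springfield"}
b = {"name": "Mercy Clinical Research Center", "address": "12 Main Street, Springfeild"}
print(f"match score: {site_match_score(a, b):.2f}")  # high, despite the typos
```

Every weight and cutoff in a scheme like this is a judgment call, which is exactly why resolving millions of such pairs by hand is so expensive.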
Morales realized there had to be a better way. He heard about Tamr, a data mastering tool that uses machine learning to automatically identify known entities in large data sets.
Tamr is a data quality tool born eight years ago out of academic research by Mike Stonebraker, the renowned MIT computer scientist.
Stonebraker believed that machine learning was necessary to solve long-standing data quality issues that would only be exacerbated at the scale of big data.
For years, the prescribed solution to this dilemma has been a master data management (MDM) project. Instead of relying on each individual system to keep everything correct, the individual data systems hold pointers to a single known-good copy of the data: a "golden record," so to speak.
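In code terms, the MDM idea looks something like the following sketch, where each source system's row keeps only a pointer to the single agreed-upon copy. All identifiers and fields here are hypothetical.

```python
# Hypothetical golden-record store: one authoritative copy per entity.
golden_records = {
    "GR-001": {
        "site": "Mercy Clinical Research Center",
        "address": "12 Main Street, Springfield",
    },
}

# Source-system rows keep their local data but point at the golden copy.
crm_row = {"local_id": "crm-8812", "golden_id": "GR-001"}
billing_row = {"local_id": "bill-4410", "golden_id": "GR-001"}

def resolve(row: dict) -> dict:
    """Dereference a source row to the single known-good entity."""
    return golden_records[row["golden_id"]]

# Both systems now agree on what the entity actually is.
assert resolve(crm_row) == resolve(billing_row)
```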
The golden-record approach can solve the problem, or so the thinking goes. But the best-laid plans risk turning to dust once they meet reality, and that is exactly what happens with traditional MDM: relying on humans to clean and manage the data is futile. It simply does not work.
Stonebraker's insight was to use machine learning to classify the data, much as Google used machine learning to automatically categorize websites in the early days of the internet, beating out Yahoo's manually curated directory of the web.
By training machines to recognize entities in business systems, Tamr found a way to create golden records automatically. A key conclusion the team reached was that people do much better when asked to confirm a match from a limited set of options than when presented with dozens or hundreds of options at once.
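Here is a toy illustration of that idea, assuming pairwise similarity scores and a handful of human-confirmed labels (all numbers invented): the machine learns a decision boundary from a few confirmations rather than asking a person to adjudicate every pair.

```python
def fit_threshold(labeled):
    """Pick the similarity cutoff that best separates the pairs humans
    confirmed as matches from the ones they rejected.

    labeled: list of (similarity_score, is_match) tuples from human review.
    """
    best_t, best_correct = 0.5, -1
    for t in (i / 100 for i in range(101)):
        correct = sum((score >= t) == is_match for score, is_match in labeled)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# A small batch of pairs confirmed by reviewers, a few options at a time.
labels = [(0.92, True), (0.88, True), (0.81, True),
          (0.73, False), (0.55, False), (0.40, False)]
print(fit_threshold(labels))  # the learned cutoff is then applied to the unseen pairs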
WCG's Tamr trial began in May 2021. During a training period, the Tamr software observed and learned how employees handled discrepancies in the data.
A team of WCG employees worked with Tamr to review and cleanse all of the data sources feeding the data warehouse. The software identifies "clusters," two or more terms that refer to the same thing in different applications, and loads them as golden records in WCG's cloud data warehouse.
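Conceptually, a cluster is a connected component in a graph whose edges are high-confidence pairwise matches. The sketch below shows the idea with a naive union-find; the `score` function and threshold are placeholders, and a production tool would prune candidate pairs rather than compare all of them.

```python
from itertools import combinations

def build_clusters(records, score, threshold=0.85):
    """Group records whose pairwise match score clears the threshold.

    If A matches B and B matches C, all three land in one cluster,
    i.e. they are treated as the same real-world entity.
    """
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # Naive O(n^2) comparison; real systems block candidate pairs first.
    for i, j in combinations(range(len(records)), 2):
        if score(records[i], records[j]) >= threshold:
            parent[find(i)] = find(j)  # merge the two components

    clusters = {}
    for i in range(len(records)):
        clusters.setdefault(find(i), []).append(records[i])
    # Only multi-member groups need a golden record chosen for them.
    return [c for c in clusters.values() if len(c) > 1]
```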
Each data source is now run through Tamr before its data is loaded into the warehouse. The sources range in size from roughly 50,000 records to more than 1 million, with around 200 columns per entity; the challenge is not volume but complexity. Besides speeding up the data mastering process by roughly 4x, Tamr produces more standardized data, which means greater clarity for business operations.
"When you clean the data, you can now use that cleaner data to get better operational insights," Morales said. "We can match through Salesforce and our applications and know these are the right things. Before, if the data wasn't cleaned, you'd match 50 percent. Now we can match 80 percent. So there are very clear operational benefits to what we're doing."
Tamr cannot successfully match every entity into a cluster, and some edge cases still require human expertise. In those cases, the software lets the operator know it has low confidence in the match. But according to Morales, Tamr is very good at finding the obvious matches: he said accuracy was about 95% from day one.
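That behavior maps onto a common three-way routing pattern, sketched below with made-up thresholds: auto-accept the obvious matches, auto-reject the obvious non-matches, and queue only the uncertain middle for an expert.

```python
def route(pair, classifier, accept=0.95, reject=0.20):
    """Decide what to do with a candidate match based on model confidence.

    classifier: callable returning a match probability in [0, 1].
    """
    p = classifier(*pair)
    if p >= accept:
        return "merge"        # high confidence: fold into the cluster
    if p < reject:
        return "keep apart"   # clearly different entities
    return "human review"     # edge case: flag low confidence to an operator
```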
"You have to accept that with any data mastery project there will be mismatches. There will be Type I and Type II errors," he said. "It would be nice if you could trace the source of these errors from... because humans make the same mistakes."
Additionally, Tamr helps WCG better understand its data.
Morales said the company’s manual approach to data mastering cost millions of dollars in total, while Tamr’s cost was less than $1 million. Improvements in data quality are harder to quantify, but arguably more important.