Great data alone does not drive transformation
Of course, using the strict new ISO IDMP requirements as the basis for reorganizing product data offers gains in its own right. It makes compliance much easier, and gives companies new visibility and control over everything that happens to key operational data, ensuring its quality and accuracy. But once they have achieved this definitive ‘master data’ position (an agreed, single version of product truth that informs numerous use cases), they have a chance to exploit it to make their operations more nimble and cost-efficient.
MDM 2.0 is about taking companies’ investment in master data and turning it into tangible value, by automating processes that currently take an inordinate amount of time and hamper firms’ competitiveness and ability to move quickly.
The impact of this is potentially very significant. Currently, companies create regulatory submission documents, fill in forms, and generate labels, packaging and patient information more or less from scratch each time there is a new requirement. This involves calling up different systems and looking through various tables and spreadsheets to find data to copy and paste manually into the new output. It is a hugely laborious process, fraught with the risk of getting some detail wrong, using out-of-date information, or failing to conform to a market’s particular requirements. But with easy, confident access to the correct content components, organizations could populate new documents at the touch of a button, automating at least 90 per cent of the content-generation process so that all that remains is for someone to add the finishing touches and check everything over.
Reduce, re-use, recycle
Automated content creation relies on two things: good, definitive master data; and the ability to pull in and mix and match approved data components according to the given context.
If content exists primarily in monolithic form, locked inside previous documents for instance, it is of little value for future use, unless someone checks and re-enters the information each time. If the latest version of that content exists in more granular form in a central data bank, as a series of searchable and easily extractable content assets, not only is it easy to repurpose again and again, but this core content only has to be updated or amended once, in one place. Those edits can then be applied across all new use cases with a few simple clicks. Crucially, everything can be viewed and monitored in one place, too.
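As a rough illustration (the class, component IDs and wording here are hypothetical, not drawn from any particular MDM product), a granular content bank can be modeled as a store of components that documents reference by ID, so a single central edit is reflected in every future rendering:

```python
# Minimal sketch of a granular content bank: documents reference
# component IDs rather than embedding text, so one edit in the
# central store propagates to everywhere the component is reused.

class ContentBank:
    def __init__(self):
        self._components = {}  # component_id -> latest approved text

    def update(self, component_id, text):
        """Amend a component once, in one place."""
        self._components[component_id] = text

    def get(self, component_id):
        return self._components[component_id]


def render(bank, component_ids):
    """Assemble a document from the latest versions of its components."""
    return "\n".join(bank.get(cid) for cid in component_ids)


bank = ContentBank()
bank.update("indication", "For the treatment of condition X.")
bank.update("storage", "Store below 25 °C.")

leaflet = ["indication", "storage"]  # a document is just a list of references
print(render(bank, leaflet))

# A single correction is picked up by all future output:
bank.update("storage", "Store below 30 °C.")
print(render(bank, leaflet))
```

The key design choice is that documents never hold copies of the text, only references, which is what makes the "update once, apply everywhere" behaviour possible.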
This is the kind of process that happens as standard in other markets where there is a lot of live content to keep track of across sprawling operations. And, at last, proof-of-concept projects are beginning to take shape in life sciences. Here, companies are starting to create templates for common document creation, based on master data. In this kind of ‘structured authoring’ scenario, output is generated with minimal effort. Once the context has been indicated (the product, the type of content needed), the correct data assets can be automatically pulled together to form the target content. In the case of a standard application form, where no customized tweaking is required, 100 per cent of the document compilation could be automated, accurately matched to the given market and target language.
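A structured-authoring template of the kind described above can be sketched as follows; the master-data fields, template text and market keys are invented for illustration, but the mechanism, context in, assembled document out, is the one the proof-of-concept projects rely on:

```python
# Sketch of 'structured authoring': once the context is indicated
# (product, market, type of content), the matching template pulls
# the correct pre-approved data assets together automatically.
# All field names and values below are illustrative only.

MASTER_DATA = {
    ("ProductA", "EU"): {
        "name": "ProductA",
        "strength": "10 mg",
        "holder": "Example Pharma Ltd",
    },
}

TEMPLATES = {
    "application_form": (
        "Application for {name} {strength}\n"
        "Marketing authorisation holder: {holder}"
    ),
}


def generate(product, market, content_type):
    """Assemble target content from master data; no manual copy-paste."""
    record = MASTER_DATA[(product, market)]
    return TEMPLATES[content_type].format(**record)


print(generate("ProductA", "EU", "application_form"))
```

For a standard application form with no customized tweaking, this is the sense in which compilation can be fully automated: the context alone determines the output.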
Collapsing content production cycles from 50 days to 5
The payback is still being calculated through these early trials, but the expectation is that the time savings will be at least 10-fold: where new content preparation has previously taken 50 person-days, it will now take just five. These are phenomenal efficiency gains, promising to significantly accelerate companies’ speed to market while freeing up experts to focus their time on higher-value work.
Assuming the chosen content management system is able to take care of document creation and approved local translations simultaneously, there should be no need to create each local version of documents separately. Structured content templates will be able to pull in the correct, pre-verified text fragments in each language, meaning there is no need to re-translate content each time. That’s because approved translations of existing wording and text extracts already exist in the master database.
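The translation-reuse idea can be sketched in the same style; the fragment IDs, language codes and wording are hypothetical, but they show how keying approved translations by fragment and language removes the need to re-translate:

```python
# Sketch of pre-verified translation reuse: approved translations of
# each text fragment live in the master database keyed by
# (fragment_id, language), so local versions of a document are
# assembled rather than re-translated. Wording is illustrative.

TRANSLATIONS = {
    ("storage", "en"): "Store below 25 °C.",
    ("storage", "de"): "Nicht über 25 °C lagern.",
    ("storage", "fr"): "À conserver à une température ne dépassant pas 25 °C.",
}


def localized(fragment_ids, language):
    """Pull the approved text for each fragment in the target language."""
    return [TRANSLATIONS[(fid, language)] for fid in fragment_ids]


print(localized(["storage"], "de"))
```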
For the majority of life sciences organizations that still rely on very manual, decentralized processes for putting together product-related content, the transformation presented by master data management (MDM) and its next-generation manifestation, MDM 2.0, is huge. On top of the time and efficiency gains, it offers company HQ much greater confidence and oversight of the content being put out across global operations – minimizing the risk of product recalls resulting from inaccurate or incomplete information being submitted, or the wrong phrasing being used.
The vision for MDM 2.0 isn’t confined to structured authoring of content, either. It’s about boosting what companies can do with data to improve their operations and business impact.
While initial projects might focus on internal operational data about their own products and processes, there is great scope to enhance this with external intelligence — for instance, data about market conditions, or evolving regulatory requirements in different regions and countries. The more complete and rounded the data that is input into central systems, the easier it becomes to plan for and manage new requirements — and improve success rates.
Improving accuracy with AI
There is much to be excited about, particularly as artificial intelligence and machine learning enter the picture, helping systems to ‘learn’ how to produce better output, or to identify the conditions most likely to result in a new marketing submission being accepted first time.
As companies move towards automatically generated documents, machine-learning models could learn to recognize and adapt to the edits users commonly make to complete or finesse a given document output. Instead of admin staff conducting periodic reviews and restructuring the templates accordingly, an AI-enabled system would anticipate and propose improvements based on the frequent changes users have had to make. And, when building a submission, the system might suggest which documents to include, which contributors to involve in the authoring/review/approval process, and how to set up the timelines; it might even anticipate questions likely to come from the authorities, based on points raised previously for related or similar submissions. The scope is probably much bigger than we’re even able to imagine at this early stage.
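To make the edit-learning idea concrete, here is a deliberately crude sketch: a simple frequency threshold stands in for a real machine-learning model trained on edit logs, and all names and texts are invented:

```python
from collections import Counter

# Crude sketch of a system noticing the edits users repeatedly make
# to generated output and proposing a template change. A real system
# would apply ML to edit logs; a frequency threshold stands in here.

class EditMonitor:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.edits = Counter()  # (old_text, new_text) -> times seen

    def record(self, old_text, new_text):
        self.edits[(old_text, new_text)] += 1

    def suggestions(self):
        """Propose template updates for edits seen often enough."""
        return [
            f"Replace '{old}' with '{new}' in the template"
            for (old, new), n in self.edits.items()
            if n >= self.threshold
        ]


monitor = EditMonitor(threshold=3)
for _ in range(3):
    monitor.record("Store below 25 °C.", "Store below 30 °C.")

print(monitor.suggestions())
```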
For now, however, the goal must be to automate all of the routine activities that take away time users could be allocating to other, more demanding tasks. The enabler for this is the creation of a comprehensive master data model — one that also includes active relationships and dependencies between the data, in a way that can drive new efficiencies and increased impact through proactive process automation.
The vision they must work towards, and which is encapsulated by MDM 2.0, is one in which teams will simply tell a system what type of documents they need, for which product, and for what purpose (country/region, type of submission, and so on), leaving the technology to do the rest. That could be generating new documents from the master data and appropriate structured templates, or directing users to existing documents and even proposing updates, corrections, improvements based on previous use of the system or newly entered data (for example about the latest regulatory requirements).
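The interaction described in that vision can be sketched as a single entry point: the user states product, country and document type, and the system either directs them to an existing document or generates one from master data and a structured template. Every name and value below is hypothetical:

```python
# Sketch of the MDM 2.0 request flow: state what you need, and the
# system returns an existing document or generates one from master
# data plus a structured template. Data is illustrative only.

EXISTING_DOCS = {}  # (product, country, doc_type) -> document text


def request(product, country, doc_type, master_data, templates):
    key = (product, country, doc_type)
    if key in EXISTING_DOCS:
        return EXISTING_DOCS[key]  # direct the user to the existing document
    doc = templates[doc_type].format(**master_data[(product, country)])
    EXISTING_DOCS[key] = doc       # keep it for future requests
    return doc


master = {("ProductA", "DE"): {"name": "ProductA", "strength": "10 mg"}}
templates = {"label": "{name} {strength}"}

print(request("ProductA", "DE", "label", master, templates))
```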
It sounds obvious as an aspiration, but until now it has eluded life sciences companies because of the not-insignificant groundwork involved in setting it up. As the industry starts its IDMP preparation in earnest, as it must over the next year, the transformation needed to drive new agility can begin.