This document summarizes a presentation on clinical information governance at GlaxoSmithKline (GSK). GSK is combining data modelling, master data management, enterprise service bus, data stewardship, and enterprise architecture to simplify managing clinical study information. They have established different levels of data stewardship accountability and are implementing a clinical data stewardship framework. Their goal is to transform how clinical trial data is collected, reported, archived and retrieved to make trials more efficient and enhance patient safety.
Simplifying Clinical Data Management
2. Presenters Chris Bradley Business Consulting Director chris.bradley@ipl.com +44 1225 475000 Colin Wood Enterprise Information Architect Colin.s.wood@gsk.com +44 1438 766671
3. Clinical Information Governance ... Keeping the Data Healthy. How GlaxoSmithKline is combining the disciplines of Data Modelling, Master Data Management, Enterprise Service Bus, Data Stewardship, Enterprise Architecture and IT Governance to simplify the management of clinical study information.
17. GSK Business Process – Description
B GSK Business Process – Processes that create value by transforming ideas or raw materials into revenue-generating assets and products. Business Processes are the core of the Enterprise.
B1 Develop Targets & Leads – Processes that identify molecular targets and validate their association with relevant disease processes, and find potentially therapeutic compounds with acceptable developability characteristics.
B2 Develop Drugs and Products – Processes that refine the synthesis and delivery of a therapeutic compound and demonstrate safety, efficacy, and manufacturability within regulatory limits and economic feasibility.
B3 Launch New Product – Positions new products for distribution in relevant world markets and secures legal authority to market and sell these products at appropriate levels of reimbursement. External participants in these processes are regulatory authorities and government or private payers. Internal customers are country sales and marketing groups and global manufacturing.
B4 Develop Markets and Manage Customers – Direct generation of revenue by marketing and selling products that GSK has manufactured. Customers of this process are consumers, managed care organisations, hospitals, and 3rd-party payers.
B5 Supply New Product – Scale-up and transfer of production specifications and technologies to deliver the capability and capacity to manufacture sufficient quantities of new product to meet market demands. Direct customers of this process are the manufacturing sites that will produce the new products.
B6 Manufacture and Distribute Products – Supply Chain processes that generate revenue directly by manufacturing and distributing the products that GSK sells. Customers of this process include wholesale distributors and other organisations that supply and use GSK products.
18. B2: Develop Drugs and Products – Description
B2 Develop Drugs and Products – Processes that refine the synthesis and delivery of a therapeutic compound and demonstrate safety, efficacy, and manufacturability within regulatory limits and economic feasibility.
B2.1 Manage Project Portfolio – Optimise trade-offs among Risk, Value, Resources and Timing in order to drive decisions relating to prioritisation of high-value assets, project progression, and management of the progression of opportunities from candidate selection through to launch.
B2.2 Refine Synthetic Route – Develop all necessary aspects of the large-scale manufacturing process of an active pharmaceutical ingredient.
B2.3 Develop Formulation – Develop all necessary aspects of the formulation of a drug product.
B2.4 Deliver Physical Product – Develop all aspects of the manufacture of sufficient quantities of drug products for pre-clinical and clinical evaluation.
B2.5 Perform Preclinical Evaluation – Pre-clinical and early clinical studies are performed on each candidate selected for progression to FTIM and PoC studies. These studies help to determine that the product profile is achievable and that there is therapeutic potential.
B2.6 Test Human Safety & Efficacy – Prove the clinical value, efficacy and safety of the product.
B2.7 Manage Product Lifecycle – Monitor the safety of the drug and provide further data to support the market. Evaluate the possibilities for new indications, formulations and presentations.
B2.8 NPD6 Manage Regulatory Activities – Define, communicate and manage regulatory strategy. Compile, review, submit, and maintain regulatory documentation. Influence regulatory policy by interfacing with external bodies.
20. Simplifying the Clinical Information Environment
Objective: Transform the way we collect, report, archive and retrieve clinical trials data. Double productivity by 2015.
Goals: more effective use of people's time; more efficient clinical trials; enhanced patient safety & risk management.
Current issues: difficulty finding & accessing information; lack of information re-use; multiple, complex interfaces; difficulty integrating information.
21. Current Clinical Data Stewardship Framework
Roles: Clinical Data Steward; Business Unit Stewards; Asset Stewards; Study Stewards; Team Members; CDS Governance + Support.
Levels of accountability have been established, undertaken by existing roles and based on existing SOPs and guidance wherever possible. Implementation is driven within each Business Unit by its Business Unit Steward, and communities of practice have been established. Everyone who generates, transforms, uses, stores, archives and/or discards data or documents pertaining to GSK clinical trials is a Steward of clinical data and must understand their responsibilities and act accordingly. Similar frameworks are in progress elsewhere within R&D.
24. R&D Master Data Management Roadmap (Draft). The chart represents cross-organisational mastering of data – it does not reflect the quality of information in individual solutions.
25. The SCIE Information Blueprint has been crucial to understanding the information landscape: governance process, data dictionary, application vs. data matrix, data structures (logical data model), and application information flow.
26. Building a common understanding using layered information models: models range from high-level, with a communication focus, to detailed, with an implementation focus.
27. Information Principles Data and data models are a critical business asset in GSK and will be managed as a shared asset. All data will be subject to data ownership and governance principles. A strong preference for the reuse and elaboration of existing data models should be exercised. Key data models should be communicated throughout the organisation. The GSK Data Model Repository will be the Source of Record for data models. Integrate Data Modelling with Application Life Cycle. Link models from Enterprise through to Physical.
30. Data Models linking IT and Business Data Governance (from IT focus through to business focus):
BA Training – data modelling (complete); Info Blueprints (planned).
Business Training – use of Info Blueprints (planned).
IT Project Governance – embed use of data models into architecture reviews.
Data Governance – cross-org teams actively engaged in data management.
Development Process – automation to embed models into the development process.
Data Stewardship – all data with clear accountability for definitions and data quality.
Master Data Roadmap – shared master and reference data, built into the IT Project Portfolio.
Data Quality Plans – defined for all shared master and reference data.
51. Models & SOA. This is important in an SOA world: the definition of data, and consequently of the calls to and results from services, is vital. Straight-through processing can exacerbate the issue: what does the data mean? Which definition of X applies (e.g. "cost of goods")? We need to utilise the definitions in the logical model and the ERP models.
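The point above can be sketched in code: a service payload is only unambiguous if each field is bound to one canonical definition from the logical model. This is a minimal illustration, not GSK's actual approach; the glossary text, field names and class are all invented for the example.

```python
# Sketch: tying a service message field to a single canonical definition,
# so both sides of an SOA call agree what "cost of goods" means.
# The glossary entry and field names are hypothetical examples.
from dataclasses import dataclass

# A canonical glossary, as a logical data model might publish it
GLOSSARY = {
    "cost_of_goods": "Direct material + direct labour, excluding overheads",
}

@dataclass
class ProductCostMessage:
    """Service payload; financial fields must have a canonical definition."""
    product_id: str
    cost_of_goods: float  # meaning fixed by GLOSSARY["cost_of_goods"]

    def __post_init__(self) -> None:
        # Reject any governed field that lacks an agreed definition
        for field in ("cost_of_goods",):
            if field not in GLOSSARY:
                raise ValueError(f"No canonical definition for {field!r}")

msg = ProductCostMessage(product_id="P-001", cost_of_goods=12.5)
```

In a real environment the glossary lookup would point back into the model repository rather than a local dictionary, but the principle is the same: the definition travels with the interface, not with each consuming system.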
52. XML messages are at the core. [Architecture diagram: a Message Broker with message queues connects a Data Warehouse, Claims Management, Billing and Document Management systems, running on various platforms (OS/390, Unix, Windows) and databases (RedBrick, DB2, SQL Server, Oracle), via adapters.]
54. Generally XML messages. Example message – Book details: Book ISBN code, Amazon URL, Book name, Category, Publication date, Publisher, Recommended price, Book Authorship.
55. XML messages need models! [Diagram: System A (DBMS A) and System B (DBMS B) exchange XML messages across the Enterprise Service Bus.]
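The "book details" message from the previous slide can illustrate why the messages need a model behind them: the model defines which fields a valid message may carry. The sketch below is illustrative only; the element names follow the slide's example and are not a real schema.

```python
# Sketch: building an XML message whose fields are constrained by a
# model-derived attribute list. Element names mirror the slide's
# "Book details" example and are assumptions, not a real GSK schema.
import xml.etree.ElementTree as ET

# Attributes a logical data model might define for the Book entity
BOOK_MODEL = ["isbn", "url", "name", "category", "publication_date",
              "publisher", "recommended_price", "authorship"]

def build_book_message(book: dict) -> bytes:
    """Serialise a book as XML, rejecting fields the model does not define."""
    unknown = set(book) - set(BOOK_MODEL)
    if unknown:
        raise ValueError(f"Fields not in the data model: {unknown}")
    root = ET.Element("BookDetails")
    for field in BOOK_MODEL:  # model order, so messages are consistent
        if field in book:
            ET.SubElement(root, field).text = str(book[field])
    return ET.tostring(root)

msg = build_book_message({"isbn": "978-0-13-468599-1", "name": "Refactoring"})
```

Without the model acting as gatekeeper, each system on the bus would invent its own element names and the receiving system would be left guessing.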
62. Establish a Corporate Repository. Share models across the Enterprise: Enterprise, Conceptual, Industry Standard and Project models. Move from data "mine"-ing to data "ours"-ing. Extend the data architecture to incorporate Data Governance. Provide training and mentoring. Bake data considerations into the SDLC – data models are NOT just for new developments.
63. Establish a Community of Interest.
Purpose: share best practices inside the company; exchange ideas across projects; represent the company on vendor user forums.
Charter: ALL internal data management users, plus invited consultants & contractors.
Subjects: standards & guidelines; training & education; "best practices".
Part of OUR job IS marketing!
64. Measure Data Management Maturity. The scale runs from Level 1 – Initial (operating in "fire fighting" mode, undesirable) through Level 2 – Repeatable and Level 3 – Defined (obtaining limited benefits) and Level 4 – Managed (delivering broad quality & re-use) to Level 5 – Optimised (ideal, obtaining optimal value from data). As-Is and To-Be positions are plotted for each area, with the aspiration guided by the Data Principles.
65. Beware: this is not "fire & forget". Data Governance visibility at your company typically follows the Gartner hype cycle – Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, Plateau of Productivity. From the current position, avoid the abyss via investment in "sustain" activities.
66. Conclusion
Understand roles and motivations and work within the organization: federated governance model; avoid silo mentality; communicate.
Obtain buy-in by starting small & documenting success: make it easy to get hold of; market, market, market!
Follow up with a robust architecture: common repository; models appropriate for the audience; defined stewardship; unique definitions.
"Repurpose" data for various audiences – via the web, Excel, DDL, XML, etc. It's the data that's important, not the format.
67. Data Governance 2.0 – Conclusion. Data Governance and Modelling need to get out of the "old school": use new technologies to reach users; approach users in their language; don't forget the fundamentals.
68. Chris Bradley, Business Consulting Director, Chris.Bradley@ipl.com, +44 7501 224230, Intelligent Business. My blog: Information Management, Life & Petrol – http://infomanagementlifeandpetrol.blogspot.com
Colin Wood, Enterprise Information Architect, Colin.s.wood@gsk.com, +44 1438 766671
Editor's notes
The theme of this presentation is how GSK is combining data modelling with master data management, an Enterprise Service Bus, Business Data Stewardship and Enterprise Architecture to build a simplified environment for the management of clinical studies and related data.
Intro slide to GSK. Turnover of £28.4bn in 2009; R&D spend of £4.1bn.
Intent of this slide is to give some context of what R&D do and where clinical studies fit. Might redraw to make this simpler.
This slide gives a more detailed breakdown of where clinical studies fit and gives a hint of why pharma R&D is complex. Clinical studies are created to test 3 main parameters – is the product safe in humans, is it effective (i.e. does it cure or alleviate the medical condition it was intended for) and does it provide value? All pharmaceutical products go through a number of clinical studies before the product can be launched on the market. This is an ongoing process and we continually run clinical studies on our products to support ongoing licensing. GSK currently runs clinical studies across more than 140 countries. This is a complex activity involving multiple organisations who recruit patients and run the clinical studies on behalf of GSK. There is also a significant amount of complexity in collecting and reporting on the study results themselves. However, the clinical study process sits within an overall pharmaceutical enterprise with a diverse and complex set of processes. To a large extent, business solutions supporting these different parts of the business have grown up independently because of their highly specialised nature. Each area also has multiple IT solutions, each supporting some part of what is a very complex scientific business. For example: Project Planning – plans the development of new products, including budgeting and planning studies; these are very large-scale projects that run over many years. Deliver Physical Product – embedded within here is a supply chain that is used first to deliver materials for internal testing, and then out to healthcare organisations conducting clinical studies. An added complexity is the fact that the products are blinded – for example, we may ship both placebos and active products that appear identical. Manage Safety of the Product – a large part of clinical studies is to establish that the product is safe for human use.
There is continual monitoring of the safety of the product. Manage Regulatory Activities – there are many regulatory activities related to the conduct and disclosure of clinical studies. These can be very demanding, and compliance is critical to the ongoing operation of the company. Chemistry – there are also links back to earlier discovery operations (not shown here). For example, the identification of products used within GSK clinical studies is derived from early chemistry activities focused on the identification of a molecule. The key point is that this is a complex business. Again, may re-draw to make this simpler.
The complexity can be seen within the application landscape. GSK R&D has around 3,000 business applications. Whilst we're working hard to reduce this, it does represent the diverse and complex nature of a pharmaceutical organisation. The figures here aren't untypical for any pharmaceutical organisation and represent the diversity and scientific complexity of pharmaceutical operations. Over the years we've acquired lots of specialised applications focused on specific business needs – these are highly optimised for the specific scientific function, but don't integrate well to support the overall operation of the enterprise. The diagram on the left illustrates the complex point-to-point world in which we currently operate (and remember there are 3,000 of these applications). In reality there are many examples where we have no direct interfaces between systems, and it's common to see some elements of reference or master data manually re-entered from one system into another. This is something our IT organisation is tackling on a strategic level as part of our Re-Wire R&D programme. We are implementing an infrastructure that supports a full SOA environment across the organisation and are putting a lot of training focus both on the technology and on the definition of business services. Some of the technologies supporting this are an Enterprise Service Bus (IBM) supporting reliable messaging and web services, and a Master Data Management solution (Siperian). We are also investing in a claims-based security environment. Put all these together and we will have an environment that should support plug-and-play type integration. We've even put together an integration centre of excellence to support delivery, so technically all of the pieces would appear to be there. Those of you with an interest in data governance will know that this isn't sufficient in and of itself, of course.
If we are going to achieve data consistency and information availability we are also going to have to invest a lot of time and effort in understanding and governing the data.
So why is this particularly relevant to GSK's clinical information environment? GSK is currently making a major investment in its clinical information environment with the aim of doubling productivity by 2015. You can see that some of the issues noted on the previous slide are very prevalent here: we have a complex environment that makes it difficult to find and re-use information across systems. The implementation is rationalising, simplifying and in many cases replacing the current IT solutions we have in place to support clinical information. The solutions are all being progressed as part of GSK R&D's Re-Wire strategy – so the use of services, ESB, MDM etc. will all be part of the mix.
What about master data management? This is very much part of our strategy for Re-Wire and moves us to a situation where we have one version of the truth on the ESB. The diagram shows the status of the major subject areas of interest within R&D at present. Within this model we're using the following definitions. Data Custodian – we have agreement at a senior level for a specific organisation to manage the data on behalf of the rest of the organisation; this is critical to ensure that the relevant data stewardship roles are in place. Data Stewards – we have individuals assigned to the management of the data; this can include a variety of roles. Master Source Identified – we can agree on a single source for the master data. Note that there are a lot of instances where we have only partial data. This can be for a variety of reasons, including that not all of the data entities we'd consider in scope are available, or that the master data only represents a subset of the full organisation's master data for this entity. Data Quality Plan Defined – means that we have a formally defined document that describes how the master data will be sourced and managed; the document also defines expected quality characteristics for the data. As you can see, there's plenty more to do. A look down the subject areas shows a number of things that are unique to our industry – Clinical Study, Compound (which represents a molecule), and Medical Condition (which represents a disease or indication), for example. You can also see some things that are familiar in other industries. Products, for example; we also have our own version of customer data – GSK makes payments to healthcare professionals to conduct clinical studies on its behalf, and there is increasing legislation requiring us to report these payments, particularly in the US. As you can see, we're not yet in a position where we have true global master data in place.
The last 2 items span the entire GSK organisation – spanning multiple solutions including global SAP and several local Siebel implementations.
Not surprisingly, good old data modelling is a key component of our solutions. Perhaps no surprise there – but this is starting from an environment where there was virtually no data modelling within our IT organisation. Any data modelling that was completed was seen as supporting the physical design of the database only. The presence of multiple vendor solutions within our environment made this all the more difficult, with the frequent assumption that we didn't need a data model because the vendor was supplying one. I won't go into detail here, but the main points are: we are building a full logical data model for this area, using ER/Studio as our data modelling environment. The model is heavily influenced by external industry-standard models such as BRIDG. We are constructing CRUD matrices that allow us to map the data to systems and business processes and to identify master sources. We are building definitions into the models as a data dictionary. Last but not least, we plan to use the models to automatically generate the message definitions that flow between systems.
What's different about our approach to data modelling compared to traditional approaches? Firstly, we are attempting to approach the modelling within the context of overall R&D and Enterprise data models – the aim being to ensure that we really do understand the common data that needs to be shared and to flow across the organisation. We already have an R&D-level conceptual data model that describes all of the data entities of relevance to R&D. We are now seeking to embed the use of these models into our logical data modelling efforts. This will allow us to link common concepts into our data models. We are also planning further automation for ER/Studio, so that we can automate the import of common entity definitions from a master data model. For example, any data model that references clinical study would be able to import a master definition. So why is this such an issue for our organisation? Think about the highly specialised nature of the pharma R&D organisation and the fragmented solutions we currently have in place. We really are seeking to integrate across a set of organisations who each have their own perspective and terminology relating to master data. Take one concept that is used right across pharmaceutical R&D – compound. Everyone thinks they know what they mean by this, but when you span the organisation you find all sorts of slightly different interpretations and meanings. Many different terms are also used to describe the same thing – Active Ingredient, API, and Investigational New Drug could all mean the same thing. I recently found more than 30 terms that could be used to describe this same entity. What we're seeing are specialised functions representing the role of the data entity, rather than a common definition. As we're generally dealing with scientific communities, all with PhDs and their own specialised terminology, it's very hard to introduce generic terminology like party or material.
A second area that we're giving a lot of focus to is the generation of XML schemas for messaging directly from the models. We don't have this fully worked out yet, but it is our aim to be able to step straight from the logical data model to a defined XML message schema. By linking from an enterprise level through to a physical level, we anticipate much higher consistency for interfaces and, hopefully, higher quality of information moving across the organisation.
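The idea of stepping from a logical model to a message schema can be sketched mechanically: each entity and its typed attributes map onto an XSD complexType. This is a minimal illustration of the concept only; the entity name, attribute names and the mapping itself are hypothetical, not GSK's actual generator.

```python
# Sketch: emitting a minimal XSD fragment from a logical-model entity
# definition. Entity and attribute names below are invented examples.
import xml.etree.ElementTree as ET

XS = "http://www.w3.org/2001/XMLSchema"

def entity_to_xsd(entity: str, attributes: dict) -> bytes:
    """Render one entity (name -> XSD type mapping) as a complexType."""
    schema = ET.Element(f"{{{XS}}}schema")
    ctype = ET.SubElement(schema, f"{{{XS}}}complexType", name=entity)
    seq = ET.SubElement(ctype, f"{{{XS}}}sequence")
    for attr, xsd_type in attributes.items():
        # One xs:element per logical-model attribute
        ET.SubElement(seq, f"{{{XS}}}element", name=attr, type=xsd_type)
    return ET.tostring(schema)

xsd = entity_to_xsd("ClinicalStudy",
                    {"studyId": "xs:string", "startDate": "xs:date"})
```

A production generator would of course handle cardinality, relationships and namespaces properly, but even this toy version shows why generated schemas stay consistent with the model in a way hand-written ones rarely do.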
As part of our strategy we've defined a set of information principles that are being used to direct our approach to Information Architecture and its associated governance. You'll note that there's a strong emphasis on the use and re-use of data models. Each principle has an assigned set of actions. For example, "Key data models should be communicated throughout the organisation" tells us that we need to set up a communication programme to ensure that the role of the R&D-level model is understood and, where appropriate, used within the definition of Information Architectures.
The diagram shows some of the steps we are taking to link business and IT data governance. A core part is the fact that we have data models – each linked to an R&D-level model – as a key component of the environment. The models are managed within an ER/Studio repository. We also see MDM – managing both master data and reference data, such as lists of values – as a key component of our strategy. Shown on the diagram are a set of activities that we are planning or implementing to enable our vision. These are colour coded: items in green are already underway; items in yellow are planned or progressing in a limited part of the organisation; items in grey are under discussion, but not yet addressed or possibly even agreed to. The items on the top right-hand side show that we still have a lot of challenges in bringing together a common view of data governance across the organisation. A few things to point out. We are currently training all of our Business Analysts and Tech Leads in the use of ER/Studio; around 80 people have now been trained within R&D. As we develop the R&D models and Information Blueprints more fully, we'll also conduct focused communication and training on the use of these models. We are also beginning to plug the use of the data models into our IT Project Governance processes – over time it will become an expectation that IT projects demonstrate that they understand where their data fits and that they are aligned with any existing domain-wide Information Blueprints. We of course do this using data models. Further activities are being planned, including linking into the software development process; I'll say a little bit about this on the next slide. On the business side things are more patchy, but this represents the diversity of our business. We have some strong pockets of success and we are seeking to broaden this across the organisation.
Once again we are looking to make full use of the data models to establish common definitions, ownership and responsible data stewards. One other gap (not shown) is the implementation of a data quality toolset that supports business data profiling. We do have a data quality service in house, but as yet we've not been able to fully explore business data profiling. I personally believe that data profiling tools will be a key enabler of business data stewardship.
Key point here is to make the message appropriate to the audience in question.