1. Learning for Biomedical Information Extraction with ILP
   Margherita Berardi, Vincenzo Giuliano, Donato Malerba
2. Outline of the talk
   - IE for Biomedicine
   - Looking around
   - IE problem formulation
     - which representation model on data? which features?
     - which framework for reasoning?
   - Mutual recursion in IE
   - Text processing & domain knowledge
   - Application to studies on the mitochondrial genome
   - Conclusions & future work
3. What is "Information Extraction"
   Filling slots in a database from sub-segments of text. As a task:

   October 14, 2002, 4:00 a.m. PT
   For years, Microsoft Corporation CEO Bill Gates railed against the economic philosophy of open-source software with Orwellian fervor, denouncing its communal licensing as a "cancer" that stifled technological innovation. Today, Microsoft claims to "love" the open-source concept, by which software code is made public to encourage improvement and development by outside programmers. Gates himself says Microsoft will gladly disclose its crown jewels -- the coveted code behind the Windows operating system -- to select customers. "We can be open source. We love the concept of shared source," said Bill Veghte, a Microsoft VP. "That's a super-important shift for us in terms of code access." Richard Stallman, founder of the Free Software Foundation, countered saying...

   Slots to fill: NAME | TITLE | ORGANIZATION
4. What is "Information Extraction" (cont.)
   The same passage, with the entities extracted into the template:

   NAME             | TITLE   | ORGANIZATION
   Bill Gates       | CEO     | Microsoft
   Bill Veghte      | VP      | Microsoft
   Richard Stallman | founder | Free Soft..
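The slot-filling task above can be caricatured with a few hand-written patterns. The sketch below is purely illustrative: the `extract_slots` helper and its regexes are hypothetical, not part of any system in the talk, and real IE learns such extractors rather than hard-coding them.

```python
import re

# Toy slot filling: extract (NAME, TITLE, ORGANIZATION) triples from two
# surface patterns seen in the example passage. Both patterns are ad hoc.
TITLES = r"(?:CEO|VP|founder)"

def extract_slots(text):
    triples = []
    # Pattern 1: "<Org> [Corporation] <TITLE> <First> <Last>"
    for m in re.finditer(rf"([A-Z]\w+(?: Corporation)?) ({TITLES}) ([A-Z]\w+ [A-Z]\w+)", text):
        org, title, name = m.groups()
        triples.append((name, title, org.replace(" Corporation", "")))
    # Pattern 2: "<First> <Last>, a <Org> <TITLE>"
    for m in re.finditer(rf"([A-Z]\w+ [A-Z]\w+)\s*, a ([A-Z]\w+) ({TITLES})", text):
        name, org, title = m.groups()
        triples.append((name, title, org))
    return triples

text = ("For years, Microsoft Corporation CEO Bill Gates railed against "
        "open-source software. \"We love shared source,\" said Bill Veghte, "
        "a Microsoft VP.")
print(extract_slots(text))
# → [('Bill Gates', 'CEO', 'Microsoft'), ('Bill Veghte', 'VP', 'Microsoft')]
```

Such hand-built patterns break as soon as the phrasing varies, which is exactly the brittleness the learning approaches in the following slides address.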
5. IE from Biomedical Texts: Motivation
   - Complexity of biological systems:
     - too many specialized biological tasks
     - several entities interacting in a single phenomenon
     - many conditions to verify simultaneously
   - Complexity of biomedical languages:
     - several nomenclatures, dictionaries, and lexica, which tend to become obsolete quickly
   - Genome decoding → an increasing amount of published literature. Too much to read!
6. IE History
   - Message Understanding Conferences (MUC): DARPA ['87-'95], TIPSTER ['92-'96]
   - Most early work dominated by hand-built models
     - e.g. SRI's FASTUS, hand-built FSMs
     - by the 1990s, some machine learning: Lehnert, Cardie, Grishman, and then HMMs: Elkan [Leek '97], BBN [Bikel et al. '98]
     - wrapper induction: initially hand-built, then ML [Soderland '96], [Kushmerick '97], ...
   - Most learning attempts based on statistical approaches
     - learning of production rules constrained by probability measures (e.g., HMMs, probabilistic context-free grammars)
   - Some recent logic-based approaches
     - RAPIER (Califf '98)
     - SRV (Freitag '98)
     - INTHELEX (Ferilli et al. '01)
     - FOIL-based (Aitken '02)
     - Aleph-based (Goadrich et al. '04)
7. Learning Language in Biomedicine
   - BioCreAtIvE: Critical Assessment for Information Extraction in Biology (http://biocreative.sourceforge.net/)
   - BioNLP, natural language processing of biology text (http://www.bionlp.org)
   - ACL/COLING workshops on Natural Language Processing in Biomedicine
   - SIGIR workshops on Text Analysis for Bioinformatics
   - Special Interest Group in Text Mining since ISMB '03 (Intelligent Systems for Molecular Biology): BioLINK (Biology Literature, Information and Knowledge)
   - PSB (Pacific Symposium on Biocomputing) tracks
   - Genomics tracks in TREC (Text REtrieval Conference)
   - PASCAL challenges on information extraction (http://nlp.shef.ac.uk/pascal/)
   - Workshops at IJCAI, ECAI, ECML/PKDD, ICML (Learning Language in Logic since '99; challenge task on Extracting Relations from Biomedical Texts)
8. Is there "Logic" in language learning?
   - Limitations of IE systems in general:
     - portability (domain-dependent, task-dependent)
     - scalability (work well only on "relevant" data)
   - Statistics-based approaches: wide coverage and scalability, but no semantics and no domain knowledge
   - Logic-based approaches:
     - natural encoding of natural-language statements and queries in first-order logic,
     - human-comprehensible models,
     - domain knowledge,
     - refinement of models
   - [R. J. Mooney, Learning for Semantic Interpretation: Scaling Up Without Dumbing Down, ICML Workshop on Learning Language in Logic, 1999]
9. IE problem formulation for HmtDB
   - HmtDB: a resource of variability data associated with clinical phenotypes concerning the human mitochondrial genome (http://www.hmdb.uniba.it/)
10. Textual Entity Extraction
   - Example: "Cytoplasts from two unrelated patients with MELAS (mitochondrial myopathy, encephalopathy, lactic acidosis, and strokelike episodes) harboring an A→G transition at nucleotide position 3243 in the tRNALeu(UUR) gene of the mitochondrial genome were fused with human cells lacking endogenous mitochondrial DNA (mtDNA)"
   - Entities of interest:
     - pathology associated with the mutation under study,
     - substitution that causes the mutation,
     - type of the mutation,
     - position in the DNA where the mutation occurs,
     - gene correlated with the mutation.
   - By modelling the sentence structure: substitution(X) ← follows(Y,X), type(Y)
   - Extractors cannot be learned independently!
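For contrast, a naive hand-coded extractor for these slots on a simplified form of the example sentence might look like the sketch below. The regexes and slot names are illustrative assumptions; the talk's point is precisely that such extractors should be learned jointly (note how the `type` slot is only filled relative to the substitution mention), not hand-written independently.

```python
import re

# Simplified version of the slide's example sentence (ASCII "A->G").
sentence = ("Cytoplasts from two unrelated patients with MELAS harboring an "
            "A->G transition at nucleotide position 3243 in the tRNALeu(UUR) "
            "gene of the mitochondrial genome were fused with human cells.")

slots = {}
m = re.search(r"([ACGT])->([ACGT]) (transition|transversion)", sentence)
if m:
    slots["substitution"] = f"{m.group(1)}->{m.group(2)}"
    slots["type"] = m.group(3)        # 'type' depends on the substitution mention
m = re.search(r"position (\d+)", sentence)
if m:
    slots["position"] = m.group(1)
m = re.search(r"(\S+) gene", sentence)
if m:
    slots["locus"] = m.group(1)
slots["pathology"] = "MELAS" if "MELAS" in sentence else None
print(slots)
# → {'substitution': 'A->G', 'type': 'transition', 'position': '3243',
#    'locus': 'tRNALeu(UUR)', 'pathology': 'MELAS'}
```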
11. Textual Entity Extraction
   - Each entity is characterized by some slots defining a template
   - The task is to learn rules to fill slots (template filling)
   - Relations in the data may allow:
     - intra-template dependencies to be learned
     - context-sensitive application of "extractors"
   (Figure: template slots such as Mutation, Sampled population, DNA sample tissue, DNA screening method, ... mapped onto the document sections Title, Abstract, Introduction, Methods)
12. The learning task
   - Classification
     - Each class (slot) is a concept (target predicate); each model (template filler) induced for the class is a logical theory explaining the concept (a set of predicate definitions)
     - Predefined models of classification should be provided
     - Importance of domain knowledge and first-order representations
     - Usefulness of mutual recursion (concept dependencies)
   - ILP = Inductive Learning ∩ Logic Programming
     - From IL: inductive reasoning from observations and background knowledge
     - From LP: first-order logic as representation formalism
13. ATRE (Apprendimento di Teorie Ricorsive da Esempi, i.e. learning recursive theories from examples)
    http://www.di.uniba.it/~malerba/software/atre/
   Given:
   - a set of concepts C1, C2, ..., Cr
   - a set of objects O described in a language L_O
   - background knowledge BK described in a language L_BK
   - a language of hypotheses L_H that defines the space of hypotheses S_H
   - a user's preference criterion PC
   Find: a (possibly recursive) logical theory T for the concepts C1, C2, ..., Cr, such that T is complete and consistent with respect to the set of observations and satisfies the preference criterion PC.
14. ATRE: main characteristics
   - Learning problem: induce recursive theories from examples
   - ILP setting: learning from interpretations
   - Observation language: ground multiple-head clauses
   - Hypothesis language: non-ground definite clauses
   - Constraints: linkedness + range-restrictedness
   - Generalization model: generalized implication
   - Search strategy for a recursive theory: separate-and-parallel-conquer
   - Continuous and discrete attributes and relations
   - Background knowledge: intensionally defined
15. ... the learning strategy ...
   Example: parallel search for the predicates even and odd
   Seeds: even(0), odd(1)
   The simplest consistent clauses are found first, independently of the predicates to be learned.
16. ... the learning strategy ...
   Example: parallel search for the predicates even and odd
   Seeds: even(2), odd(1)
   Explored clauses:
   - even(X) ← succ(Y,X)
   - even(X) ← succ(X,Y)
   - odd(X) ← succ(Y,X)
   - odd(X) ← succ(X,Y)
   - even(X) ← succ(Y,X), succ(Z,Y)
   - even(X) ← succ(X,Y), succ(Y,Z)
   - odd(X) ← succ(Y,X), even(Y)
   - odd(X) ← succ(Y,X), zero(Y)
   A predicate dependency is discovered: odd(X) ← succ(Y,X), even(Y)
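One consistent recursive theory for these predicates -- even(0); even(X) ← succ(Y,X), odd(Y); odd(X) ← succ(Y,X), even(Y) -- can be rendered directly as mutually recursive predicates. A minimal Python sketch (succ(Y,X) read as X = Y+1):

```python
# Mutually recursive rendering of the learned theory over natural numbers.

def even(x):
    if x == 0:
        return True          # even(0)
    return odd(x - 1)        # even(X) <- succ(Y,X), odd(Y)

def odd(x):
    if x == 0:
        return False         # odd(0) is not derivable
    return even(x - 1)       # odd(X) <- succ(Y,X), even(Y)

print([n for n in range(8) if even(n)])  # → [0, 2, 4, 6]
```

The point of the parallel search is that neither definition can be finished without the other: the clause for odd refers to even, which is exactly the concept dependency the slide highlights.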
18. Data preparation
   - ATRE's observation language: multiple-head clauses
   - Enumeration of positive and negative examples (expert users' manual annotations + unlabelled tokens)
   - Descriptions of examples: which features?
     - statistical (frequencies)
     - lexical (alphanumeric, capitalized, ...)
     - syntactic (nouns, verbs, adjectives, ...)
     - domain-specific (dictionaries)
19. Text processing
   - The GATE (General Architecture for Text Engineering) framework (http://gate.ac.uk/)
   - ANNIE is the IE core:
     - tokeniser
     - sentence splitter
     - POS tagger
     - morphological analyser
     - gazetteers
     - semantic tagger (JAPE transducer)
     - orthomatcher (orthographic coreference)
   - Some domain-specific gazetteers have been added (diseases, enzymes, genes, methods of analysis)
20. Text processing
   - Some regular expressions to capture domain-specific patterns (alphanumeric strings, appositions, etc.)
   - Shallow acronym resolution
   - Screening operations:
     - some POSs (nouns, verbs, adjectives, numbers, symbols)
     - punctuation
     - stopwords (glimpse.cs.arizona.edu)
   - Stemming (Porter)
21. Text description
   - word_to_string(token)
   - Numerical: length(token), word_frequency(token), distance_word_category(token1,token2)
   - Structural: s_part_of(token1,token2), first(token), last(token), first_is_char(token), first_is_numeric(token), middle_is_char(token), middle_is_numeric(token), last_is_char(token), last_is_numeric(token), single_char(token), follows(token1,token2)
   - Lexical: type_of(token), type_POS(token)
   - Domain-dependent: word_category(token)
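A minimal sketch of how a few of the structural descriptors above could be computed for a token. The predicate names follow the slide; the implementations are illustrative guesses, not ATRE's actual feature extractor.

```python
# Compute a handful of the slide's structural token descriptors.

def token_features(token):
    return {
        "length": len(token),
        "first_is_char": token[:1].isalpha(),
        "first_is_numeric": token[:1].isdigit(),
        "last_is_char": token[-1:].isalpha(),
        "last_is_numeric": token[-1:].isdigit(),
        "middle_is_numeric": any(c.isdigit() for c in token[1:-1]),
        "single_char": len(token) == 1,
    }

# A mutation-style token: letter, digits, letter.
print(token_features("A3243G"))
```

On "A3243G" this yields first_is_char and last_is_char true with a numeric middle, i.e. the char_number_char pattern that the background knowledge on a later slide defines from exactly these primitives.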
22. Application
   - We considered 71 documents selected by biologists
   - Expert users manually annotated occurrences of entities of interest, namely
     - Mutation: position, type, substitution, type_position, locus
     - Subjects: nationality, method, pathology, category, number
   - The extraction process (both learning and recognition) is performed locally on automatically classified text portions of interest
23. Textual portions of papers were categorized in five classes: Abstract, Introduction, Materials & Methods, Discussion, and Results. The abstract of each paper was processed.
   (Chart: avg. no. of categories correctly classified)
24. Example description
   Sentence: "An A-to-G mutation at nucleotide position (np) 3243 in the mitochondrial tRNALeu(UUR) gene is closely associated with various clinical phenotypes of diabetes mellitus."
   Multiple-head clause (annotations ← description):
   [annotation(3)=substitution, annotation(4)=no_tag, annotation(5)=no_tag, annotation(6)=no_tag, annotation(7)=position, annotation(8)=no_tag, annotation(9)=locus, annotation(10)=no_tag, annotation(11)=no_tag, annotation(12)=no_tag, annotation(13)=no_tag, annotation(14)=no_tag, annotation(15)=no_tag, annotation(16)=pathology]
   ←
   [part_of(1,2)=true, contain(2,3)=true, ..., contain(2,16)=true, word_to_string(3)='A-to-G', word_to_string(4)='mutation', word_to_string(5)='nucleotid', word_to_string(6)='position', word_to_string(7)='3243', word_to_string(8)='mitochondri', word_to_string(9)='trnaleu(uur)', word_to_string(10)='gene', word_to_string(11)='clos', word_to_string(12)='associat', word_to_string(13)='variou', word_to_string(14)='clinic', word_to_string(15)='phenotyp', word_to_string(16)='diabetes_mellitus', type_of(3)=upperinitial, ..., type_of(7)=numeric, type_POS(3)=jj, type_POS(4)=nn, ..., type_POS(15)=nns, word_frequency(3)=3, word_frequency(4)=6, ..., word_frequency(16)=1, word_category(9)=locus, word_category(16)=disease, distance_word_category(9,16)=1, follows(3,4)=true, follows(4,5)=true, ..., follows(14,15)=true, follows(15,16)=true]
25. Background knowledge
   - follows(X,Z)=true ← follows(X,Y)=true, follows(Y,Z)=true
   - char_number_char(X)=true ← first_is_char(X)=true, middle_is_numeric(X)=true, last_is_char(X)=true
   - number_char_char(X)=true ← first_is_numeric(X)=true, middle_is_char(X)=true, last_is_char(X)=true
   - char_char_number(X)=true ← first_is_char(X)=true, middle_is_char(X)=true, last_is_numeric(X)=true
   Domain knowledge:
   - word_to_string(X)=transition ← word_to_string(X)=transversion
   - word_to_string(X)=substitution ← word_to_string(X)=replacement
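The first background rule defines the transitive closure of follows/2. A small illustrative evaluation of that closure over ground facts (a naive fixpoint computation, not ATRE's internal machinery):

```python
# Naive fixpoint computation of the transitive closure of a binary relation,
# mirroring: follows(X,Z) <- follows(X,Y), follows(Y,Z).

def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:                      # iterate until no new pair is derived
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in list(closure):
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

# Ground facts for a token chain 3 -> 4 -> 5 -> 6, as in the example description.
follows = {(3, 4), (4, 5), (5, 6)}
print(sorted(transitive_closure(follows)))
# → [(3, 4), (3, 5), (3, 6), (4, 5), (4, 6), (5, 6)]
```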
26. Experiments
   - Mutation template
   - 6-fold cross-validation
   - Users manually annotated 355 tokens (8.65 per abstract)
   - About 11% positives
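For concreteness, a 6-fold partition of the 355 annotated tokens. The token count and fold count come from the slide; the round-robin split itself is only an illustration of the setup, not necessarily the split actually used.

```python
# Round-robin 6-fold partition of 355 token indices (illustrative).
n_tokens, n_folds = 355, 6
folds = [list(range(i, n_tokens, n_folds)) for i in range(n_folds)]

print([len(f) for f in folds])           # fold sizes
print(sum(len(f) for f in folds))        # all 355 tokens covered exactly once
```

With roughly 11% positives, each fold holds only a handful of positive tokens, which is why the slide's small per-abstract annotation count matters for the evaluation.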
27. Experiments
   (Results table in the original slide)
28. Learned theories
   - annotation(X1)=position ← follows(X2,X1)=true, type_of(X1)=numeric, follows(X1,X3)=true, word_category(X3)=gene, word_to_string(X2)=position
   - annotation(X1)=type ← follows(X1,X2)=true, word_frequency(X2) in [8..140], follows(X3,X1)=true, annotation(X3)=substitution
   - annotation(X1)=position ← follows(X2,X1)=true, annotation(X2)=substitution, follows(X3,X1)=true, follows(X1,X4)=true, word_frequency(X4) in [6..6], annotation(X3)=type, follows(X1,X5)=true, annotation(X5)=locus, word_frequency(X1) in [1..2]
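The first learned rule can be read operationally: a token is a position if it is numeric, preceded by the word "position", and followed (possibly via the transitive follows of the background knowledge) by a gene-category token. A sketch of evaluating that body over hypothetical ground facts written in the style of the earlier example description (the facts below are invented for illustration):

```python
# Check the body of: annotation(X1)=position <- follows(X2,X1), type_of(X1)=numeric,
#                    follows(X1,X3), word_category(X3)=gene, word_to_string(X2)=position

def matches_position_rule(x1, facts):
    return (facts["type_of"].get(x1) == "numeric"
            and any(facts["word_to_string"].get(x2) == "position"
                    for (x2, y) in facts["follows"] if y == x1)
            and any(facts["word_category"].get(x3) == "gene"
                    for (x, x3) in facts["follows"] if x == x1))

facts = {  # hypothetical ground facts: 'position'(6) '3243'(7) ... gene token (9)
    "follows": {(6, 7), (7, 9)},
    "type_of": {7: "numeric"},
    "word_to_string": {6: "position"},
    "word_category": {9: "gene"},
}
print(matches_position_rule(7, facts))  # → True
print(matches_position_rule(6, facts))  # → False
```

The third rule shows why this cannot be done rule-by-rule in isolation: its body queries annotation/1 itself (substitution, type, locus), i.e. the learned predicates are mutually dependent, as with even/odd earlier.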
29. Wrap-up
   - IE in biomedicine
   - The ILP approach to IE within a multi-relational framework naturally supports:
     - domain knowledge
     - learning from users' interaction
     - relational representations
     - learning relational patterns that allow context-sensitive application of models
   - Recursive theory learning in IE: ATRE
   - Effort required at the text-processing level:
     - ambiguities
     - data sparseness
     - noise
   - Encouraging results on a real-world data set
30. Where from here?
   - Tests on available corpora for bio-IE:
     - GENIA
     - BioCreative
     - NLPBA
     - genic interaction challenges
   - Investigation of semi-supervised approaches: online extension of dictionaries
   - How to encapsulate taxonomic knowledge?
   - Can information extracted by ATRE really be used as background knowledge for genomic database mining?
31. (Speaker notes, shorthand Italian in the original; translated where clear)
   - Data sparseness
   - "Om + di com" = the system does not handle the morphosyntactic variety
   - Locus and position = word_to_string: specific models
32. Textual Pattern Extraction
   "immortal cells have lost their growth regulatory mechanisms and, thus, continue to divide indefinitely."
   abstract(11695244).
   contain_vx(11695244,'lose'). contain_nx(11695244,n1).
   word(n1,'immort'). word(n1,'cell').
   close_to(n1,'immort','cell'). contain_nx(11695244,n2).
   word(n2,'growth'). word(n2,'regulatori'). word(n2,'mechan').
   close_to(n2,'growth','regulatori'). close_to(n2,'regulatori','mechan').
   subject_object(n1,n2).
   contain_vx(11695244,'divide').
   Goal: to find descriptions of texts belonging to the "abstract" class
   Task-relevant objects: nominal chunks, words
   Reference object: abstract
33. Language bias
   - A language bias has been defined in ATRE to let users suggest initial models that the learned theory has to satisfy.
   - Example declarations can be used to specify language biases:
     - starting_number_of_literals(p, N)
     - starting_clause(p, [L1,L2,...,LN])
     - starting_literal(p, [L1,L2,...,LN])
34. Efficiency issues in ATRE
   - Caching the structure of the already explored search space as much as possible:
     - clauses generated during the i-th learning step are saved and reused at the (i+1)-th learning step
     - some pruning and grafting operations adapt previously explored hierarchies of clauses to the current learning step
   - Caching for clause evaluation:
     - saves much of the computational effort spent finding the positive and negative examples covered by each generated clause
     - applicable only to independent clauses, since their covered positive/negative examples can decrease or remain unchanged (but not increase)
35. The learning strategy ...
   The basic idea: stepwise construction of a recursive theory T
   T_0 = ∅, T_1, ..., T_i, T_{i+1}, ..., T_n = T
   such that:
   - T_{i+1} = T_i ∪ {C} for some clause C
   - LHM(T_i) ⊆ LHM(T_{i+1}), for each i ∈ {0, 1, ..., n-1}
   - pos(LHM(T_{i+1})) > pos(LHM(T_i)), for each i ∈ {0, 1, ..., n-1}
   - neg(LHM(T_i)) = 0, for each 1 ≤ i ≤ n
36. ... the learning strategy ...
   1) pos(LHM(T_{i+1})) > pos(LHM(T_i)):
      - Choose at least one seed for each predicate p to be learned, namely a positive example e+ of p such that e+ ∉ LHM(T_i).
      - Explore the space of clauses more general than e+, looking for a clause C such that neg(LHM(T_i ∪ {C})) = 0
   2) neg(LHM(T_i)) = 0:
      - Select the best consistent clause and apply the layering technique whenever a global inconsistency arises
   A variation of the classical separate-and-conquer strategy.
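The separate-and-conquer variation above can be schematized as a loop: pick a seed positive example not yet covered, find a consistent clause generalizing it, add the clause to the theory, and remove the examples it covers. All helpers and candidate clauses in this sketch are hypothetical stand-ins for ATRE's actual generalization search.

```python
# Schematic separate-and-conquer: clauses are modeled as boolean tests on
# examples; "consistent" means covering the seed and no negative example.

def learn_theory(positives, negatives, candidate_clauses):
    theory = []
    uncovered = set(positives)
    while uncovered:
        seed = next(iter(uncovered))
        # Pick a clause covering the seed and no negatives (raises StopIteration
        # if the candidate pool has no consistent clause, omitted here for brevity).
        clause = next(c for c in candidate_clauses
                      if c(seed) and not any(c(e) for e in negatives))
        theory.append(clause)
        uncovered = {e for e in uncovered if not clause(e)}  # separate step
    return theory

# Toy run: "learn" evenness over 0..9 from two candidate tests.
evens = [0, 2, 4, 6, 8]
odds = [1, 3, 5, 7, 9]
theory = learn_theory(evens, odds, [lambda x: x % 2 == 0, lambda x: x > 4])
print(len(theory))  # → 1 (the single consistent clause covers all positives)
```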
37. The ILP approach to Data Mining
   - Relational representations
   - Domain knowledge
   - Learning from users' interaction
   - Learning relational patterns to allow context-sensitive application of models
   - ILP = Inductive Learning ∩ Logic Programming
     - From IL: inductive reasoning from observations and background knowledge
     - From LP: first-order logic as representation formalism
