Idéalisation d'assemblages CAO pour l'analyse EF de structures

Flavien Boussuge

To cite this version: Flavien Boussuge. Idéalisation d'assemblages CAO pour l'analyse EF de structures. Other. Université de Grenoble, 2014. French.

HAL Id: tel-01071560, https://tel.archives-ouvertes.fr/tel-01071560, submitted on 6 Oct 2014.

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L'archive ouverte pluridisciplinaire HAL est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.

THÈSE
Pour obtenir le grade de DOCTEUR DE L'UNIVERSITÉ DE GRENOBLE
Spécialité : Mathématiques et Informatique
Arrêté ministériel : 7 août 2006
Présentée par Flavien Boussuge
Thèse dirigée par Jean-Claude Léon et codirigée par Stefanie Hahmann, préparée au sein du Laboratoire Jean Kuntzmann - INRIA Grenoble et d'AIRBUS Group Innovations, et de l'École Doctorale Mathématiques, Sciences et Technologies de l'Information, Informatique.

Idealization of CAD assemblies for FE structural analyses
Idéalisation d'assemblages CAO pour l'analyse EF de structures

Thèse soutenue publiquement le TBD, devant le jury composé de :
Prof. Cecil Armstrong, Queen's University Belfast, Rapporteur
Dr. Bruno Lévy, Directeur de recherche, INRIA Nancy, Rapporteur
Dr. Lionel Fine, AIRBUS Group Innovations, Suresnes, Examinateur
Prof. Jean-Philippe Pernot, Arts et Métiers ParisTech, Aix-en-Provence, Examinateur
Prof. Jean-Claude Léon, INP-Grenoble, ENSE3, Directeur de thèse
Prof. Stefanie Hahmann, INP-Grenoble, ENSIMAG, Co-Directeur de thèse
M. Nicolas Chevassus, AIRBUS Group Innovations, Suresnes, Invité
M. François Guillaume, AIRBUS Group Innovations, Suresnes, Invité

The research described in this thesis was carried out at the Laboratoire Jean Kuntzmann, LJK-INRIA research team Imagine, and the SPID team of AIRBUS Group Innovations. This work was supported by a CIFRE convention of the ANRT and the French Ministry of Higher Education and Research.

© 2014, F. Boussuge, all rights reserved.

Idealization of CAD assemblies for FE structural analysis

Abstract

Aeronautical companies face a significant increase in the complexity and size of simulation models, especially at the level of assemblies and sub-systems of their complex products. Pre-processing of Computer Aided Design (CAD) models derived from the digital representation of sub-systems, i.e., Digital Mock-Ups (DMUs), into Finite Element Analysis (FEA) models usually requires many tedious manual tasks of model preparation and shape transformation, in particular when idealizations of components or assemblies have to be produced. Therefore, the purpose of this thesis is to contribute to the robust automation of the time-consuming sequences of assembly preparation processes. Starting from a DMU enriched with geometric interfaces between components and functional properties, the proposed approach takes DMU enrichment to the next level by structuring components' shapes. This approach extracts a construction graph from B-Rep CAD models so that the corresponding generative processes provide meaningful volume primitives for idealization.
These primitives form the basis of a morphological analysis which identifies the sub-domains for idealization in the components' shapes and their associated geometric interfaces. Subsequently, the models of components as well as their geometric representations get structured in an enriched DMU which is contextualized for FEA. Based on this enriched DMU, simulation objectives can be used to specify geometric operators that can be robustly applied to automate component and interface shape transformations during an assembly preparation process. A new idealization process for standalone components is proposed which benefits from the decomposition into sub-domains and geometric interfaces provided by the morphological analysis of the component. Interfaces between sub-domains are evaluated to robustly process the connections between the idealized sub-domains, leading to the complete idealization of the component. Finally, the scope of the idealization process is extended to shape transformations at the assembly level and evolves toward a methodology of assembly pre-processing. This methodology aims at exploiting the functional information of the assembly and the interfaces between components to perform transformations of groups of components and assembly idealizations. In order to prove the applicability of the proposed methodology, the corresponding operators are developed and successfully tested on industrial use-cases.

Keywords: Assembly, DMU, idealization, CAD-CAE integration, B-Rep model, generative shape process, morphological analysis

Idéalisation d'assemblages CAO pour l'analyse EF de structures

Résumé

Les entreprises aéronautiques ont un besoin continu de générer de grands et complexes modèles de simulation, en particulier pour simuler le comportement structurel de sous-systèmes de leurs produits. Actuellement, le pré-traitement des modèles de Conception Assistée par Ordinateur (CAO) issus des maquettes numériques de ces sous-systèmes en Modèles Éléments Finis (MEF) est une tâche qui demande de longues heures de travail de la part des ingénieurs de simulation, surtout lorsque des idéalisations géométriques sont nécessaires. L'objectif de ce travail de thèse consiste à définir les principes et les opérateurs constituant la chaîne numérique qui permettra, à partir de maquettes numériques complexes, de produire des géométries directement utilisables pour la génération de maillages éléments finis d'une simulation mécanique. À partir d'une maquette numérique enrichie d'informations sur les interfaces géométriques entre composants et sur les propriétés fonctionnelles de l'assemblage, l'approche proposée dans ce manuscrit est d'ajouter un niveau supplémentaire d'enrichissement en fournissant une représentation structurelle de haut niveau de la forme des composants CAO. Le principe de cet enrichissement est d'extraire un graphe de construction de modèles CAO B-Rep de sorte que les processus de génération de forme correspondants fournissent des primitives volumiques directement adaptées à un processus d'idéalisation. Ces primitives constituent la base d'une analyse morphologique qui identifie dans les formes des composants à la fois des sous-domaines candidats à l'idéalisation mais également les interfaces géométriques qui leur sont associées. Ainsi, les modèles de composants et leurs représentations géométriques sont structurés.
Ils sont intégrés dans la maquette numérique enrichie qui est ainsi contextualisée pour la simulation par EF. De cette maquette numérique enrichie, les objectifs de simulation peuvent être utilisés pour spécifier les opérateurs géométriques adaptant les composants et leurs interfaces lors de processus automatiques de préparation d'assemblages. Ainsi, un nouveau procédé d'idéalisation de composant unitaire est proposé. Il bénéficie de l'analyse morphologique faite sur le composant, lui fournissant une décomposition en sous-domaines idéalisables et en interfaces. Cette décomposition est utilisée pour générer les modèles idéalisés de ces sous-domaines et les connecter à partir de l'analyse de leurs interfaces, ce qui conduit à l'idéalisation complète du composant. Enfin, le processus d'idéalisation est étendu au niveau de l'assemblage et évolue vers une méthodologie de pré-traitement automatique de maquettes numériques. Cette méthodologie vise à exploiter l'information fonctionnelle de l'assemblage et les informations morphologiques des composants afin de transformer à la fois des groupes de composants associés à une même fonction et de traiter les transformations d'idéalisation de l'assemblage. Pour démontrer la validité de la méthodologie, des opérateurs géométriques sont développés et testés sur des cas d'application industriels.

Mots-clés : Assemblage, Maquette Numérique, intégration CAO-calcul, modèle B-Rep, graphe de construction, processus génératif de forme, idéalisation

Table of contents

Abstract
Résumé
Acronyms
Introduction
  Context of numerical certification of aeronautical structures
  Some limits faced in structural simulations
  Work Purposes
1 From a Digital Mock Up to Finite Element Assembly Models: Current practices
  1.1 Introduction and definition of the DMU concept
  1.2 Geometric representation and modeling of 3D components
    1.2.1 Categories of geometric families
    1.2.2 Digital representation of solids in CAD
    1.2.3 Complementary CAD software capabilities: Feature-based and parametric modeling
  1.3 Representation and modeling of an assembly in a DMU
    1.3.1 Effective DMU content in aeronautical industry
    1.3.2 Conventional representation of interfaces in a DMU
  1.4 Finite Element Analysis of mechanical structures
    1.4.1 Formulation of a mechanical analysis
    1.4.2 The required input data for the FEA of a component
    1.4.3 FE simulations of assemblies of aeronautical structures
  1.5 Difficulties triggering a time consuming DMU adaptation to generate FE assembly models
    1.5.1 DMU adaption for FE analyses
    1.5.2 Interoperability between CAD and CAE and data consistency
    1.5.3 Current operators focus on standalone components
    1.5.4 Effects of interactions between components over assembly transformations
  1.6 Conclusion and limits of current practices about DMU manual adaption for FE assembly models generation
  1.7 Research objectives: Speed up the DMU pre-processing to reach the simulation of large assemblies
2 Current status of procedural shape transformation methods and tools for FEA pre-processing
  2.1 Targeting the data integration level
  2.2 Simplification operators for 3D FEA analysis
    2.2.1 Classification of details and shape simplification
    2.2.2 Detail removal and shape simplification based on tessellated models
    2.2.3 Detail removal and shape simplification on 3D solid models
    2.2.4 Conclusion
  2.3 Dimensional reduction operators applied to standalone components
    2.3.1 Global dimensional reduction using the MAT
    2.3.2 Local mid-surface abstraction
    2.3.3 Conclusion
  2.4 About the morphological analysis of components
    2.4.1 Surface segmentation operators
    2.4.2 Solid segmentation operators for FEA
    2.4.3 Conclusion
  2.5 Evolution toward assembly pre-processing
  2.6 Conclusion and requirements
3 Proposed approach to DMU processing for structural assembly simulations
  3.1 Introduction
  3.2 Main objectives to tackle
  3.3 Exploiting an enriched DMU
  3.4 Incorporating a morphological analysis during FEA pre-processing
    3.4.1 Enriching DMU components with their shape structure as needed for idealization processes
    3.4.2 An automated DMU analysis dedicated to a mechanically consistent idealization process
  3.5 Process proposal to automate and robustly generate FEMs from an enriched DMU
    3.5.1 A new approach to the idealization of a standalone component
    3.5.2 Extension to assembly pre-processing using the morphological analysis and component interfaces
  3.6 Conclusion
4 Extraction of generative processes from B-Rep shapes to structure components up to assemblies
  4.1 Introduction
  4.2 Motivation to seek generative processes
    4.2.1 Advantages and limits of present CAD construction tree
    4.2.2 A new approach to structure a component shape: construction graph generation
  4.3 Shape modeling context and process hypotheses
    4.3.1 Shape modeling context
    4.3.2 Generative process hypotheses
    4.3.3 Intrinsic boundary decomposition using maximal entities
  4.4 Generative processes
    4.4.1 Overall principle to obtain generative processes
    4.4.2 Extrusion primitives, visibility and attachment
    4.4.3 Primitive removal operator to go back in time
  4.5 Extracting the generative process graph
    4.5.1 Filtering out the generative processes
    4.5.2 Generative process graph algorithm
  4.6 Results of generative process graph extractions
  4.7 Extension of the component segmentation to assembly structure segmentation
  4.8 Conclusion
5 Performing idealizations from construction graphs
  5.1 Introduction
  5.2 The morphological analysis: a filtering approach to idealization processes
    5.2.1 Morphological analysis objectives for idealization processes based on a construction graph
    5.2.2 Structure of the idealization process
  5.3 Applying idealization hypotheses from a construction graph
    5.3.1 Evaluation of the morphology of primitives to support idealization
    5.3.2 Processing connections between 'idealizable' sub-domains Dij
    5.3.3 Extending morphological analyses of Pi to the whole object M
  5.4 Influence of external boundary conditions and assembly interfaces
  5.5 Idealization processes
    5.5.1 Linking interfaces to extrusion information
    5.5.2 Analysis of GS to generate idealized models
    5.5.3 Generation of idealized models
  5.6 Conclusion
6 Toward a methodology to adapt an enriched DMU to FE assembly models
  6.1 Introduction
  6.2 A general methodology to assembly adaptions for FEA
    6.2.1 From simulation objectives to shape transformations
    6.2.2 Structuring dependencies between shape transformations as contribution to a methodology of assembly preparation
    6.2.3 Conclusion and methodology implementation
  6.3 Template-based geometric transformations resulting from function identifications
    6.3.1 Overview of the template-based process
    6.3.2 From component functional designation of an enriched DMU to product functions
    6.3.3 Exploitation of Template-based approach for FE models transformations
    6.3.4 Example of template-based operator of bolted junctions transformation
  6.4 Full and robust idealization of an enriched assembly
    6.4.1 Extension of the template approach to idealized fastener generation
    6.4.2 Presentation of a prototype dedicated to the generation of idealized assemblies
  6.5 Conclusion
Conclusion and perspectives
Bibliography
Appendices
A Illustration of generation processes of CAD components
  A.1 Construction process of an injected plastic part
  A.2 Construction process of an aeronautical metallic part
B Features equivalence
C Taxonomy of a primitive morphology
D Export to CAE software

List of Figures

1.1 The Digital Mock-Up as the reference representation of a product, courtesy of Airbus Group Innovations.
1.2 Regularized boolean operations of two solids.
1.3 CSG and B-Rep representations of a solid.
1.4 Examples of non-manifold geometric models.
1.5 CAD construction process using features.
1.6 Example of an aeronautical CAD assembly: Root joint model (courtesy of Airbus Group Innovations).
1.7 Example of complex DMU assembly from Alcas project [ALC08] and Locomachs project [LOC16].
1.8 Representation of a bolted junction in a structural DMU of an aircraft.
1.9 Classification of Conventional Interfaces (CI) under contact, interference and clearance categories.
1.10 Process flow of a mechanical analysis.
1.11 Example of FE mesh models.
1.12 Example of a FE fastener simulating the behavior of a bolted junction using beam elements.
1.13 Example of aeronautical FE models.
1.14 Illustration of a shim component which does not appear in the DMU model. Shim components are directly manufactured when structural components are assembled.
1.15 Illustration of a manual process to generate an idealized model.
1.16 Example of contact model for a FE simulation.
2.1 Identification of a skin detail.
2.2 Illustration of the MAT method.
2.3 Details removal using the MAT and detail size criteria [Arm94].
2.4 Topology adaption of CAD models for meshing [FCF∗08].
2.5 Illustration of CAD defeaturing using CATIA.
2.6 Illustration of the mixed dimensional modeling using a MAT [RAF11].
2.7 Illustration of mid-surface abstraction [Rez96].
2.8 An example of particular geometric configuration not addressed by face-pairs methods.
2.9 Illustration of different connection models for idealized components.
2.10 Mesh Segmentation techniques.
2.11 Automatic decomposition of a solid to identify thick/thin regions and long and slender ones, from Makem [MAR12].
2.12 Idealization using extruded and revolved features in a construction tree, from [RAM∗06].
2.13 Divide-and-conquer approach to idealization processes using a maximal volume decomposition (by Woo [Woo14]).
2.14 Assembly interface detection of Jourdes et al. [JBH∗14].
2.15 Various configurations of the idealization of a small assembly containing two components.
3.1 Current process to prepare assembly structures. Each component of the assembly is transformed individually.
3.2 Structuring a DMU model with functional properties.
3.3 DMU enrichment process with assembly interfaces and component functional designations.
3.4 Interactions between simulation objectives, hypotheses and shape transformations.
3.5 Proposed approach to generate a FEM of an assembly structure from a DMU.
4.1 An example of a shape generation process.
4.2 An example of shape analysis and generative construction graph.
4.3 Modeling hypotheses about primitives to be identified in a B-Rep object.
4.4 Entities involved in the definition of an extrusion primitive in a B-Rep solid.
4.5 Illustrations of two additive primitives: (a) an extrusion primitive and (b) a revolution one. The mid-surfaces of both primitives lie inside their respective volumes.
4.6 Example of two possible decompositions into primitives from a solid.
4.7 Pipeline producing and exploiting generative shape processes.
4.8 Examples of configurations where faces must be merged to produce a shape-intrinsic boundary decomposition.
4.9 Overall scheme to obtain generative processes.
4.10 An example illustrating the successive identification and removal of primitives.
4.11 An example illustrating the major steps to identify a primitive and remove it from an object.
4.12 Example of geometric interface.
4.13 Example of a collection of primitives identified from a B-Rep object.
4.14 Illustration of the removal of Pi.
4.15 Illustration of the removal operator for interface of surface type 1a.
4.16 Illustration of the removal operator for interface of volume type 3.
4.17 Illustration of the simplicity concept to filtering out the generative processes.
4.18 Criteria of maximal primitives and independent primitives.
4.19 Extraction of generative processes for four different components.
4.20 Result of generative process graph extractions.
4.21 Illustration of the continuity constraint.
4.22 A set of CAD construction trees forming a graph derived from two consecutive construction graph nodes.
4.23 Illustration of the compatibility between the component segmentation (a) and assembly structure segmentation (b).
4.24 Insertion of the interface graphs between primitives obtained from component segmentations into the graph of assembly interfaces between components.
5.1 From a construction graph of a B-Rep shape to a full idealized model for FEA.
5.2 Global description of an idealization process.
5.3 Determination of the idealization direction of extrusion primitives using a 2D MAT applied to their contour.
5.4 Example of the morphological analysis of a component.
5.5 Idealization analysis of components.
5.6 Illustration of primitives' configurations containing embedded sub-domains Dik which can be idealized as beams or considered as details.
5.7 Example of a beam morphology associated with a MAT medial edge of a primitive Pi.
5.8 Synthesis of the process to evaluate the morphology of primitives Pi.
5.9 Taxonomy of connections between extrusion sub-domains.
5.10 Illustration of propagation of the morphological analysis of two primitives.
5.11 Propagation of the morphology analysis on Pi to the whole object M.
5.12 Influence of an assembly interface modeling hypothesis over the transformations of two components.
5.13 Illustration of the inconsistencies between an assembly interface between two components and its projection onto their idealized representations.
5.14 Two possible schemes to incorporate assembly interfaces during the segmentation process of components.
5.15 Illustration of an interface graph derived from the segmentation process of a component.
5.16 Illustration of an interface cycle between primitives.
5.17 Examples of medial surface positioning improvement. (a) Offset of parallel medial surfaces, (b) offset of L-shaped medial surfaces.
5.18 Example of identification of a group of parallel medial surfaces and border primitives configurations.
5.19 Example of a volume detail configuration lying on an idealized primitive.
5.20 Interfaces connection operator.
5.21 Illustration of the idealization process of a component that takes advantage of its interface graph structures.
5.22 Illustration of the successive phases of the idealization process.
6.1 Setting up an observation area consistent with simulation objectives.
6.2 Entire idealization of two components.
6.3 Transformation of groups of components as analytical models.
6.4 Influence of interfaces over shape transformations of components.
6.5 Synthesis of the structure of an assembly simulation preparation process.
6.6 Use-Case 1: Simplified solid model with sub-domains decomposition around bolted junctions.
6.7 Overview of the main phases of the template-based process.
6.8 Subset of TFN, defining a functional structure of an assembly.
6.9 Principle of the template-based shape transformations.
6.10 Compatibility conditions (CC) of shape transformations ST applied to T.
6.11 Checking the compatibility of ST(T) with respect to the surrounding geometry of T.
6.12 Multi-scale simulation with domain decomposition around bolted junctions (courtesy of ROMMA project [ROM14]).
6.13 Template-based transformation ST(T) of a bolted junction into a simple mesh model.
6.14 User interface of a template to transform 'assembly Bolted Junctions'.
6.15 Results of template-based transformations on CAD assembly models.
6.16 Idealized surface model with FE fasteners to represent bolted junctions.
6.17 Illustration of Task 2: Transformation of bolted junction interfaces into mesh nodes.
6.18 Results of the template-based transformation of bolted junctions.
6.19 User interface of the prototype for assembly idealization.
6.20 Illustration of a component segmentation which extracts extruded volumes to be idealized in task 3.
6.21 Illustration of task 4: Identification and transformation of groups of idealized surfaces connected to the same assembly interfaces.
6.22 Final result of the idealized assembly model ready to be meshed in CAE software.
A.1 Example of a shape generation process 1/5
A.2 Example of a shape generation process 2/5
A.3 Example of a shape generation process 3/5
A.4 Example of a shape generation process 4/5
A.5 Example of a shape generation process 5/5
A.6 Example of a shape generation process of a simple metallic component
B.1 Examples of Sketch-Based Features
B.2 Examples of Sketch-Based Features
B.3 Examples of Dress-Up Features
B.4 Examples of Boolean operations
D.1 Illustration of the STEP export of a Bolted Junction with sub-domains around screw. (a) Product structure open in CATIA software, (b) associated xml file containing the association between components and interfaces.
D.2 Illustration of the STEP export of a Bolted Junction. Each component containing volume sub-domains is exported as STEP assembly.
D.3 Illustration of the STEP export of a Bolted Junction. Each inner interface between sub-domains is part of the component assembly.
D.4 Illustration of the STEP export of a Bolted Junction. Each outer interface between components is part of the root assembly.
D.5 Illustration of the STEP export of the full Root Joint assembly.

List of Tables

1.1 Categories of Finite Elements for structural analyses.
1.2 Connector entities available in CAE software.
1.3 Examples of interactions or even dependencies between simulation objectives and interfaces as well as component shapes.
5.1 Categorization of the morphology of a primitive using a 2D MAT applied to its contour.
C.1 Morphology associated with a MAT medial edge of a primitive Pi. 1/2
C.2 Morphology associated with a MAT medial edge of a primitive Pi. 2/2

Acronyms

B-Rep: Boundary representation
CAD: Computer Aided Design
CAD-FEA: Computer Aided Design to Finite Element Analysis(es)
CAE: Computer Aided Engineering
CSG: Constructive Solid Geometry
DMU: Digital Mock-Up
FE: Finite Elements
FEA: Finite Element Analysis(es)
FEM: Finite Element Models
KBE: Knowledge Based Engineering
MAT: Medial Axis Transform
PDMS: Product Data Management System
PDP: Product Development Process
PLM: Product Lifecycle Management

Introduction

Context of numerical certification of aeronautical structures

Aeronautical companies face increasing needs in simulating the structural behavior of product sub-assemblies. Numerical simulation plays an important role in the Product Development Process (PDP) of a mechanical structure: it allows engineers to simulate numerically the mechanical behavior of this structure subjected to a set of physical constraints (forces, pressures, thermal fields, ...). The local or global, linear or nonlinear, static or dynamic analyses of structural phenomena using Finite Element Analysis (FEA) simulations are now widespread in industry. These simulations play an important role in reducing the cost of physical prototyping and in justifying and certifying structural design choices. As an example, let us consider the latest Airbus program, the A350. There, a major physical test program is still required to support the development and certification of the aircraft. However, it is based on predictions obtained from Finite Element Models (FEM).
Consequently, the test program validates the internal load distributions which have been computed numerically. Today, FEAs are no longer restricted to the simulation of standalone components; they can be applied to large assemblies of components. Simulation software capabilities, associated with optimized mathematical resolution methods, can process the very large numbers of unknowns derived from the initial mechanical problems. Such simulations require a few days of computation, which is an acceptable amount of time during a product development process in aeronautics. However, it is important to underline that these numerical simulations require setting up a mathematical model from the physical object or assembly being analyzed. By design, the FEA method incorporates simplification hypotheses applied to the geometric models of components or assemblies compared to their real shapes and, finally, produces an approximate solution. To obtain the most faithful results with minimum bias within a short amount of time, engineers are encouraged to spend a fair amount of time on the generation of simulation models. They have to stay critical with respect to the mathematical method used and the consequences of simplification hypotheses, in order to understand the deviations of the simulation model compared to real tests and to judge the validity of the simulation results.

Some limits faced in structural simulations

Numerical simulations of assemblies remain complex and tedious due to the pre-processing of assembly 3D models available from Digital Mock-Ups (DMUs), which stand as the virtual product reference in industry. This phase is highly time consuming compared to that of the numerical computations. In the past, the use of distinct software tools between the design and simulation phases required regenerating the Computer Aided Design (CAD) geometry of a component in the simulation software. Today, the development and use of DMUs in a PDP, even for rather small assembly models, bring 3D models within reach of engineers. The concept of DMU was initially developed for design and manufacture purposes as a digital representation of an assembly of mechanical components. Consequently, DMUs are good candidates to support digital analyses of several PDP processes, e.g., part assembly ones. In industry, DMUs are widely used during a PDP and regarded as the virtual product geometry reference. This geometric model contains a detailed 3D representation of the whole product structure that is made available to simulation engineers. To prepare large sub-structure models for simulation, such as wings or aircraft fuselage sections, the DMU offers a detailed and precise geometric input model. However, speeding up the simulation model generation strongly relies on the time required to perform the geometric transformations needed to adapt a DMU to FEA requirements. The pre-processing phase implies reworking all DMU 3D data to collect subsets of components, to remove unnecessary or harmful areas leading to simplified shapes, to generate adequate FE meshes, and to add the boundary conditions and material properties as needed for a given simulation goal. All these operations, the way they are currently performed, bring little added value to a PDP. Currently, the time and human resources involved in pre-processing CAD models derived from DMUs into FE models can even prevent engineers from setting up structural analyses.
Very tedious tasks are required to process the large number of DMU components and the connections between them, such as contact areas. Commercial software already provides some answers to the interactions between design and behavioral simulation processes for single components. Unfortunately, the operators available are restricted either to interactive geometric transformations, leading to very tedious tasks, or to automated simulation model generation adapted to simple models only. A reasonably automated generation of complex assembly simulation models still raises real difficulties, and it is far too tedious to process groups of components as well as sub-assemblies. As detailed in Chapter 1, these difficulties arise because shape transformations are needed: designers and simulation engineers work with different target models, with the result that DMUs cannot be easily used to support the preparation of structural analysis models. Scientific research work has mainly focused on the use of global geometric transformations of standalone CAD components. Few contributions have addressed the automation of assembly pre-processing (see Chapter 2), leaving engineers to interactively process each assembly component. Aeronautical structures are particularly complex to transform due to the large number of transformations over hundreds of thousands of parts and interface joints. Current operators are still not generic enough to be adapted to engineers' needs, especially when idealizations of components or assemblies must be produced. Indeed, it is still common practice for engineers to generate their own models interactively because preparation operations are not generic enough. Consequently, some simulations are not even addressed because their preparation time cannot fit within the schedule of a PDP, i.e., simulation results would be available too late.

Work Purposes

To meet the needs of large assembly simulation model preparation, improvements in processing DMUs are a real challenge for aircraft companies. The contributions of this thesis are mainly oriented toward the transformation of 3D CAD models extracted from a DMU, and their associated semantics, for Finite Element analysis of large assembly structures. To handle large models, it is mandatory that the proposed principles and operators speed up and automate as much as possible the required DMU transformations. This work is guided by input DMU data defining the exact content of the simulation models to be built and uses mechanical and geometric criteria for identifying the necessary geometric adaptations. This research thesis is divided into six chapters:

• Chapter 1 describes the current practices in the aeronautical industry regarding the generation, from DMUs, of geometric models supporting the generation of simulation models. It defines the different geometric entities used in CAD software as well as the notion of mechanical analysis using the FE method. It also details the problem of DMU geometry preparation for FE assembly models;

• Chapter 2 is a current bibliographical and technological status of the methods and tools proposed for the preparation and adaptation of geometric models for FEA. This analysis reviews component pre-processing as well as its assembly counterpart;

• Chapter 3 presents the proposed contribution to assembly pre-processing based on the recommendations of Chapter 1 and the analysis of Chapter 2.
This approach uses, as input model, an enriched DMU at the assembly level with geometric interfaces between its components and functional properties of these components, and, at the component level, a structured volume segmentation using a graph structure. From this enriched model, an analysis framework is able to connect the simulation hypotheses with the shape transformations. This chapter also identifies the geometric operators segmenting a component to transform it in accordance with the user's simulation requirements;

• Chapter 4 exposes the principles of the geometric enrichment of a component using a construction graph. An algorithm extracting generative processes from B-Rep shapes is detailed. It provides a powerful geometric structure containing simple primitives and geometric interfaces between them. This structure contributes to an analysis framework and remains compatible with an assembly structure containing components and geometric interfaces;

• Chapter 5 details the analysis framework through the exploitation of the construction graph to analyze the morphology of components. Then, geometric operators are specified that can be robustly applied to automate components' and interfaces' shape transformations during an assembly preparation process;

• Chapter 6 extends this approach toward a methodology, using the geometric operators previously described, that performs idealizations and template-based transformations of groups of components. Results of this methodology are also presented through aeronautical examples that use the transformation operators developed.

Chapter 1
From a Digital Mock Up to Finite Element Assembly Models: Current practices

This chapter presents the problem of DMU adaptation for the generation of Finite Element (FE) assembly models. In a first step, the technical context is addressed through the description of the DMU data content. This description deals with the geometric entities and concepts used to represent 3D CAD components as well as with the representation of assemblies currently available before DMU pre-processing. Then, the notion of mechanical analysis using the FE method is defined and the main categories of geometric models within FEA are described. The analysis of current industrial processes and practical DMU data content highlights the issues regarding assembly simulation model preparation and points out the lack of tools in industrial software to reach the level of abstraction required by FEA, especially when idealizations are needed. The main time-consuming shape transformations and the missing information about components' interfaces in DMUs are identified as a starting point to improve the robustness of DMU pre-processing.

1.1 Introduction and definition of the DMU concept

To speed up a Product Development Process (PDP), as stated in the introduction, aeronautical, automotive and other companies face increasing needs in setting up FE simulations of large sub-structures of their products. Their challenge covers the study of standalone components but is now expanding to simulating the structural behavior of large assembly structures containing up to thousands of components. Today, aeronautical companies have to manage a range of products during their entire lifecycle, from their early design phase to their manufacture and even up to their destruction and recycling.
The corresponding digital data management concept aggregating all the information about each product is called Product Lifecycle Management (PLM). This concept includes the management of a digital product definition for all the functions involved in a PDP. To replace a physical mock-up by its digital counterpart, the concept of a virtual representation of a product has been developed, i.e., the Digital Mock-Up (DMU) (see Figure 1.1).

[Figure 1.1: The Digital Mock-Up as the reference representation of a product (design, manufacturing, pre-sales, maintenance, industrial means), courtesy of Airbus Group Innovations.]

As Drieux [Dri06] explained, a DMU is an extraction from the PLM of a product at a given time. The DMU was initially created for design and manufacture purposes as a digital representation of an assembly of mechanical components. Consequently, DMUs are convenient to support a virtual analysis of several processes, e.g., part assembly ones. For instance, a DMU may be extracted at the manufacturing level, which lets engineers quickly generate and simulate trajectories of industrial robots and set and validate assembly tolerances. During project reviews of complex products such as an aircraft, DMUs contribute to the technical analysis of a product, as conducted by engineering teams. Connected to virtual reality technology, a DMU can be the basis of efficient immersive tools to analyze interferences among the various subsystems contained in the corresponding product [Dri06, IML08]. During the design phase, the DMU is considered as the reference geometry of the product representation. It provides engineers with all the digital information needed during their PDP. The various CAD models representing different stages of the product during its development, or metadata related to specific applications such as manufacturing, are examples of such information. The development and use of DMUs in a PDP bring 3D assembly models within reach of engineers. Because this reference model contains detailed 3D geometry, it offers new perspectives for analysts to process more complex shapes while speeding up their simulation model generation up to the FE mesh. However, speeding up the simulation model generation strongly relies on the time required to perform the geometric transformations needed to adapt the DMU to FE requirements. In order to understand the challenges involved in the preparation, from DMUs, of large sub-structure models for FE simulations, it seems appropriate to first present the various concepts and definitions related to the Finite Element Analysis (FEA) of mechanical structures as well as those related to the current models and information available in DMUs. This chapter describes the current practices regarding the shape transformations needed to generate, from a DMU, the specific geometric models required for FEA. Starting from the theoretical formulation of a mechanical analysis and proceeding to the effective shape transformations faced by engineers when generating FE models, this chapter raises the time-consuming preparation process of large assembly structures as an issue. This is detailed in Section 1.5 and refers to the identification of key information content during current FE simulation preparation processes from two perspectives: a component point of view as well as an assembly point of view.
In the last section, 1.7, the research objectives are presented to give the reader an overview of the research topic addressed in this thesis.

1.2 Geometric representation and modeling of 3D components

A DMU is straightforwardly related to the representation of the 3D components contained in the product. As a starting point, this section outlines the principles of the mathematical and computer modeling of 3D solids used in CAD software. It also describes the common schemes available for designers to generate components through a construction process. Because a component is used in a DMU as a volume object, this section focuses on solids' representations.

1.2.1 Categories of geometric families

Prior to explaining the common concepts for representing 3D components, it is important to recall that the geometric model describing a simulation model contains different categories of geometric entities used in the CAD and FEA software environments. These entities can be classified, from a mathematical point of view, according to their manifold dimension.

0-dimensional manifold: Point. These entities, the simplest geometric representation, are not intended to represent the detailed geometry of components. However, they are often used in structural analysis as an abstraction of a component, e.g., its center of mass or a key point of a component where concentrated forces are applied. They are also frequently encountered to represent interfaces between aeronautical systems and structures in a DMU, and they are the lowest-level entity in the description of a component's solid model.

1-dimensional manifold: Line, Circle, Curve. These entities, such as lines, circles and more generally curves, are mainly involved in the definition of models of higher dimension like surfaces. In structural analysis, they represent long and slender shapes, e.g., components behaving like beams, complemented with section inertia. During a solid modeling process, they are part of the definition of 2D sketches (see Figure 1.5 for an example of a sketch-based form feature), or act as profile curves in other shape design processes. They also represent the location of geometric interfaces between components, e.g., the contact of a cylinder onto a plane.

2-dimensional manifold: Surface. Surfaces are used to represent the skin, or boundary, of a 3D object. Initially, they were introduced to represent complex shapes of an object, commonly designated as free-form surfaces. Polynomial surfaces like Bézier, B-Spline, NURBS (Non-Uniform Rational B-Spline) and Coons surfaces are commonly used for modeling objects with curved surfaces and for the creation of simulation models, e.g., CFD simulations for aerodynamics or simulations using the isogeometric paradigm. Here, surface models will be essentially reduced to canonical surfaces, i.e., plane, sphere, cylinder, cone and torus, which are also described by classical implicit functions. This restriction is set for simplicity purposes, though it is not too restrictive for mechanical components because these surfaces are heavily used. In structural analysis, using a surface model is frequent practice to represent idealized models equivalent to a volume component resembling a sheet. The notion of idealization will be specified in subsection 1.3.2. Even if surface models can represent complex shapes, they are not sufficient to represent a 3D object as a volume, which requires an explicit representation of the notion of inside, or outside.
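To make the idea of a sheet-like component concrete, the following minimal Python sketch (not taken from the thesis; the bounding-box test, the function name and the 0.1 slenderness threshold are illustrative assumptions only) flags a solid whose thickness is much smaller than its two other extents as a candidate for surface idealization:

```python
# Minimal sketch: decide whether a plate-like solid could be idealized as a
# mid-surface (shell) model. The threshold and the bounding-box criterion are
# illustrative assumptions, not the thesis' morphological analysis.
from dataclasses import dataclass

@dataclass
class BoundingBox:
    dx: float  # extent along x
    dy: float  # extent along y
    dz: float  # extent along z

def shell_idealization_candidate(box: BoundingBox, ratio: float = 0.1) -> bool:
    """Return True when the smallest extent (thickness) is much smaller
    than the two other extents, i.e. the solid resembles a sheet."""
    thickness, span1, span2 = sorted([box.dx, box.dy, box.dz])
    return thickness <= ratio * span1 and thickness <= ratio * span2

if __name__ == "__main__":
    print(shell_idealization_candidate(BoundingBox(200.0, 150.0, 4.0)))  # True: thin plate
    print(shell_idealization_candidate(BoundingBox(60.0, 50.0, 40.0)))   # False: bulky solid
```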
3-dimensional manifold: Solid. A solid contains all the information needed to define comprehensively the volume of the 3D object it represents. Based on Requicha's mathematical definition [Req77, Req80], a solid is a subset of the 3D Euclidean space. Its principal properties are:

• A solid has a homogeneous three-dimensionality. It contains a homogeneous interior, and its boundary cannot have isolated portions;
) and the operations that have been applied to them, essentially regularized Boolean operations (union, intersection, difference). A CSG model can be visually represented as a tree structure, although it does not necessarily form a binary tree. The advantage of this model is to give a structure which can be easily modified, provided the modification is compatible with the construction tree. The location of a simple primitive, e.g., a hole created by the subtraction of a cylinder, can be easily changed without modifying the CSG structure. Its weaknesses are: the difficulty to represent complex geometric shapes with free-form surfaces, the complete tree re-evaluation under modifications and the non-uniqueness of the tree with regard to a given shape.

Boundary representation (B-Rep)
In a B-Rep representation, the CAD kernel processes the skin of the object and the inside/outside location of material. The B-Rep model contains the result of the operations, i.e., the information defining the shape of the solid (see Figure 1.3b). The volume of a solid is represented by a set of surfaces describing its boundary. Two categories of information are stored in a B-Rep model, a topological structure and a set of geometric entities:
• The geometric information: it consists of a set of surfaces defining the boundary of the solid and locating it in 3D space. These surfaces are bounded by trimming curves;
• The topological information: this data structure enables the expression of the mandatory topological properties, i.e., closure and orientation, that lead to the description of shells, faces, wires, edges and vertices, expressing the adjacency relationships between the topological entities. It can be represented using incidence graphs such as face-edge and edge-vertex graphs.

In a B-Rep model, the set of surfaces is closed, and Euler operators express the necessary conditions to preserve the consistency of a solid's topology during modifications. The advantage of this representation lies in its ability to use non-canonical surfaces, i.e., NURBS, allowing a user to represent more complex shapes than a CSG representation. Among the disadvantages of B-Rep models, the representation of the solid's boundary contains information only about its final shape, and this boundary is not unique for a given shape. Today, the B-Rep representation is widespread in most CAD geometric modelers and it is associated with a history tree to enable the description and use of parametric models. Nowadays, CAD modelers incorporate the B-Rep representation as well as Boolean operators, and the B-Rep representation is the main representation used in aeronautical DMUs. In this thesis, the input CAD model of a 3D component is considered as extracted from a DMU and defined as a solid via a B-Rep representation.

Representation of manifold and non-manifold geometric models
To understand the different properties used in CAD volume modelers and Computer Aided Engineering (CAE) modelers, the notions of manifold solid and non-manifold objects have to be defined. One of the basic properties of a CAD modeler representing solids is that the geometric models of 3D components have to be two-manifold to define solids. The condition for an object to be two-manifold is that, 'at every point of its boundary, an arbitrarily small sphere cuts the object's boundary in a figure homeomorphic to a disc'.
This condition ensures that a solid encloses a bounded partition of the 3D space and represents a physical object. Practically, in a B-Rep representation, the previous condition partly reduces to another condition: every edge of a manifold solid must be adjacent to exactly two faces. Because a solid is a two-manifold object, its B-Rep model always satisfies the Euler-Poincaré formula, as well as all the associated operators performing the solid's boundary transformations required during a modeling process:

v − e + f − h = 2(s − g)     (1.1)

where v, e, f, h, s and g represent the numbers of vertices, edges, faces and hole-loops, the number of connected components (shells) and the genus of the solid, respectively.

Figure 1.4: Examples of non-manifold geometric models (non-manifold connections between volumes, surfaces and lines).

As a difference, an object is said to be non-manifold when it does not satisfy the conditions to be a manifold. To address the various needs of object representation, the concept of manifold has been extended to represent a wider range of shapes, as needed throughout a PDP. As illustrated in Figure 1.4, a non-manifold geometric modeling kernel incorporates the ability to describe geometric regions of different manifold dimensions, connected or not along other geometric regions of lower manifold dimensions. Consequently, an edge can be adjacent to more than two faces. However, some basic properties of solids are no longer valid, which increases the difficulty of defining the consistency of such models. In the context of Computer Aided Design to Finite Element Analysis(es) (CAD-FEA), this is also referred to as 'cellular modeling' [TNRA14] and few geometric modeling kernels natively incorporate this category of models [CAS14]. These models are commonly used in structural analysis where surfaces often intersect along more than one edge (see Figure 1.11c). Therefore, CAE software proposes data structures to generate non-manifold geometry. However, most commercial FEA software contains manifold geometric modelers with extensions to model non-manifold objects, which does not bring the desired end-user performances. Here, the input CAD models are considered as manifold solids, whereas the models generated for FEA can be non-manifold.
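Equation (1.1) provides a quick sanity check on the topological consistency of a two-manifold B-Rep solid. The short Python sketch below is an illustration only, not part of any CAD kernel; the entity counts are hypothetical but correspond to a cube and to a solid torus whose boundary is meshed with quadrilateral faces.

def euler_poincare_holds(v, e, f, h=0, s=1, g=0):
    # Eq. (1.1): v - e + f - h = 2 (s - g)
    # v: vertices, e: edges, f: faces, h: hole-loops (inner loops),
    # s: shells (connected boundary components), g: total genus.
    return v - e + f - h == 2 * (s - g)

# A cube: 8 vertices, 12 edges, 6 faces, no inner loops, one shell, genus 0.
assert euler_poincare_holds(v=8, e=12, f=6)

# A solid torus whose boundary is a 4 x 4 grid of quadrilateral faces:
# 16 vertices, 32 edges, 16 faces, one shell, genus 1, hence v - e + f = 0.
assert euler_poincare_holds(v=16, e=32, f=16, g=1)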
1.2.3 Complementary CAD software capabilities: Feature-based and parametric modeling

CAD software incorporates a volume geometric modeling kernel to create manifold solids and a surface modeler to enable the generation of two-manifold objects with free-form surfaces, i.e., two-manifold objects with boundary. CAD tools are essential for the generation of DMUs because they are used to design and to virtually represent the components of a product. In addition to the presentation of the geometric models used in CAD software (see Section 1.2.2), it is also crucial to mention some CAD practices contributing to the design of mechanical components. This will help understand the additional information associated with B-Rep models which is also available in a DMU.

Concept of feature: As stated in Section 1.2.2, B-Rep models only describe the final shape of solids. To ease the generation process of 3D mechanical models and to generate a modeling history, CAD software uses pre-defined form features as primary geometric regions of the object [Sha95]. A feature is a generic concept that contains shape and parametric information about a geometric region of a solid. Features can represent machining operations such as holes, pockets and protrusions, or more generic areas contributing to the design process of 3D components, like extrusions or revolutions.

Generative processes: The following chapters of this thesis use the term "generative processes" to represent an ordered sequence of processes emphasizing the shape evolution of the B-Rep representation of a CAD component. Each generative process corresponds to the generation of a set of volume primitives to be added to, or removed from, a 3D solid representing the object at one step of its construction.

Features Taxonomy: The features of solid modeling processes can be categorized into two sets [Fou07]:
• Features independent from any application:
– Geometric entities: points, axes, curves, sketches;
– Adding/removing material: extrusion, revolution, sweeping, Boolean operators;
– Surface operations: fillets, chamfers, fillings;
– Repetitions, symmetries.
• Features related to an application (hole drilling, sheet metal forming, welding).

As an example, a material addition extrusion feature is illustrated in Figure 1.5. Its generative process consists in drawing a 2D sketch using lines, circles or planar curves to define a planar face and then submitting it to a translation along an extrusion vector to generate a volume. The principle of feature-based modeling is to construct a part "from a simple shape to a complex one" and it is similar to the CSG principle. As illustrated in Figure 1.5 and, more generally, in Appendix A where a complete part design is represented, the user starts with the design of simple volumes to represent the overall shape of a component and progressively adds shape details such as fillets and holes to reach its final shape. This qualitative morphological approach to a solid's construction process can be partly prescribed by company modeling rules, but a user remains largely free to choose the features and their sequence during a construction process. This is a consequence of a design process enabling the user to monitor step by step, with simple features, the construction process of a solid. In CAD software, the sequence of features used to create a component is represented and stored in a construction tree (see Figure 1.5).

Dependencies between features and the construction tree: The construction tree of a solid is connected to the notion of parametric modeling. In addition to the feature concept, parametric modeling has been introduced in CAD software to enable the regeneration of a solid when the user wants to apply a local shape modification. With parametric modeling, the user input defines the geometric constraints and dimensions of a feature in relation with others. In most cases, the localization of a new feature is not expressed in the global coordinate system of the solid. A feature uses an existing geometric entity of the solid, e.g., a planar face, as the basis for a new sketch (see Figure 1.5). The sketching face creates another dependency between features, in addition to the parent/child relationships that are stored in the construction tree.

Conclusion
Solid modeling is a way to represent a digital geometric model of a 3D component.
The B-Rep representation used in commercial CAD software allows a user to design complex shapes, but it provides only a low-level volume description because it does not give a morphological description of a solid. As a complement, feature modeling is easy to learn for a CAD user because it allows him, resp. her, to naturally design mechanical components without an in-depth understanding of the CAD modeling kernel. Construction trees structure a 3D object using simple feature models, extending the use of B-Rep models with a history representing their construction processes. Parametric modeling is also efficient to produce a parameterized representation of a solid and enable easy modifications of some of its dimensions.

Figure 1.5: CAD construction process using form features (extrusions, cut extrusion, fillets, hole, mirror). The modeling sequence is stored in a construction tree.

Figure 1.6: Example of an aeronautical CAD assembly: root joint model (courtesy of Airbus Group Innovations).

1.3 Representation and modeling of an assembly in a DMU

Any mechanical system is composed of different components assembled together with mechanical joints. This section aims at presenting how an assembly is represented and created in a DMU. It underlines how an assembly is processed with a non-formal conventional representation. In particular, this section deals with the current content of a DMU extracted from a PLM in an aeronautic industrial context, i.e., the data that are actually available for structural simulation.

1.3.1 Effective DMU content in aeronautical industry

Assembly structure in a CAD environment
In CAD software, an assembly is a structure that organizes CAD components into groups. Each component contains the geometric and topological data described in Section 1.2.2. Then, the component is instantiated in the assembly structure as many times as it should appear. Figure 1.6 represents an aeronautical structure (wing-fuselage junction of an aircraft) with a sub-assembly for each composite part and instantiations of standard components such as screws and nuts.

To create an assembly structure in 3D, the user iteratively positions components in 3D space relative to other components (axis alignment of holes, surface mating, ...). These connections between components, called position constraints, connect degrees of freedom of each component involved in the corresponding constraints. However, these constraints may not represent the common geometric areas connecting the corresponding components. For instance, to connect a screw with the through hole of a plate, a coaxiality constraint can be applied between the axis of the cylindrical surface of the screw and the hole axis, independently of any contact between the surfaces. The radii of the cylindrical surfaces of these two components are not involved, i.e., any screw can be inserted in the hole. This does not match the reality and can lead to assembly inconsistencies. In a CAD environment, the assembly structure is stored in a product tree which connects each of its components to others with assembly constraints.
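The notions of component definition, instantiation and product tree described above can be summarized with a small data-structure sketch. The following Python fragment is only an illustration under simple assumptions (a 4x4 placement matrix per instance, no assembly constraints); it is not the data model of any particular CAD system, and all names are hypothetical.

from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class ComponentDefinition:
    name: str                     # in a real CAD system this would reference the B-Rep solid

@dataclass
class Instance:
    definition: ComponentDefinition
    placement: np.ndarray         # 4x4 homogeneous transform locating the instance in the assembly frame

@dataclass
class Assembly:
    name: str
    instances: List[Instance] = field(default_factory=list)
    sub_assemblies: List["Assembly"] = field(default_factory=list)

# One definition, several instances: a standard screw placed four times along the x axis.
screw = ComponentDefinition("screw_M5")
root = Assembly("root_joint")
for i in range(4):
    t = np.eye(4)
    t[0, 3] = 20.0 * i
    root.instances.append(Instance(screw, t))
print(len(root.instances))        # 4 instances sharing the same geometric definition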
On top of the B-Rep representation, each component can contain complementary information such as a name or a product reference, a contextual description of the component's function, modification monitoring information, a color or a material designation. During a product design process, a CAD environment also offers capabilities to share external references to other components not directly stored in a product structure, to parameterize components' dimensions with mathematical formulas, ...

DMU evolution during a PDP
All along the product construction process in a PDP, its digital representation evolves. The information related to the product definition, such as its 3D geometric representation, gets modified. In addition, the product development process is shared among several design departments that address different engineering areas: the product mechanical structure, electrical systems, ... All these areas have to share their design information and sub-assembly definitions. Whether it is geometry or parameters, this information should be integrated into a common product representation. The DMU concept is a means to support the geometric definition and evolution of components while keeping the product assembly updated. As an example, depending on the maturity of the product, a DMU can be reduced to a simple master geometry at the early stage of the product design using functional surfaces or, it can contain the full 3D representation of all its components as required for manufacturing. All these needs require that a Product Data Management System (PDMS) be used to support the definition and evolution of a DMU. The PDMS structures the successive versions, the adaptations to customer requirements as well as the various technical solutions that can be collectively designated as variants. A product is represented as a tree referencing the CAD components and their variants. The various product sub-assemblies of the design teams, initially created in a CAD environment, are transferred and centralized in the PDMS. It is from this environment that an engineer can formulate a request to extract a DMU used as input model for his, resp. her, simulations.

Figure 1.7: Example of complex DMU assemblies from the Alcas project [ALC08] and the Locomachs project [LOC16]: complex component geometry and details, large number of junctions, size constraint (large number of components), various categories of bolted junctions, and a DMU reduced to a set of individual components with no positioning constraints.

The DMU content and management in the aeronautical industry: a pragmatic point of view.
Today, as described previously, a DMU stands for the reference geometric representation of a product used by structural and systems engineers. A DMU is the input model from which they generate their simulation models. In practice however, the information of a DMU extracted from the PDMS reduces to a set of CAD components positioned in 3D space with respect to a global reference frame and a tree structure representing a logical structure of this product [Fou07, FL∗10]. This significant loss of information originates from:
• The size and the fragmented location of a DMU: it contains a large number of components (see Figure 1.7) created by different design teams during a PDP, e.g., in aeronautics, the extraction of a DMU from the PDMS requires one day (no centralized data);
• The robustness of a DMU: positioning constraints between components are not available.
Components are standalone objects in a common reference frame. A DMU is an extraction from the PDMS at a given time. During the evolution of a PDP, engineers cannot maintain the interfaces between the geometric models of the components; the corresponding geometric constraints, if they were set, have to be removed because their management becomes too complex. As an example, if a component is removed, the removal of its corresponding geometric constraints could propagate other modifications throughout the whole assembly. Additionally, the number of geometric constraints gets very large for complex products and their consistency is still an open problem [LSJS13]. It can reach more than three hundred for an assembly with fewer than fifty components. This is what motivates the solution to locate all components in a global coordinate system. Consequently, each component is positioned independently of the others, which increases the robustness of the DMU with regard to its modifications, even though the consistency of the DMU becomes more difficult to preserve.

As a result, if a product is complex and has a large number of components created by different design teams during a PDP, the PDMS does not contain information specifying assemblies. Information about proximity between components is not available. The relational information between parts in an assembly created in a CAD environment, e.g., the assembly constraints, is lost during the transfer between the CAD environment and the PDMS. The DMU is restricted to a set of individual CAD components and a tree decomposition of a product. As Drieux explained in his DMU analysis [Dri06], a DMU is a geometric answer to design specifications; it does not contain multiple representations adapted to the various users' specifications during a PDP, including the structural analysis requirements.

1.3.2 Conventional representation of interfaces in a DMU

In order to carry on the pragmatic analysis of a configuration where a simulation engineer receives a DMU as input model, this section defines the notion of assembly interfaces between components and their conventional representations in industry.

Lacking conventional representation of components
Shahwan et al. showed [SLF∗13] that the shapes of digital components in a DMU may differ from the shape of the physical object they represent. These differences originate from a compromise between the real object shape, which can be tedious to model, and the need for shape simplifications to ease the generation of a DMU. This is particularly true for standard parts such as components used in junctions (bolted, riveted). Given their large number, geometric details, e.g., a threaded area, are not represented because they would unnecessarily complicate the DMU without improving its efficiency during a design process. Since there is no standard geometric model of assembly components used in 3D junctions, each company is likely to set its own representation.

Figure 1.8: Representation of a bolted junction in a structural DMU of an aircraft: one line highlights the head position and another line defines the shank of the fastener.

As illustrated in Figure 1.8, in large aeronautical DMUs, bolted junctions as well as riveted junctions may be represented with a simplified representation defined with two perpendicular lines.
This representation is sufficient to generate an equivalent volume model using basic information on bolt type, nominal diameter and length. However, no information exists about the connections between the bolt and the junction's components the bolt is related to. There is neither a logical link between the screw and nut and the plates they connect, nor a geometric model of the interface between the screw and the nut forming the junction. More generally, this lack of geometric interfaces exists for every interface between assembly components. As explained in Section 1.5.3, this poor representation complicates the generation of equivalent simulation models. This complexity issue also applies to deformable components, which are represented under an operating configuration. (Here, deformable components refer to a category of components with plastic or rubber parts whose displacements under loading conditions are of a magnitude such that the designer takes them into account when setting up the DMU, as opposed to metal parts whose displacements are neglected.)

Definition of interfaces between components
Based on the description of the DMU content given in Section 1.3.1, there is no explicit information about the geometric interaction between components. Even the content of geometric interfaces between components can differ from one company to another. In this thesis, the definition of Conventional Interface (CI) of Léon et al. [FL∗10, LST∗12, SLF∗13] is used. From their DMU analysis, they formalized the representation of conventional interfaces between components. To cover all the possible interactions between two B-Rep objects C1 and C2, they classified the CIs into three categories (see Figure 1.9):

Figure 1.9: Classification of Conventional Interfaces (CI) into contact (surface interface ∂C1 ∩ ∂C2), interference (volume interface ∂C1 ∩* ∂C2) and clearance categories.

1. Contacts: these are configurations where the boundary surfaces of the components C1 and C2 and the relative positions of these components are such that: ∂C1 ∩ ∂C2 = S ≠ ∅ and ∂C1 ∩* ∂C2 = ∅, where ∂C1 and ∂C2 represent the boundary surfaces of C1 and C2, respectively. S refers to one or more geometric elements that can be surface-type, line-type or point-type. Figure 1.9a illustrates contacts between CAD components;

2. Interferences: these are configurations where the boundary surfaces of the components C1 and C2 and the relative positions of these components are such that: ∂C1 ∩* ∂C2 = C12 ≠ ∅, where C12 is the intersection volume. Interferences are detected and analyzed during a DMU review to ensure that there is no physical integration problem between components. However, according to Léon et al., interferences, also named clashes, may occur when components' shapes are simplified with respect to their physical models (see Figure 1.9b), or in case of incorrect relative positions of components. Interferences resulting from such configurations make a DMU virtually inconsistent, which requires the user's analysis. Interferences between standard components generate specific classes of interferences, which is used to process DMUs in the present work;

3. Clearances: they represent 3D domains without a clear geometric definition, which are difficult to identify and to represent (see Figure 1.9c). In this work, clearances are considered as functional clearances and are identified as design features. A small classification sketch based on these three categories is given below.
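To make the three categories concrete, the following Python sketch classifies a pair of components from the two quantities used above, the shared boundary ∂C1 ∩ ∂C2 and the regularized intersection volume. The geometric kernel (boundary_intersection, regularized_intersection) is hypothetical; a real implementation would rely on a B-Rep modeling kernel, and clearance detection would additionally use a proximity threshold rather than the mere absence of shared geometry.

from enum import Enum

class CIType(Enum):
    CONTACT = "contact"
    INTERFERENCE = "interference"
    CLEARANCE = "clearance"

def classify_interface(c1, c2, kernel):
    # kernel.regularized_intersection(c1, c2): volume C12 shared by the two solids (the ∩* result)
    # kernel.boundary_intersection(c1, c2): entities shared by the boundaries, i.e., dC1 ∩ dC2
    volume = kernel.regularized_intersection(c1, c2)
    if not volume.is_empty():
        return CIType.INTERFERENCE      # overlapping volumes, i.e., a clash
    boundary = kernel.boundary_intersection(c1, c2)
    if not boundary.is_empty():
        return CIType.CONTACT           # surface-, line- or point-type shared entities only
    return CIType.CLEARANCE             # no shared geometry; treated as a functional clearance here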
The concept of CI can be used in our assembly context since it is independent of any modeling context. Section 3.3 explains how CIs can be extracted from a DMU.

Conclusion
The development and use of DMUs in a PDP bring 3D models within reach of engineers. The DMU extracted from a PLM contains the complete geometric representation of the product using a B-Rep representation and CAD construction trees. Complex shapes are directly available without having to be rebuilt in a simulation software environment. However, due to considerations of robustness and size, a DMU is reduced to a set of isolated components without an explicit geometric representation of the interfaces between them.

1.4 Finite Element Analysis of mechanical structures

This section aims at introducing some principles of the Finite Element Method (FEM) for structural analysis. Because the scope of this thesis covers the pre-processing of geometric data for FEA, this section does not detail the resolution method but focuses on the input data required by FEA. First of all, it introduces the concept of a mechanical model for FEA. Then, it enumerates the data needed for FEA, ranging from the geometric model of each component, i.e., a FE mesh, to the representation of connections between meshes, which are required to propagate displacement and stress fields over the assembly and stand for the mechanical models of the interfaces between components. Subsequently, it describes the industrial approach to various mechanical analyses of an aircraft structure at different levels of physical phenomena, from large coarse models representing global deformations to detailed small assemblies devoted to the analysis of stress distributions, as examples. Within each category, the geometric models representing the components and their connections are described.

1.4.1 Formulation of a mechanical analysis

The goal of a numerical simulation of the mechanical behavior of a structure is to anticipate or even supersede a physical test. It allows engineers to simulate the mechanical behavior of a virtual structure, i.e., without the existence of the real structure.

The mechanical analysis process
Independently of the resolution method, e.g., the finite element method or the finite difference method, as stated in Fine [Fin01], the mechanical analysis process may be split into three main steps (see Figure 1.10):

Figure 1.10: Process flow of a mechanical analysis. Step 01, pre-processing: formulate the model mechanical behavior from hypotheses relative to the simulation objective and adapt the input DMU data (geometry, physical properties, boundary conditions) to the resolution method requirements. Step 02, resolution: apply the resolution method (e.g., the Finite Element Method) to the simulation model and obtain results. Step 03, post-processing: analyze the results, determine their accuracy, discuss with the design team, validate and integrate them in the PDP.

1. The formulation of the model behavior: just as in a physical test, each virtual simulation has a specific simulation objective (type of behavior to be observed, such as displacements in a particular area, maximal loads under a prescribed mechanical behavior, accuracy of the expected results, ...). As Szabo [Sza96] describes, the first formulation phase consists in building a theoretical model integrating the mechanical behavior laws representative of the physical system. The analyst specifies and identifies the key attributes of the physical system and the characteristic values of the mechanical behavior: the simulation hypotheses. Then, the analyst applies a
set of modeling rules related to the simulation hypotheses in order to create a reduced numerical simulation model ready to be sent to the resolution system. The choice of the modeling rules implies decisions on the mechanical behavior of the structure. When defining the shape of the structure derived from its real shape (see Section 1.4.2) and setting up the constraints and hypotheses related to analytical resolution methods, the mechanical engineer limits his, resp. her, range of observations to the simulation objectives. In practice, the formulation of the model behavior may be viewed as the transformation of the DMU input, which is regarded as the digital representation of the physical structure, into the numerical simulation model. Section 1.5 gives details of this crucial integration phase;

2. The resolution of the model behavior: once the simulation model is generated, the mechanical engineer launches the resolution process. This phase is performed automatically by the CAE software. Currently, the main resolution method used for structural analysis is the FEM, which sets specific constraints at the level of the mesh generation process;

3. The results analysis: once the resolution process has ended, the mechanical engineer has to analyze the results, i.e., the computed solution fields and the output parameters that can be derived from these fields. He, resp. she, determines the solutions' accuracy, discusses with design teams to decide about shape modifications, validates the results and integrates them in the PDP.

Figure 1.11: Example of FE mesh models: (a) initial CAD model of a structure, (b) meshed model with 3D volume elements, (c) meshed model with idealized 2D shell elements.

1.4.2 The required input data for the FEA of a component

Although other resolution methods exist (analytical or numerical, e.g., the finite difference method), the FEM is the method most widespread in industry for mechanical simulation. The FEM is a general numerical method dedicated to the resolution of partial differential equations and its applicability is not restricted to structural simulation; it also covers thermal analysis, electromagnetism, thermodynamics, ... Many documents relate in detail the principles of this method; the reference books of Zienkiewicz [ZT00] and Bathe [Bat96] are among them. This section concentrates on the data requirements of the method to formulate a simulation model and addresses pragmatically how the engineer can collect/generate these data.

Geometry: Finite Element Mesh
To solve partial differential equations applied to a continuum, i.e., a continuous medium, the FEM defines an equivalent integral formulation on a discretized domain. This discrete domain is called a Finite Element mesh and is produced by decomposing the CAD model representing the structure into geometric elements of simple and well-known geometry, i.e., triangles, tetrahedra, ..., forming the finite elements, whose individual physical behavior reduces to a simple model (see Figure 1.11).
When the structure is subjected to a set of physical constraints, the equilibrium equations of the overall structure percolate through all the elements once they have been assembled in matrix form. The model size, i.e., the number of finite elements, has a direct influence on the computation time needed to obtain the solution fields, and it may introduce approximation errors if not set correctly. The engineer must efficiently identify the right level of mesh refinement related to the mechanical phenomenon he, resp. she, wants to observe. In practice, to ease the meshing phase, the input CAD geometry is simplified to adapt its shape to the simulation objectives and the mesh generation requirements. If this simplification is carried out properly, it not only generates a good mesh but also a mesh that is obtained quickly. This simplification phase incorporates shape transformations, and all their inherent issues are discussed in Section 1.5.

Finite Element choice and families
When setting up a simulation model, the choice of finite elements is essential. Each finite element has an approximation function (a polynomial function) which has to approximate locally, at best, the desired solution. As explained in Section 1.4.1, it is the engineer who chooses the type of finite element in adequacy with the prescribed simulation objectives. There are many types of finite elements to suit various applications and their selection is conducted during the early phase of the CAD model pre-processing. It is a matter of compromise between the geometry of the components, the desired accuracy of the simulation results as well as the computation time required to reach this accuracy. Table 1.1 presents the main categories of finite elements classified in accordance with their manifold properties (see Section 1.2.1).

Table 1.1: Categories of finite elements for structural analyses.
• 1D element (beam): long and slender sub-domain having two dimensions that are small compared to the third one (l1, l2 << L); these two dimensions define the beam section parameters;
• 2D element (shell, plate, membrane): thin sub-domain having one dimension which is small compared to the two others (e << l1, l2); this dimension defines the thickness parameter;
• 3D element (volume): sub-domain without any specific morphological property, which must be processed with a three-dimensional mechanical behavior.

Idealized elements
Based on the shell theory of Timoshenko [TWKW59], specific finite elements are available in CAE software to represent a thin volume, e.g., shell elements. These elements can significantly reduce the number of unknowns in FE models, leading to a shorter computation time compared to volume models. Also, using shell elements rather than volume ones gives access to different mechanical parameters: the section rotation and the stress distribution in the thickness are implicitly described in the element. Rather than discretizing a volume into small volume elements, it is represented by its medial surface (see Table 1.1, 2D element). The thickness becomes a numerical parameter associated with the element. Long and slender sub-domains can be processed analogously. A beam element is well suited to represent these volume sub-domains using an equivalent medial line (see Table 1.1, 1D element).
From the sections of these volumes, their inertia parameters are extracted and become numerical parameters assigned to the beam elements. Such elements imply a dimensional reduction of the initial volume: a reduction by one dimension for shells and by two dimensions for beams. This modeling hypothesis is called idealization. In a CAD-CAE context, idealization refers to the geometric transformation converting an initial CAD solid into an equivalent medial surface or medial line which carries the mechanical behavior of a plate or shell, or of a beam, respectively. This geometrically transformed model is called an idealized model. Such an example is given in Figure 1.11c. Idealized sub-domains are particularly suited to aeronautical structures, which contain many long and thin components (panels, stringers, ...). Using idealized representations of these components can even become mandatory to enable large assembly simulations, because software license upper bounds (in terms of number of unknowns) are exceeded when these components are not idealized. However, Section 1.5.3 illustrates that the practical application of such an idealization process is not straightforward. The sub-domains that are candidates for idealization are subject to physical hypotheses:
• The simulation objectives must be compatible with the observed displacements or stress field distributions over the entire idealized sub-domains, i.e., there is no simulation objective related to a local phenomenon taking place in the thickness or section of an idealized domain;
• The sub-domains satisfy the morphological constraints of the idealization hypotheses, e.g., a component thickness must be at least 10 times smaller than the other two dimensions of its corresponding sub-domain (see the sketch below).
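The morphological hypothesis above can be made concrete with a small classification helper. The Python sketch below is only an illustration under simplifying assumptions: each candidate sub-domain is characterized by three box-like dimensions, and the 10x ratio mirrors the slenderness criterion stated above; real components first require a decomposition into such sub-domains.

def idealization_candidate(dims, ratio=10.0):
    # dims: the three characteristic dimensions of a box-like sub-domain.
    # Returns 'shell' when one dimension is much smaller than the two others,
    # 'beam' when two dimensions are much smaller than the third, 'volume' otherwise.
    a, b, c = sorted(dims)                      # a <= b <= c
    if a * ratio <= b and a * ratio <= c and b * ratio > c:
        return "shell"                          # medial surface + thickness parameter
    if a * ratio <= c and b * ratio <= c:
        return "beam"                           # medial line + section/inertia parameters
    return "volume"                             # no dimensional reduction

print(idealization_candidate((4.0, 200.0, 300.0)))   # shell
print(idealization_candidate((8.0, 10.0, 400.0)))    # beam
print(idealization_candidate((50.0, 60.0, 80.0)))    # volume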
Material data, loads and boundary conditions
On top of the definition of a mesh geometry and its associated physical properties, e.g., sections, thicknesses, inertias, the FEM requires the definition of material parameters, loads and boundary conditions. Material data are associated with each finite element in order to generate the global stiffness matrix of the equivalent discretized sub-domains representing the initial CAD model. The material properties (homogeneity, isotropy, linearity, ...) are themselves inherent to the model of constitutive law representative of the component's mechanical behavior. In the case of a component made of composite material, the spatial distribution of the different layers of fibers should be carefully represented in the meshed model of this component. Loads and boundary conditions are essential settings of a mechanical simulation to describe the mechanical effects of other components on the ones of interest. Consequently, loads and boundary conditions are also part of the mechanical simulation pre-processing. A load can be a concentrated force applied at a finite element node, a pressure distributed over the surface of a set of finite elements or even a force field, e.g., gravity. Similarly, boundary conditions have to be attached to a particular set of nodes. Boundary condition settings interact with idealization processes (see Section 1.5.3), e.g., a force acting on an elongated side of a long slender volume is applied to a linear sequence of nodes of the idealized equivalent model defined as a beam model. Consequently, the boundary condition is also dimensionally reduced. In practice, an engineer defines the load and boundary condition areas over a component using partitioning operators prior to meshing the component.

1.4.3 FE simulations of assemblies of aeronautical structures

The FEM issues have been extensively addressed for standalone components and integrated in a PDP. However, the FE simulation target is now evolving toward assembly structures, which are under focus in what follows. An assembly can be regarded as a set of components interacting with each other through their interfaces. These interfaces contribute to mechanical functions of components or sub-assemblies [BKR∗12, KWMN04, SLF∗13]. An assembly simulation model derives from shape transformations interacting with these functions to produce a mechanical model containing a set of sub-domains discretized into FEs and connected together to form a discretized representation of a continuous medium.

Interactions between sub-domains in assembly models and associated hypotheses
An assembly simulation model is not just a set of meshed sub-domains positioned geometrically in a global coordinate system. These sub-domains must be connected to each other to generate global displacement and stress fields over the assembly. To process every assembly interface (see Section 1.3.2), the user should decide which mechanical behavior to apply. Connections between components through their interfaces can be kinematic or physical and associated with physical data (stiffness, friction coefficient, ...) and material parameters, as necessary. The selection of connector types is subject to the user's hypotheses regarding the relative behavior of the sub-domains representing the components, e.g., relative motion and/or interpenetration. Here, a sub-domain designates either an entire component or a subset of it when it has been idealized. The connection types are synthesized in Table 1.2.

Table 1.2: Connector entities available in CAE software.
• With interpenetration, with relative motion: deformable junction models, used to model complete mechanical connections with deformable connector elements, i.e., springs, dampers, ...;
• With interpenetration, without relative motion: rigid junction models, used to model rigid connections, i.e., ball joints, welds, rivets, bolts, ...;
• Without interpenetration, with relative motion: normal and tangential contact, used to model the normal and tangential stresses (friction) transmitted between two solids in contact during the simulation;
• Without interpenetration, without relative motion: kinematic constraints, used to model relationships expressed as displacements/velocities between nodes, e.g., tie constraints, rigid bodies, ...

The introduction in a FEA of a relative motion between components (contact conditions) considerably increases the complexity of this analysis. Indeed, contact is not a linear phenomenon and requires the use of a specific nonlinear computational model, which slows down the simulation. Setting up a contact is a strong hypothesis, which leads to the definition of the potential contact areas on both components. The sets of FEs in contact must be carefully specified. On the one hand, they should contain sufficient elements, i.e., degrees of freedom, to cover the local phenomenon while limiting the interpenetration between meshes. On the other hand, they should not contain too many elements, to avoid increasing the computation time unnecessarily.
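The decision summarized in Table 1.2 can be read as a simple two-flag classification. The following Python fragment is merely a reading aid for that table (a hypothetical function and labels, not a CAE API): given the hypotheses on relative motion and interpenetration between two sub-domains, it returns the corresponding connector family.

def connector_family(relative_motion: bool, interpenetration: bool) -> str:
    # Mirrors the four cells of Table 1.2.
    if interpenetration:
        if relative_motion:
            return "deformable junction model (springs, dampers, ...)"
        return "rigid junction model (ball joints, welds, rivets, bolts, ...)"
    if relative_motion:
        return "normal and tangential contact (friction)"
    return "kinematic constraint (tie, rigid body, ...)"

print(connector_family(relative_motion=False, interpenetration=False))  # kinematic constraint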
During the early design phases, in addition to idealized models of components, it is common to perform simulations using simplified representations of junctions. In this case, the junction simulation objectives aim at transferring plate loads throughout the whole assembly and FE beam elements are sufficient to model the bolts' behavior. In this configuration, the whole group of components taking part in the junction is replaced by a unique idealized model (see Figure 1.12). When applied to the FEA of large aeronautical structures, these models are called FE connections with fasteners and they are widely used to integrate component interactions with or without contact conditions. A fastener connection may be applied either to mesh nodes or may be mesh-independent, i.e., a point-to-point connection is defined between surfaces prior to the mesh generation process.

Figure 1.12: Example of a FE fastener simulating the behavior of a bolted junction using beam elements: a fastening point is defined with a region of influence on each of the connected components A, B and C.

Interactions between simulation objectives and the simulation model preparation process
Simulation objectives drive the shape transformations of CAD solids and interact with the simulation hypotheses to model connections between components. During a PDP, simulations may be used at various steps of a design process to provide different information about the mechanical behavior of components and sub-systems. Based on Troussier's classification [Tro99], three simulation objectives are taken as examples in Table 1.3 to illustrate how simulation objectives influence the idealization process and the models of interactions between components as part of different simulation models.

Table 1.3: Examples of interactions, or even dependencies, between simulation objectives and interfaces as well as component shapes.
• Pre-dimensioning and design choices. Simulation objectives: determine the number of junctions, a component thickness or material, ... Internal connections (interfaces): physical junction simplified, no contact (rivet and pin models associated with fasteners). Components' shape: large number of components; idealized, thin parts represented as shell models. Simulation model: linear.
• Validation of mechanical tests. Simulation objectives: analyze the distribution of the stress field in a structure, locate possible weaknesses. Internal connections (interfaces): physical junction simplified or use of a volume patch model; contact interactions between components. Components' shape: simplified (shell models) for large assemblies; volume or mixed-dimensional models accepted if the number of components is rather small. Simulation model: linear or nonlinear.
• Contribution to phenomenon understanding. Simulation objectives: understand the behavior of the structure to correlate with results after physical tests. Internal connections (interfaces): complete physical junction, use of a volume model with contact interactions. Components' shape: small number of components; complete volume model. Simulation model: nonlinear.

As an illustration of the influence of simulation objectives on the generation of different simulation models, Figure 1.13 presents two FE models derived from the same assembly structure of Figure 1.6:
• A simplified model used at a design stage of pre-dimensioning and design choices (see Figure 1.13a). The simulation objective is to estimate globally the load transfer between plates through the bolted junctions and to identify the critical junctions. This model contains idealized components with shell FEs in order to reduce the number of degrees of freedom. The junctions are modeled with FE fasteners containing beam elements and a specific stiffness model, i.e., Huth's law [Hut86]. This model contains 145 000 degrees of freedom and solving it takes 15 minutes, which allows the engineer to test various bolted junction layouts, part thicknesses and material characteristics;
• A full 3D FEM to validate design choices and check conformity with the certification process prior to physical testing (see Figure 1.13b). The simulation objectives contain the validation of the load transfer distribution among the bolted junctions and the determination of the admissible extreme loads throughout the structure. To adapt the FE model to these simulation objectives while repre-
senting the physical behavior of the structure, an efficient domain decomposition approach [CBG08, Cha12] uses a coarse 3D mesh (tetrahedral FEs) far enough from each bolted junction and a specific sub-domain around each bolted junction (structured hexahedral mesh) where friction and pretension phenomena are part of the simulation model. Here, the objective is not to generate a detailed stress distribution everywhere in this assembly but to observe the load distribution among bolts using the mechanical models set in the sub-domain, i.e., the patch, around each bolt. This model contains 2.1 × 10^6 degrees of freedom and is solved in 14 hours. Only one such model is generated, the one that corresponds to the physical test.

Conclusion
This section described the main categories of geometric models used in the FEA of structures. The simulation objectives drive the generation of the simulation models, i.e., FE meshes, boundary conditions, ..., used as input data for solving the FE models. In addition to each component definition, a FE assembly must integrate connection models between the meshed components. In Section 1.5, the different modeling hypotheses are analyzed with regard to the geometric transformations applied to the DMU input in order to obtain a new, adapted CAD model that can be used to support the generation of a FE mesh model.

Figure 1.13: Example of aeronautical FE models: (a) an idealized FE model (shell model) with fasteners, (b) a full 3D FE model (volume model) with a decomposition of the plates around each bolted junction and a refined mesh in the resulting sub-domain around each bolted junction.

1.5 Difficulties triggering a time consuming DMU adaptation to generate FE assembly models

This section aims at illustrating the complexity of the generation of FE models from DMUs. It highlights the differences between a component's shape in a DMU and the level of abstraction required for a given FEA, especially when the FEA requires an idealization process. This section characterizes and analyzes some specific issues about assembly simulation model preparation and exposes the lack of tools in industrial software, which leads engineers to process all the shape transformations manually and strongly limits the complexity of assemblies that can be processed in a reasonable amount of time.

1.5.1 DMU adaptation for FE analyses

Today, mechanical structures used in mechanical simulations contain a large number of components, each with a complex shape, bound together with mechanical junctions.
In the aeronautic industry, the dimensioning and validation of such structures lead engineers to face two digital challenges:
• The formulation of mechanical simulation models, as developed in Section 1.4, that can simulate the mechanical behavior of a structure and lead to the components' dimensioning as well as to the validation of the selected joint technologies (bolting, welding, riveting). During this phase, the engineers have to determine the most adapted simulation model regarding the physical phenomena to observe. They need to set up a FEA and its associated simulation hypotheses that produce the FE model which best meets the simulation objectives with the simulation software environment and technologies available. In practice, a simulation engineer supervises the DMU extraction processes to specify the components to be extracted and/or those having negligible mechanical influence with respect to the simulation objectives. Yet, this assessment is qualitative and strongly depends upon the engineer's know-how. Another issue about data extraction lies in the component updates during the PDP. Any geometric change of a DMU component has to be analyzed by the simulation engineer. Due to the tedious interactive transformations required, a trade-off has to be reached between the time required for the shape update in the FE model and the mechanical influence of the component with respect to the simulation objectives. Here, we face a qualitative judgment;
• The generation of appropriate component shapes from a DMU to support the generation of simulation models. As explained in Section 1.1, the DMU stands for the geometric reference of a product definition. Through the PLM software, engineers typically have access to DMUs containing the geometry of the 3D assembly defining the product and additional information, essentially about the material properties of each component. However, the extracted DMU representation is not directly suited for numerical FE simulations. Shape transformations are mandatory because designers and mechanical engineers work with different component shapes, with the result that a DMU cannot directly support the mesh generation of structural analysis models. These models must meet the requirements of the simulation hypotheses, which have been established when setting up the simulation objectives and the specifications of the mechanical model as part of the FEA. The component shapes generated for the FEA have to be adapted to the level of idealization derived from the specifications of the desired mechanical model, to the shape partitioning required for the application of the boundary conditions and loads, as well as to the level of detail of the shape with respect to the FE size required when generating the FE mesh. During the generation of mechanical assembly models, the engineer must also take into account the total number of components, the representation of multiple interfaces between components and a higher level of idealization and larger details than for standalone components, in order to produce coarse enough assembly models.
To increase the scope of physical assembly simulations, these two challenges lead engineers to use, from a geometric point of view, models with simplified 3D representations based on idealized shells rather than representations based on solids and, from a computational mechanics point of view, models with specific component interface models. These targets require specific treatments during the preparation process of a simulation. Now, the purpose is to describe the major difficulties encountered by engineers during the preparation of assembly simulation models.

1.5.2 Interoperability between CAD and CAE and data consistency

The first difficulty in generating assembly simulation models derives from the interoperability between the CAD and CAE systems. CAD tools were initially developed in the 60s to help designers model solids for applications such as machining or free-form surfaces. CAD has evolved along with CAM (Computer Aided Manufacturing), which has driven the functionalities of CAD software. However, simulation software has evolved independently. CAD systems do not support a full native integration of simulation preparation modules. The current practice is to export a DMU to subcontracting companies in charge of the simulation pre-processing, which themselves use specialized CAE software to read and transform the geometry of the CAD components. Each of these two categories of software (CAD and CAE) efficiently supports its key process. CAD software is efficient to robustly manage and intuitively modify B-Rep solids and to generate large assembly models, but it contains only basic meshing strategies and most of it is not able to model non-manifold objects. CAE software is dedicated to simulation processes; it provides capabilities to describe non-manifold geometry (useful for idealized models) but offers limited geometric modeling capabilities. It incorporates robust meshing tools (with topological adaptation capabilities) and extensive capabilities to describe contact behaviors, material constitutive laws, ... However, CAE software relies on a different geometric kernel than CAD, which breaks the link between them and leaves open the need for shape transformation operators. Also, the transfer of a component from a CAD to a CAE environment has a severe impact on the transferred information. The geometry has to be translated during its import/export between software packages that use different data structures and operators. This translation can be achieved through a neutral format like STEP (Standard for The Exchange of Product model data) [ISO94, ISO03]. However, this translation may lead to solid model inconsistencies resulting from the different tolerance values used in the respective geometric modeling kernels of CAD and CAE software. These inconsistencies may prevent the use of some transformation operators, involving manual corrections. Additionally, the coherence of the input assembly data is crucial. An assembly containing imprecise spatial locations of components and/or component shapes that do not produce consistent CIs (see Section 1.3.2) between components, or even the non-existence of a geometric model of some components (such as shim components, which are not always designed, as illustrated in Figure 1.14), implies their manual repositioning or even their redesign to meet the requirements of the simulation model.
Figure 1.14: Illustration of a shim component which does not appear in the DMU model: a functional gap is left to assemble the components and a shim component (not designed in the DMU) fills this gap. Shim components are directly manufactured when the structural components are assembled.

In the proposed approach, the input DMU is assumed to be free of the previous inconsistencies and, therefore, it is considered as coherent.

1.5.3 Current operators focus on standalone components

To transform the shape of an initial B-Rep CAD model of a standalone component into a new one as required for its simulation model, the mechanical engineer in charge of the simulation pre-treatment sequentially applies different stages of shape analysis and geometric transformations. His, resp. her, objective is to produce a new CAD model that can support the mesh generation process. This mesh must be consistent with the simulation objectives and produced in a reasonable amount of time. Based on the simulation objectives reduced to this component, the engineer evaluates qualitatively, and a priori, the interactions between its boundary conditions and its areas of simulation observation, e.g., areas of maximum displacements or maximum stresses, to define whether or not some sub-domains of this component should be suppressed or idealized. Currently, engineering practices iteratively apply interactive shape transformations:
1. Idealizations, which are the starting transformations because they are of the highest shape transformation level since they perform manifold dimension reductions;
2. Detail removal comes next, with the topological and skin detail categories [Fin01] that can also be grouped together under the common concept of form feature;
3. Mesh generation requirements leading to solid boundary and volume partitioning are the last step of shape transformations; they can be addressed with the so-called 'virtual topology' operators or, more generally, meshing constraints [She01, FCF∗08].

Figure 1.15: Illustration of a manual process to generate an idealized model: (a) initial solid superimposed with its idealized model, (b) iterative process using face pairing identification and mid-surface extensions to connect mid-surfaces (1: extract pairs of faces from the CAD geometry; 2: generate a medial face from each pair of faces (automatic); 3: connect all medial faces together; 4: assign a thickness/offset to each medial surface).

Commercial software already provides some of these operators to adapt DMUs to CAE processes, but they are restricted to simple configurations of standalone components. A few software tools, like Gpure [GPu14], offer capabilities to process specific DMU configurations using large facetted geometric models. Component shape transformations, which are the current target of high-level operators, are reduced to manual interactions to apply defeaturing operations on CAD parts, such as blend removal [ZM02, VSR02] and shape simplification [LAPL05], or to directly remove features on polyhedral models [MCC98, Tau01, LF05]. In all cases, the flow of interactions is monitored by the engineer. This results in very tedious and time consuming tasks requiring a fair amount of resources. When an idealization is required, engineers can create the resulting mid-surface with a manual and qualitative identification of face pairs [Rez96] or using a medial axis surface generation process [ABD∗98, AMP∗02]. However, information in between idealizable areas is not available and the engineer has to manually create the connections by extending and trimming mid-surfaces, which is highly tedious and relies also on his, resp. her, mechanical interpretation. Figure 1.15 illustrates a manual idealization process where the user identifies face pairs, then generates mid-surfaces and manually creates new faces to connect medial faces together while locating the idealized object as much as possible inside the initial volume. Applied to complex shapes, e.g., aircraft structure machined parts, this process flow is highly time consuming as a consequence of the numerous connection areas required and can hardly be automated because slight
When an idealization is required, engineers can create the resulting mid-surface with a manual and qualitative identification of face pairs [Rez96] or using a medial axis surface generation process [ABD∗98, AMP∗02]. However, information in between idealizable areas is not available and the engineer has to manually create the connections by extending and trimming mid-surfaces, which is highly tedious and also relies on his, resp. her, mechanical interpretation. Figure 1.15 illustrates a manual idealization process where the user identifies face pairs, then generates mid-surfaces and manually creates new faces to connect the medial faces together while locating the idealized object as much as possible inside the initial volume. Applied to complex shapes, e.g., aircraft structure machined parts, this process flow is highly time-consuming as a consequence of the numerous connection areas required, and it can hardly be automated because slight shape modifications strongly influence the process flow.

Creating idealized domains in areas where face pairing cannot be applied, rather than leaving a volume domain in these areas, is a common industrial practice: it reduces the number of degrees of freedom of a simulation model and limits the use of mixed-dimensional models, thus avoiding transfers between shell and volume finite elements, which are not recognized as a good mechanical model. A volume mesh in connection areas is only beneficial if it brings a gain in accuracy, pre-processing or simulation time. Today, generating volume meshes in connection areas requires many manual interventions because these volume shapes can be quite complex. Often, the main difficulty is to partition the initial object into simple volumes to generate structured meshes. The lack of robust and generic operators results in a very time-consuming CAD pre-processing task. These geometric operators are analyzed in detail in Chapter 2 to understand why they are not generic and robust enough.

1.5.4 Effects of interactions between components over assembly transformations

The amount of shape transformations to be performed increases significantly when processing an assembly. The engineer has to reiterate numerous similar interactive operations on series of components, and the number of such components is large. Unlike the model of a standalone component, which has no adjacent component, an assembly model must be able to transmit displacements/stresses from one component to another. Therefore, the preparation of an assembly model, compared to a standalone component, implies a preparation process of the geometric interfaces between components. Consequently, to obtain a continuous medium, the engineer must be able to monitor the stress distribution across components either by adding kinematic constraints between components or by prescribing a non-interpenetration hypothesis between them through physical contact models. Thus, modeling hypotheses must be expressed by the engineer at each component interface of an assembly. Today, the interactive preparation of the assembly depicted in Figure 1.13 requires five days of preparation to produce either an idealized model or a model based on simplified solids. When looking at this model, some repetitive patterns of groups of components can be observed. Indeed, these patterns are 45 bolted junctions that can be further subdivided into 3 groups of identical bolted junctions, i.e., junctions with the same diameter.
Each group can be further subdivided in accordance with the number of components tightened. The components forming each of these attachments belong to a same function: holding tight in position and transferring forces between the plates belonging to the wing and the fuselage. While a standalone component contributes to a function, an assembly is a set of components that fulfill several functions between them. During an interactive simulation preparation process, even if the engineer has visually identified repetitive 36Chapter 1: From a DMU to FE Assembly Models: current practices configurations of bolts, he, resp. she, has to transform successively each component of each bolt. A property, by which some components share similar interactions than others and could be grouped together because they contribute to the same function, cannot be exploited because there is no such functional information in the DMU and the geometric models of the components are not structured with their appropriate boundary decomposition to set up the connection with their function, e.g., imprint of contact areas are not generated on each component boundary and the contact areas are connected to a function. Thus, the engineer has to repeat similar shape transformations for each component. However, if the geometric entities contributing to the same function are available, grouped together and connected to their function before applying shape transformations, the preparation process could be improved. For instance, bolted junctions would be located and transformed directly into a fastener model through a single operator. Further than repetitive configurations, it is here the impossibility to identify and locate the components and geometric entities forming these repetitive patterns that reduces the efficiency of the preparation process. Processing contacts Hypothesizing the non-interpenetration of assembly components produces non linearity and discontinuities of the simulation model. In this case, the engineer must locate the potential areas of interpenetration during the analysis. Due to the lack of explicit interfaces between components in the DMU, all these contact areas must be processed interactively. At each contact interface, the analyst has to manually subdivide the boundary of each component to generate their geometric interface and then, assign mechanical parameters, such as a friction coefficient, to this interface. In the use-case represented in Figure 1.6, every bolted junction contains between 5 and 7 geometric interfaces at each of the 45 junctions, which amounts to 320 potential contact conditions to define interactively. To avoid these tedious operations, in a context of non linear computations, there is a real need to automate the generation of contacts models in assembly simulations. This automation can be applied to a DMUs with the: • Determination of geometric interface areas between components, i.e., – Localize geometric interfaces between components likely to interpenetrate during the simulation; – Estimate and generate the extent of contact areas over component boundaries. Meshed areas of the two components can be compatible or not depending on the capabilities of CAE software; • Generation of functional information to set the intrinsic properties of contact models, i.e. 
– Define the friction parameters;
– Define the kinematic relations between component meshes in contact areas with respect to the dimensional tolerances between surfaces.

Figure 1.16: Example of contact model for a FE simulation (a bearing and a shaft share the same nominal diameter in the DMU; the functional tolerance, i.e., loose fit, fitted or snug fit, is the user's choice; CAE pre-processing produces a simplified representation of the bearing and a simulation model with a friction contact over the shaft/bearing contact area).

Figure 1.16 exemplifies a contact between a shaft and a bearing. Commonly, a DMU exhibits CIs [SLF∗12, SLF∗13] where the components' representations can share the same nominal diameter while fulfilling different functions according to their fitting (clearance, loose fit, snug fit), thus requiring different settings in their respective FE contact models. As a result, DMUs do not contain enough information to automate the generation of contact models. FE models need geometric and functional information about component interfaces to delineate contact areas as well as to assign contact model parameters (a minimal sketch of the interface-localization step is given at the end of this section).

Contribution of component functions to the simulation preparation

To automatically handle these repetitive configurations related to components contributing to the same function in an assembly, the simulation preparation process must be able to identify these functions from the input DMU. Currently, the engineer is unable to automate these repetitive tasks because he, resp. she, has no information readily identifying connections in the assembly. Simulation models chosen by the engineer in a CAE library to replace the junctions are geometrically simple, and basic interactive operators are available to achieve the necessary shape transformations. As shown in Figure 1.12, an idealized model of a bolted connection modeled with a fastener consists in a set of points connected by line elements to describe the fastener. Using a mesh-independent fastener, the points representing the centers of the bolt holes in the tightened components do not even need to coincide with a surface mesh node. These idealization transformations are rather simple locally, given the component shapes. Hence, the challenge is neither the geometric complexity nor the mesh generation. Indeed, it lies in the term 'bolted junction', i.e., in identifying this geometric set of components and generating geometric relationships between areas of their boundaries. The issue consists in determining the function of each component in an assembly in order to group the components in accordance with identical functions and to make decisions about modeling hypotheses (simplification, idealization) on the component shapes associated with these identified functions.

Conclusion

Shape transformations taking place during an assembly simulation preparation process interact with simulation objectives, hypotheses and functions attached to components and to their interfaces. Improving the robustness of the geometric operators applied during simulation preparation, and making them applicable not only to components but also to assemblies, is therefore a first objective to reduce the amount of time spent on assembly pre-processing.
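As a rough illustration of the interface-localization step discussed under 'Processing contacts' above, the sketch below tags, on two tessellated components, the facets that lie close enough to each other to form a candidate contact zone; it is a minimal, hypothetical Python/NumPy example (brute-force distances between facet centroids and an illustrative clearance value), not the operator developed in this work, and a practical implementation would rely on a spatial index rather than an all-pairs test.

```python
import numpy as np

def contact_candidate_facets(centroids_a, centroids_b, clearance=0.1):
    """Return the indices of facets of A and of B whose centroids are closer
    than `clearance` to a facet centroid of the other component; these facets
    delineate a candidate contact area to be imprinted on both boundaries."""
    a = np.asarray(centroids_a, float)
    b = np.asarray(centroids_b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)   # all-pairs distances
    return np.nonzero(d.min(axis=1) <= clearance)[0], np.nonzero(d.min(axis=0) <= clearance)[0]

# Two small facet sets facing each other across a 0.05 gap:
plate = [[x + 0.5, y + 0.5, 0.0] for x in range(3) for y in range(3)]
lug = [[0.5, 0.5, 0.05], [1.5, 0.5, 0.05]]
print(contact_candidate_facets(plate, lug))
```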
1.6 Conclusion and limits of current practices about manual DMU adaption for FE assembly model generation

Currently, configuring rather complex assembly models for simulations is difficult to handle within the time scale prescribed by an industrial PDP. The pre-processing of CAD models derived from DMUs to produce FE models is far too long compared to the simulation time; it may represent 60% of the whole simulation process (see Section 1.4.1). Consequently, some simulations are not even addressed because their preparation time cannot fit within the schedule of a PDP, i.e., the simulation results would be available too late. Because the shape of CAD models obtained from the engineering design processes is adapted neither to the simulation requirements nor to the simulation solvers, shape transformations are mandatory to generate the simulation models. Consequently, DMUs cannot directly support the preparation process of structural analysis models. Today, the operators available in CAD/CAE software allow an engineer to perform either interactive geometric transformations, leading to very tedious tasks, or automated model generation restricted to simple models or to models containing only a restricted set of form features [TBG09, RG03, SGZ10, CSKL04, LF05, LAPL05, ABA02, SSM∗10, DAP00]. Unfortunately, these operators are still not generic enough to be adapted to analysts' needs, and the automated generation of complex component simulation models still raises numerous difficulties, especially when component idealizations must be performed.

To generate assembly simulation models, in addition to transforming its components, the engineer needs to generate all the connections between these components. Simulation models for assemblies strongly need geometric interfaces between components to be able to set up boundary conditions between them and/or meshing constraints, e.g., to satisfy conformal mesh requirements. Studying the content and structure of an assembly model, as available in a PDMS, reveals that product assemblies or DMUs are reduced to a set of components located in 3D space without geometric relationships between them. The information about the interfaces between components is generally very poor or nonexistent, i.e., real contact surfaces are not identified or are not part of each component boundary. As a consequence, it is common practice for engineers to generate the connections between components interactively, which is error prone due to the large number of repetitive configurations such as junction transformations. Finally, processing complex DMUs for the simulation of large assembly models is a real challenge for aircraft companies. The DMUs used in large industrial groups such as Airbus Group consist of hundreds of thousands of components. Thus, engineers in charge of such simulations can hardly consider applying the usual methods involving manual processing of all components as well as their interfaces. To meet the needs for large assembly simulation models, improvements in processing DMUs are a real challenge in aircraft companies and it is mandatory to robustly speed up and automate, as much as possible, the required DMU transformations.
1.7 Research objectives: speed up the DMU pre-processing to reach the simulation of large assemblies

To improve the preparation process of large assembly simulation models, this thesis aims at defining the principles that can be set up to automate the shape adaption of CAD models for the simulation of large assembly structures, and at developing the associated shape transformation operators. The range of CAD models addressed is not restricted to standalone components but also covers large assembly structures. The tasks planned are mainly oriented toward the transformation of 3D geometric models and the exploitation of their associated semantics for the FEA of structural assemblies, applicable to static and dynamic analyses. The task breakdown is as follows:

• Analyze FE simulation rules to extract and classify modeling criteria related to user-defined simulation objectives;

• Based on the CAE discipline's rules, specifications and process structure, formalize shape transformation operators to increase the level of automation of component transformations as well as of the transformation of their geometric interfaces;

• Implement and validate idealization operators to transform assembly component shapes and assembly interfaces between components while preserving the semantics of the mechanical behavior intended for this assembly;

• Specify the transformation process monitoring and the methodology contributing to the generation of mechanical (CAE) models exploiting a functionally enriched DMU.

Prior to any automation, a first step outlined in Chapter 2 analyzes in detail the available operators and scientific contributions in the field of data integration and shape transformations for mechanical simulations. The objective is to understand why the current operators and approaches are not robust enough to be applied to aeronautical assemblies. From this analysis, Chapter 3 refines the thesis objectives and exposes a new approach to speed up the shape adaption of CAD assembly models derived from DMUs, as needed for FE assembly models. The proposed method is able to adapt a component shape to the simulation objectives and meshing constraints. It incorporates the automation of tedious tasks that are part of the CAD component idealization process, specifically the treatment of connections between idealizable areas. The proposed algorithms, detailed in Chapters 4 and 5, have to be robust, applicable to aeronautical CAD components, and they must preserve the semantics of the targeted mechanical behaviors. These operators contribute to an assembly analysis methodology, presented in Chapter 6, that generalizes assembly transformation requirements in order to prove the capacity of the proposed approach to address the generation of large assembly simulation models.

Chapter 2: Current status of procedural shape transformation methods and tools for FEA pre-processing

The transformation of DMUs into structural analysis models requires the implementation of methods and tools to efficiently adapt the geometric model and its associated information. Therefore, this chapter proposes a review of the current CAD-FEA integration related to data integration and shape transformations. In this review, the procedural transformations of CAD components are analyzed, from the identification of details to the dimensional reduction operations leading to idealized representations.
The geometrical operators are also analyzed with regard to the problem of assembly simulation preparation. Moreover, this chapter identifies that current geometric operators are lacking application criteria of simplification hypotheses. 2.1 Targeting the data integration level Chapter 1 described the industrial needs to reduce the time spent on assembly preparation pre-processing for FEA, now the objective of this chapter is to understand why the available procedural geometric modeling methods and operators still do not meet the engineers’ requirements, leading them to generate interactively their own models. Different approaches have been proposed for a better interoperability between CAD and CAE, which can be mainly classified into two categories [DLG∗07, HLG∗08]: • Integration taking place at a task level: It refers to the integration of activities of design and structural engineers, hence it relates to design and FEA methodologies and knowledge capitalization in simulation data management; 43Chapter 2: Current status of methods and tools for FEA pre-processing • Data integration level: It addresses data structures and algorithms performing shape transformations on 3D models of standalone components. More generally, these data structures and operators help connecting CAD and CAE software. To support the integration of simulation tasks into a PDP, Troussier [Tro99] explains that the knowledge involved in the generation of geometric models is not explicitly formalized. The simulation model definition and generation are based on the collective knowledge of some structure engineers. Therefore, the objective of CAD/CAE integration is not only to reduce the pre-processing time but also to decrease the level of expertise needed to choose and apply the correct transformations to CAD models. Eckard [Eck00] showed that the early integration of structural simulation in a design process could improve a PDP leading to a shorter time-to-market, which applies to assembly processing as well as to standalone components. Badin et al. [Bad11, BCGM11] proposed a specific method of knowledge management used in several interacting activities within a design process. According to them, structure engineers and designers collaborate and exchange design information. However, the authors assume that relationships between dimensional parameters of CAD and simulation models of components are available, which is not necessarily the case. Additionally, they refer to configurations where the shapes of components are identical in both the design and simulation contexts, which is not common practice for standalone components and hardly applicable to assemblies where the reduction of complexity is a strong issue. To help structure engineers, Bellenger [BBT08], Troussier [Tro99] and Peak [PFNO98] formalized simulation objectives and hypotheses attached to design models when setting up simulations. These objectives and hypotheses are subsequently used for capitalization and reuse in future model preparations. This integration at a task level underlines the influence of simulation objectives and hypotheses without setting up formal connections with the shape transformations required. Since the industrial problems addressed in this thesis focus on the robust automation of shape transformations, it seems appropriate to concentrate the analysis of prior research on the data integration level. 
These research contributions can be categorized into:

• Detail removals performed either before or after meshing a component [LF05, LAPL05, FMLG09, GZL∗10];

• Shape simplifications applied to faceted models [FRL00, ABA02];

• Idealization of standalone components [CSKL04, SRX07, SSM∗10, RAF11, Woo14] using surface pairing or Medial Axis Transform (MAT) operators.

Section 2.2 analyzes the first two categories of shape simplifications and Section 2.3 concentrates on the specific transformation of dimensional reduction, which is widely used to generate assembly FEMs as illustrated in Section 1.4. Section 2.4 explores morphological approaches, such as the geometric analysis and volume decomposition of 3D solids, to enforce the robustness of FE model generation. Finally, Section 2.5 addresses the evolution of procedural simplification and idealization during component pre-processing, from standalone components toward an assembly context.

2.2 Simplification operators for 3D FE analysis

In CAE applications, the removal of details to simplify a component before meshing it has led to local shape transformations based on B-Rep or polyhedral representations. These transformations create new geometric entities that can incorporate acceptable deviations of a FEA w.r.t. reference results. This section analyzes the different operators and methods aiming at identifying and removing the regions considered as details on 3D solids.

2.2.1 Classification of details and shape simplification

As explained in Section 1.4, the level of detail of a solid shape required for its FE mesh is related to the desired accuracy of its FEA. The removal or simplification of a sub-domain of a solid is valid when the associated FEA results still meet the accuracy constraint. Armstrong and Donaghy [DAP00] and Fine [Fin01] define details as geometric features which do not significantly influence the results of an FE simulation. Starting from this definition, Fine [Fin01] classifies the details under three categories:

• Skin details: they represent geometric regions which can be removed without changing either the 3-dimensional manifold property of the solid (see Section 1.2.1) or its topology (see Section 1.2.2). This category includes the removal of fillets, chamfers, bosses, etc.;

• Topological details: this category represents geometric regions which can be removed without changing the 3-dimensional manifold property of the solid, but whose removal modifies the solid's topology. For example, removing a through hole changes the topology of the solid and the number of hole-loops in the Euler-Poincaré formula decreases;

• Dimensional details: this category represents geometric regions whose removal locally reduces the manifold dimension of the solid along with a modification of its topology. This category is related to the idealization process, where entire solid models can be represented either with surfaces (dimensional reduction of 1), with lines (dimensional reduction of 2), or may even be replaced by a point (dimensional reduction of 3).

In this categorization, Léon and Fine [LF05] define the concept of detail from a physical point of view.
According to them, the result of a FEA can be evaluated with 'a posteriori error estimators' [EF11, BR78, LL83]. These error estimators characterize the influence of a discretization process, i.e., the FE mesh generation, over the solution of the partial differential equations describing a structure's behavior. However, as explained in Section 1.4.3, the behavior simulation of large assemblies is heavily based on idealized models to reduce the size of simulation models as much as possible and to improve their use during a PDP. In this context, skin and topological details cannot be related to the accuracy of the FEA since the error estimators cannot be applied to shape transformations subjected to a dimensional reduction. Indeed, a volume region which does not satisfy the idealization conditions (see Section 1.4.2) is part of an idealized model but is not dimensionally reduced. Therefore, as illustrated in Figure 2.1, evaluating the physical influence of small volume details using an idealized FEM has no meaning because the notion of discretization process is not meaningful over the entire model. When considering idealizations, there are currently no 'error estimators' to evaluate the influence of the dimensional reductions achieved through these transformations.

Figure 2.1: Identification of a skin detail related to the accuracy of a FE volume model (a volume region which does not influence the result of the FE simulation is to be considered as a detail in the volume FEM, whereas a volume which is not represented in the idealized model cannot be considered as a detail using the idealized FEM). With an idealized model, a skin detail cannot be characterized.

The definition of skin and topological details has to be discussed and extended in the context of dimensionally reduced models. Even if this classification cannot address idealized models, the simplification operators have to be studied to determine the geometry they are able to process and the information they are able to provide to reduce the complexity of an idealization process. Indeed, it is important to evaluate to what extent skin and topological simplification operators should be applied prior to dimensional reduction, or whether dimensional reduction should take place first with further simplifications operating on the dimensionally reduced model. Therefore, the next sections detail the principles of the geometric operators identifying and removing details and determine their suitability to interact with a dimensional reduction process. As mentioned in [Fou07], these operators aim at identifying the geometric regions of the 3D object considered as details and then removing them from the object in order to generate a simplified model. Approaches to detail removal can be subdivided into two categories depending on the geometric model describing the component: those which act on tessellated models and those which modify an initial CAD model. (Here, the term 'tessellated model' is preferred to 'mesh', as commonly used in computer graphics, to distinguish faceted models from FE meshes, which are subjected to specific constraints for FE simulations; in this document, the term mesh is reserved for FE meshes.)

2.2.2 Detail removal and shape simplification based on tessellated models

Although a tessellated object is a simplified representation of an initial CAD model, its geometric model is a collection of planar facets, which can be processed more generically than CAD models. Therefore, the operators based on tessellated models are generic enough to cover a large range of geometric configurations.
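Because a tessellated model is just a list of facets, global topological indicators are straightforward to compute on it. As a small illustration of the skin/topological distinction of Section 2.2.1, the sketch below evaluates the Euler characteristic (V - E + F) of a closed triangulation before and after a removal; a change of this value indicates that the suppressed region was a topological detail rather than a skin detail. This is a minimal, hypothetical Python helper, not an operator taken from the cited works.

```python
def euler_characteristic(triangles):
    """V - E + F of a closed, manifold triangulation given as vertex-index triples."""
    vertices = {v for tri in triangles for v in tri}
    edges = {frozenset(e) for tri in triangles
             for e in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0]))}
    return len(vertices) - len(edges) + len(triangles)

def removal_category(triangles_before, triangles_after):
    """Skin detail removal keeps the topology (same Euler characteristic);
    a change reveals a topological detail, e.g. a suppressed through hole."""
    before = euler_characteristic(triangles_before)
    after = euler_characteristic(triangles_after)
    return "skin detail" if before == after else "topological detail"

# A tetrahedron (Euler characteristic 2, genus 0):
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(euler_characteristic(tetra))  # 2
```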
In what follows, shape simplification operators applicable to the object skin are analyzed first; then, particular attention is paid to the Medial Axis Transform (MAT) operator, which extracts an object structure.

Shape simplification

Different approaches have been proposed to simplify the shape of a CAD component using an intermediate faceted model or by modifying a FE mesh of this component. These approaches can be synthesized as follows:

• Dey [DSG97] and Shephard [SBO98] directly improve the FE mesh quality by eliminating small model features based on distance criteria compared to the targeted level of mesh refinement. The objective is to avoid poorly-shaped elements and over-densified mesh areas, and the proposed treatments are generic;

• Clean-up operators [JB95, BS96, RBO02] repair the degeneracies of CAD models when they have lost information during a transfer between CAD/CAE environments or when they contain incorrect entity connections. Their main issues are the computational cost of recalculating new geometries more suitable for analysis [LPA∗03] and the ability of the algorithms to process a wide range of configurations. Furthermore, the geometric transformations are inherently small compared to the model size, which may not be the case for simulation details;

• Other methods [BWS03, HC03, Fin01, QO12] generate and transform an intermediate tessellated model derived from the CAD component. Fine [Fin01] analyses this tessellated geometry using a 'tolerance envelope' to identify and then remove skin details. Andujar [ABA02] generates new, topologically simplified models by discretizing the input solid object using an octree decomposition. The advantage of these approaches, dedicated to 3D volume FE, holds in their independence with respect to the CAD design model. These approaches can support a wide variety of shapes while avoiding inherent CAD system issues, i.e., surface connections, tolerances, etc. Nevertheless, shape modifications of the CAD model cannot easily be taken into account and they trigger new iterations of the simplification process;

• Hamri and Léon [OH04] propose an intermediate structure, the High Level Topology, in order to preserve a connection between the tessellated model and the CAD model. As a result, bi-directional mappings can be set between these models, e.g., boundary conditions, B-Rep surface types, etc. However, the shape transformations are still performed on the tessellated model.

Figure 2.2: Illustration of the MAT: (a) in 2D (medial axis), (b) in 3D (medial surface).

Detail removal using the MAT

To identify shape details in sketches, Armstrong [Arm94, ARM∗95] uses the MAT. The MAT was introduced by Blum [B∗67] and represents, in 2D, the shape defined by the locus of the centers of the maximal circles inscribed in a contour (see Figure 2.2a) or, in 3D, by the maximal spheres inscribed in a solid (see Figure 2.2b). The combination of the centerlines and the diameters of the inscribed circles along these centerlines, respectively the center-surfaces in 3D, forms the skeleton-like representation of the contour in 2D, respectively of the solid in 3D, called the MAT.
As described in [ARM∗95], the MAT operator is particularly suited to provide simplification operators with geometric proximity information in 3D and to identify geometric details on planar domains. The authors use a size criterion to identify:

• Details in 2D sketches, using the ratio between the length of boundary sketch edges and the radius of the adjacent maximal circle;

• Narrow regions, using an aspect ratio between the length of the medial edge and the maximal disk diameter along this medial edge. A region is regarded as narrow when this ratio is lower than a given threshold.

In addition, the authors refer to the principle of Saint-Venant, which relates to the location of the boundary conditions, to categorize a region as a detail. This method demonstrates the efficiency of the MAT in 2D to analyze, a priori, the morphology of sketch contours. It can compare and identify local regions smaller than their neighborhood. Figure 2.3 illustrates the analysis of a 2D sketch with the MAT [Arm94] to identify details to be removed or idealized. Here, the MAT is addressed as a detail removal operator because the manifold dimension of the 2D domain is not reduced. Nevertheless, it can also act as a dimensional reduction operator. An analysis of the pros and cons of the MAT as a dimensional reduction operator is performed in Section 2.3.1.

Figure 2.3: Detail removal using the MAT and detail size criteria [Arm94] (based on the inner and outer MA of the geometric model, a groove is reduced to a line and a hole to a point, or removed, to obtain the simplified model).

Operators based on tessellated models may be applied to a large range of configurations because the input model uses a simple polyhedral definition to represent surfaces in 3D. These operators are efficient to remove skin details before meshing. Yet, large modifications of CAD models are difficult to take into account.

2.2.3 Detail removal and shape simplification on 3D solid models

As explained in the introduction of this chapter, simplifying CAD solids before meshing is a way to enable a robust mesh generation and to obtain directly the shape required for a FEA without local adjustments of the FE mesh. Transformations can be classified into two complementary categories: transformations modifying the boundary decomposition of a B-Rep model without changing the model's shape, and transformations modifying the shape as well as its boundary decomposition.

Topology adaption

Virtual topology approaches [SBBC00, She01, IIY∗01, LPA∗03, ARM∗95] have been developed to apply topological transformations to the boundary of an initial B-Rep model in order to generate a new boundary decomposition that meets the simulation objectives of this B-Rep model and expresses the minimum required constraints for mesh generation. Virtual topology approaches belong to the first category of transformations. To anticipate the poorly-shaped mesh elements resulting from B-Rep surfaces having a small area, the operations include splitting and merging edges and clustering faces.
Anyhow, the objective is to contribute to the generation of a boundary decomposition of a B-Rep model that is intrinsic to the simulation objectives rather than being tied to the decomposition constraints of a geometric modeling kernel. Foucault et al. [FCF∗08] propose a complementary topology structure called 'Meshing Constraint Topology', with automated adaption operators enabling the transformation of the CAD boundary decomposition into mesh-relevant faces, edges and vertices for the mesh generation process (see Figure 2.4). In addition to the topological transformations (edge deletion, vertex deletion, edge collapsing and merging of vertices), the data structure remains intrinsic to the initial object, which makes it independent from any CAD kernel representation. Topology adaption is an efficient operator before mesh generation and it is available in most CAE software. However, virtual topology operators are neither generic across CAE software nor do they form a complete set of transformations.

Figure 2.4: Topology adaption of CAD models for meshing [FCF∗08]: (a) CAD model with narrow regions, (b) Meshing Constraint Topology obtained with the adaption process (edge deletion, face simplification), (c) mesh model generated with respect to the Meshing Constraint Topology.

Form feature extraction

The main category of solid model simplification is the extraction or recognition of features (holes, bosses, ribs, etc.). Different application domains' requirements lead to a wide variety of feature definitions. Here, a feature is defined as in [Sha95] and refers to a primary geometric region to be removed from a B-Rep object, hence simplifying its shape. The corresponding operators belong to the second category of transformations. The simplification techniques initially define a set of explicit geometric areas identified on an object. Then, specific criteria, for example metrics, are applied to evaluate and select the candidate features to remove. The construction tree resulting from the components' design (see Section 1.3) directly provides features that can be evaluated.

Figure 2.5: Illustration of CAD defeaturing using CATIA: (a) initial CAD model, (b) simplified CAD model with hole, fillet and chamfer suppressions.

However, this approach relies on the availability of this tree, which is not always the case (see Section 1.5.2 on interoperability). Feature recognition approaches are based on the fact that the construction tree is not transferred from a CAD to a CAE system, and they process the B-Rep model directly to recognize features. A reference survey of CAD model simplification covering feature recognition techniques has been performed by Thakur [TBG09]. For specific discussions on geometric feature recognition, see Shah et al. [AKJ01]. A particular domain, mostly studied in the 80s-90s, is the recognition of machining features. The methods [JC88, VR93, JG00, JD03] in this field are efficient in recognizing, classifying and removing negative features such as holes, slots or pockets. Han et al. [HPR00] give an overview of the state-of-the-art in manufacturing feature recognition. Machining feature recognition has been pioneered by Vandenbrande [VR93]. Woo et al. [WS02, Woo03] contributed with a volume decomposition approach using a concept of maximal volume and observed that some of these volumes may not be meaningful as machining primitives. In the field of visualization, Lee et al. [LLKK04] address progressive solid model generation. Seo [SSK∗05] proposes a multi-step operator, called wrap-around, to simplify CAD components.
To reduce the complexity of assembly models, Kim [KLH∗05] uses this operator and proposes a multi-resolution decomposition of an initial B-rep assembly model for visualization purposes. These operators simplify the parts after detecting and removing small or negative features and idealize thin volume regions using face pairing. Simplification is based on local information, i.e., edge convexity/concavity, inner loops, . . . The obtained features are structured in a feature tree depending on the level of simplification. A wide range of shapes is generated with three basic operators. However, the multi-resolution model is subjected to visualization criteria, which may not produce shape transformations reflecting the application of simulation hypotheses, in general. Lockett [LG05] proposes to recognize specific positive injection molding features. Her method relies on an already generated Medial Axis (MA) to find features from idealized models. However, it is difficult to obtain a MA in a wide range of configurations. Tautges [Tau01] uses size measures and virtual topology to robustly identify geometric regions considered as details but is limited to local surface modification. 51Chapter 2: Current status of methods and tools for FEA pre-processing One common obstacle of feature recognition approaches is their difficulty to set feature definitions that can be general enough to process a large range of configurations. This is often mentioned by authors when features are interacting with each other because the diversity of interactions can lead to a wide range of configurations that cannot be easily identified and structured. In addition, in most cases, the definition of the geometric regions considered as features is based on a particular set of surfaces, edges and vertices extracted from the boundary of the B-Rep object. The assumption is that the detection operations based on the neighboring entities of the features are sufficient to construct both the volume of the features and the simplified object. However, the validity of this assumption is difficult to determine in a general setting, e.g., additional faces may be required to obtain a valid solid, which fairly reduces the robustness of these approaches. Blend removal Removal of blends can be viewed as a particular application of features recognition. Automatic blend features removal, and more precisely, finding sequences of blend features in an initial shape, is relevant to FE pre-processing and characterizes shape construction steps. Regarding blends removal, Zhu and Menq [ZM02] and Venkataraman [VSR02] detect and classify fillet/round features in order to create a suppression order for removing these features from a CAD model. CAD software has already proposed blend removal operators and it is these operators that are considered in this thesis (see Figure 2.5 for a example of a CAD component defeaturing result). In general, blend removal can be viewed as a first phase to prepare the model for further extraction and suppression of features. 2.2.4 Conclusion This section has shown that detail removals essentially address 3D volume simulations of standalone components. The suitability of these simplification operators for assembly structures has not been investigated, up to now. Additionally, the approaches to the automation of detail removal have not been developed for idealization. 
The definition of details essentially addresses volume domains and refers to the concept of discretization error that can be evaluated with a posteriori error estimators. As a result, the relationship between detail removal and idealization has not been addressed. Approaches based on tessellated models produce a robust volume-equivalent model, but combining them with idealization processes, which often refer to B-Rep NURBS models, does not seem an easy task. Many feature-based approaches exist but they are not generic enough to process a wide range of shapes. The operators presented in this section can be employed in a context of CAD to CAE adaption, provided the areas being transformed are clearly delineated. The difficulty is to determine the relevant operator, or sequence of operators, in relation to the user's simulation objective. Up to now, only operators simplifying the surfaces of 3D objects have been presented; in the following section, idealization operators introduce additional categories of complexity with the dimensional reduction of standalone components.

2.3 Dimensional reduction operators applied to standalone components

As explained in Section 1.4, to generate idealized models, operators are required to reduce the manifold dimension of 3D solids to surfaces or lines. Different approaches have been proposed to automatically generate idealized models of components for CAE. These approaches can be divided into two categories:

• Global dimensional reduction. These approaches refer to the application of a geometric operator over the whole 3D object, e.g., using the MAT that can be globally applied to this object to generate an overall set of medial surfaces;

• Local mid-surface abstraction. Mid-surface abstraction addresses the identification of local configurations characterizing individual medial surfaces (using face pairs, deflation) on CAD models and, subsequently, handles the connection of these medial surfaces to generate an idealized model.

2.3.1 Global dimensional reduction using the MAT

Armstrong et al. [DMB∗96, ABD∗98, RAF11] rely on the MAT to generate idealized models from 2D sketches and 3D solids. To identify geometric regions in shell models which may be represented in an FE analysis with a 1D beam, Armstrong et al. [ARM∗95, DMB∗96, DAP00] analyze a skeleton-based representation generated with the MAT. Although the MAT produces a dimensionally reduced geometry of an input 2D contour, local perturbations (end regions, connections) need to be identified and transformed to obtain a model suitable for FEA. As for the detail identification of Section 2.2, an aspect ratio (ratio of the minimum length between the medial edge and its boundary edges to the inscribed maximum disk along this medial edge) and a taper criterion (maximum rate of diameter change with respect to the medial edge length) are computed to automatically identify entities that must be either reduced or suppressed. Based on user-input thresholds for aspect ratio and taper, the corresponding areas of the MAT are categorized into regions idealized as 1D beam elements, regions kept as 2D elements, or regions idealized as a 0D element (concentrated mass). Beam ends, which differ from the regions resulting from the MAT methodology, are also identified through the topology of the MAT in order to extend the idealizable regions.
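The sketch below gives a simplified reading of these aspect-ratio and taper criteria on a discretized medial edge, i.e., a polyline of medial points with their inscribed radii; the exact form of the ratios and the threshold values are illustrative assumptions and not those of [ARM∗95] or of this work.

```python
import numpy as np

def classify_medial_edge(points, radii, aspect_threshold=10.0, taper_threshold=0.2):
    """Classify one discretized medial edge: long and slender with a nearly
    constant inscribed diameter -> 1D beam; very small extent -> 0D mass;
    otherwise the region is kept as a 2D domain (illustrative thresholds)."""
    points = np.asarray(points, float)
    diameters = 2.0 * np.asarray(radii, float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)    # segment lengths
    length = seg.sum()
    aspect = length / diameters.max()                        # slenderness of the region
    taper = np.max(np.abs(np.diff(diameters)) / seg)         # diameter change per unit length
    if aspect >= aspect_threshold and taper <= taper_threshold:
        return "idealize as 1D beam element"
    if length <= diameters.max() / aspect_threshold:
        return "idealize as 0D concentrated mass"
    return "keep as 2D region"

# A straight medial edge of length 100 with a constant inscribed radius of 2:
pts = [[x, 0.0] for x in np.linspace(0.0, 100.0, 11)]
print(classify_medial_edge(pts, radii=[2.0] * 11))   # -> 1D beam candidate
```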
More recently, Robinson and Armstrong [RAM∗06, RAF11] generalize the approach to 3D solids to identify 3D regions which, potentially, could be idealized as 2D shell elements. In a first step, a 3D MAT is used to identify potential volume regions. Then, the MATs of these regions are analyzed by a second, 2D MAT to determine the inner sub-regions which fully meet an aspect ratio between their local thickness and the MAT dimensions. The final candidates for idealization should satisfy the 2D ratio within the face resulting from the 3D MAT as well as the 3D ratio. Similarly to the 2D end regions derived from a 2D MAT, the residual boundary faces from the 3D MAT are extended.

Figure 2.6: Illustration of mixed dimensional modeling using a MAT [RAF11]: (a) the MAT representation with perturbations at the ends, (b) the model partitioned into thin and perturbation features, (c) the resulting mixed dimensional model, with idealized areas, a volume mesh in the interface area and an interface offset for the coupling.

Some limitations of the MAT with respect to idealization processes are:

• The generation of the MAT. Although progress has been made in MAT generation techniques for 3D objects, the computation of an accurate 3D MAT is still a research topic [RAF11]. Even if approaches exist that enable the computation of a MAT as a G0 geometric object [CBL11, RG03], or of a 3D MAT from free-form surfaces [RG10, BCL12], B-spline surfaces [MCD11] and planar polyhedra [SRX07], the most efficient algorithms are still based on discrete representations. The most efficient way to obtain a MAT derives from Voronoi diagrams or from distance fields [FLM03]. Efficient implementations have been proposed by Amenta [ACK01] and, more recently, by Miklos [MGP10]. However, the result is also a discrete representation, which has to be somehow approximated to produce a more global geometric object;

• The need for processing local perturbations (see Figure 2.6b). For mechanical analysis purposes, the topological features in the end regions have to be modified to extend the medial surfaces. These undesirable regions complicate and restrict the analysis domain of the MAT;

• The connection areas. The MAT generates complex configurations in connection areas. Armstrong and Robinson [RAF11] produce mixed dimensional FE models with idealized surfaces or lines and volume domains in the connections between these regions (see Figure 2.6). These mixed dimensional models, which involve specific simulation techniques using mixed dimensional coupling, do not contain idealized models in the connection areas. In addition, to ensure an accurate load transfer from one surface to another, they increase the dimensions of the volume connections based on Saint-Venant's principle (see Figure 2.6c). As a result, the idealized areas are reduced. However, the current industrial practice, as explained in Section 1.4.3, aims at generating fully idealized models incorporating idealized connections. This practice reduces the computational time, by reducing the number of degrees of freedom, and ensures a minimum model accuracy based on the user's know-how. In this context, the major limit of MAT methods is the processing of these connection areas, which do not contain proper information to link the medial surfaces.
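As mentioned above, discrete approximations of the MAT are commonly derived from Voronoi diagrams. The sketch below illustrates the idea on a 2D contour: the Voronoi vertices of a dense boundary sampling that fall inside the polygon approximate the medial axis, and the distance from each of them to the boundary gives the local inscribed radius used by the previous criteria. It is a minimal Python/SciPy illustration (the contour and sampling density are arbitrary), not the algorithms of [ACK01] or [MGP10].

```python
import numpy as np
from matplotlib.path import Path
from scipy.spatial import Voronoi

def discrete_medial_axis(polygon, samples_per_edge=50):
    """Approximate 2D MAT of a simple polygon: interior Voronoi vertices of a
    dense boundary sampling, with the local inscribed radius at each of them."""
    polygon = np.asarray(polygon, float)
    samples = []
    for a, b in zip(polygon, np.roll(polygon, -1, axis=0)):
        t = np.linspace(0.0, 1.0, samples_per_edge, endpoint=False)[:, None]
        samples.append(a + t * (b - a))
    samples = np.vstack(samples)
    vor = Voronoi(samples)
    inside = Path(polygon).contains_points(vor.vertices)
    medial_points = vor.vertices[inside]
    radii = np.min(np.linalg.norm(medial_points[:, None, :] - samples[None, :, :],
                                  axis=2), axis=1)
    return medial_points, radii

# L-shaped contour: the thin branches show up as small inscribed radii.
l_shape = [(0, 0), (40, 0), (40, 5), (5, 5), (5, 30), (0, 30)]
points, radii = discrete_medial_axis(l_shape)
print(len(points), radii.min(), radii.max())
```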
2.3.2 Local mid-surface abstraction

To generate fully idealized models, approaches alternative to the MAT identify sets of boundary entities of the CAD models as potential regions to idealize. Then, mid-surfaces are extracted from these areas and connected together.

Face pairing techniques

Rezayat [Rez96] initiated the research on mid-surface abstraction from solid models. His objective was to combine the geometric and topological information of the B-Rep model to robustly produce idealized models while transforming geometric areas. This method starts with the identification of surfaces which can be paired based on a distance criterion between them. During this identification phase, an adjacency graph is generated, representing the neighbouring links between face-pairs. This graph uses the topological relationships of the initial B-Rep model. Then, for each face-pair, a mid-surface is generated as the interpolation of this geometric configuration, as illustrated in Figure 2.7a. During the final step, the mid-surfaces are connected together using the adjacency graph of the B-Rep model (see Figure 2.7b).

Figure 2.7: Illustration of mid-surface abstraction [Rez96]: (a) creation of mid-surfaces from face-pairs, (b) connection of mid-surfaces to generate a fully idealized model.

Although this method generates fully idealized models close to manually created ones, the underlying issue is the identification of areas that could potentially be idealized. Indeed, the identification of face-pairs does not ensure that the thickness, i.e., the distance between the paired faces, is at least ten times smaller than the dimensions along the other two directions (see the idealization conditions described in Section 1.4.2). The areas corresponding to these face-pairs are designated here as tagged areas. In addition, the connection between mid-surfaces requires the definition of new geometric entities which result from an intersection operator. This intersection operator relies solely on the boundary of the areas to be connected, i.e., the face-pairs. There is no information about the boundary of the regions to be idealized nor about the interface areas between their mid-surfaces, e.g., the limits of valid connection areas. As illustrated in Figure 2.8d, this information does not appear directly on the initial model. These areas are the complement of the tagged areas with respect to the boundary of the initial model; they are named non-tagged areas. As a result, mid-surface abstraction is restricted to specific geometric configurations where the face-pairs overlap each other. As depicted in Figure 2.8, the face-pairs F3-F6 and F2-F5 are rejected due to the overlapping criterion.

Figure 2.8: An example of particular geometric configurations not addressed by face-pair methods: (a) valid configuration without mid-surface connection, (b) and (c) rejection of a face-pair due to the overlapping criterion, (d) information on non-tagged areas and on interfaces between idealizable regions that is not evaluated by face-pair methods.
So, the idealized configurations of Figures 2.8b and 2.8c are rejected whereas they could be suitable for FEA. In order to improve the robustness of the processing of idealized areas, Lee et al. [LNP07a] use a propagation algorithm through the B-Rep model topology to identify face-pairs. However, this approach is limited to configurations where the face-pairs can be connected in accordance with predefined schemes. Ramanathan and Gurumoorthy [RG04] identify face-pairs through the analysis of the mid-curve relationships of all the faces of the solid model. For each face, its mid-curve is generated using a 2D MAT. This generation is followed by the analysis of the mid-curve graph in order to identify face-pairs. The resulting mid-faces, derived from face-pairs, are then connected to each other in accordance with the mid-curve adjacency graph. This method increases the robustness of face-pairing by indirectly using the morphology of the paired faces: analyzing the mid-curve relationships of adjacent faces enables a morphological comparison of these faces. Since the mid-curves are obtained through a MAT, the face-pair identification depends on the accuracy of this mid-curve generation. This method produces face-pairs close to each other and sufficiently large along the two other directions to meet the idealization hypothesis. However, this approach is limited to planar areas.

Negative offsetting operations

Sheen et al. [SSR∗07, SSM∗10] propose a different approach to generate mid-surfaces: solid deflation. The authors assume that a solid model can be seen as the result of the inflation of a shell model. Their principle is to deflate the solid model, shrinking it down to a degenerated solid with a minimum distance between faces close to zero. This generates a very thin solid model looking like an idealized model. In a subsequent step, faces are extracted and sewn together to create a non-manifold connected surface model. The issue of this method lies in the generation of the deflated model. Indeed, a face-pair detection is used to generate the mid-surfaces input to the shrinking operation. This face-pair detection does not cover configurations with a thickness variation, which are common for aeronautical parts and other mechanical components. This approach is similar to a straightforward MAT generation [AA96, AAAG96], which applies a negative offset to boundary lines in 2D and to boundary surfaces in 3D in order to obtain a skeleton representation. Yet, since this representation is an approximation of the MAT, it does not meet the equal-distance property of a mid-surface everywhere and does not provide an answer for all polyhedral solids [BEGV08].

2.3.3 Conclusion

As explained in Sections 1.4 and 1.5, the shape of a component submitted to a mesh generation depends on the user's simulation objectives. This analysis of dimensional reduction operators has highlighted the lack of idealization-specific information to delimit their conditions of application. Not all geometric regions satisfy the idealization conditions and hence, these idealization operators cannot produce correct results in such areas. A purely geometric approach cannot directly produce a fully idealized model adapted to FEA requirements. An analysis process is necessary to evaluate the validity of the idealization hypotheses and to determine the boundary and interfaces of the regions to be idealized.
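A very crude form of such an analysis is sketched below: given the three characteristic dimensions of a box-like sub-domain, it checks the usual slenderness condition recalled in Section 1.4.2 (one dimension roughly an order of magnitude smaller than the two others for a shell, two for a beam). The factor of ten and the box-like description are illustrative assumptions; the analysis required in practice is far more general.

```python
import numpy as np

def morphology_class(dims, slenderness=10.0):
    """Rough check of the idealization hypothesis on a box-like sub-domain
    described by its three characteristic dimensions (illustrative factor of 10)."""
    d = np.sort(np.asarray(dims, float))          # d[0] <= d[1] <= d[2]
    if d[1] >= slenderness * d[0]:                # thickness << width and length
        return "candidate for shell idealization"
    if d[2] >= slenderness * d[1]:                # cross-section << length
        return "candidate for beam idealization"
    return "keep as volume sub-domain"

print(morphology_class([2.0, 60.0, 120.0]))   # thin plate  -> shell
print(morphology_class([8.0, 10.0, 200.0]))   # slender rib -> beam
print(morphology_class([40.0, 50.0, 60.0]))   # bulky block -> volume
```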
The MAT is a good basis to produce a 3D skeleton structure and it provides geometric proximity information between non-adjacent faces. However, it is difficult to obtain in 3D and it requires post-processing of local perturbations and connection areas. Face-pair techniques are efficient in specific configurations, especially for planar objects. Yet, their main issues remain their validity with respect to the idealization hypotheses and the difficulty of processing the connections between mid-faces. As illustrated in Figure 2.9, the connection areas derive from specific modeling hypotheses. The user may decide on the connection model that is most appropriate for his, resp. her, simulation objectives.

Figure 2.9: Illustration of different connection models for idealized components (e.g., kinematic connection over the shortest distance, offset connection, perpendicular connection).

To improve the dimensional reduction of components, the objectives are expressed as:

1. Identify the volume sub-domains that are candidates for idealization, i.e., the regions that meet the idealization hypotheses;

2. Obtain additional information on the interfaces between sub-domains to generate robust connections there.

2.4 About the morphological analysis of components

As a conclusion of the previous Section 2.3, geometric operators require a pre-analysis of a component shape to determine their validity conditions. Shape decomposition is a frequent approach to analyze and then structure objects. This section aims at studying the operators dedicated to a volume decomposition of 3D objects with an application to FEA.

2.4.1 Surface segmentation operators

There are many methods of 3D mesh segmentation developed in the field of computer graphics (the word mesh is used here in the computer graphics sense, i.e., a faceted model without the constraints attached to FE meshes). They are mainly dedicated to the extraction of geometric features from these 3D meshes. A comparative study of segmentation approaches of 3D meshes, including CAD components, is proposed by Attene et al. [AKM∗06]. Reference work by Hilaga [HSKK01] applies a Reeb-graph approach to find similarities between 3D shapes. Watershed [KT03, Kos03], spectral analysis [LZ04], face clustering [AFS06], region growing [ZPK∗02, LDB05] and shape diameter functions [SSCO08] are other techniques to subdivide a 3D mesh for shape recognition, part instantiation, or data compression. Figure 2.10 illustrates two mesh segmentation techniques [AFS06, SSCO08] on a mechanical part.

Figure 2.10: Mesh segmentation: (a) face clustering of Attene [AFS06], (b) shape diameter function of Shapira [SSCO08].

These algorithms are not subjected to parameterization issues like B-Rep CAD models are. They partition a mesh model into surface regions but do not give a segmentation into volume sub-domains, and the region boundaries are sensitive to the discretization quality. A post-processing of the surface segmentation has to be applied to obtain volume partitions. The main objective of the methods cited above is to divide the object in accordance with the 'minima rule' principle introduced by Hoffman and Richards [HR84]. This rule consists in dividing the object so as to conform to the human perception of segmentation. The authors state that human vision defines the edges of an object along areas of high negative curvature.
Hence, the segmentation techniques divide a surface into parts along contours of negative curvature discontinuity. In these areas, the quality of an algorithm is based on its ability to meet this “minima rule”. Searching for regions of high concavity, algorithms are sensitive to local curvature changes. Depending on the threshold value of extreme curvature, the object may be either over-segmented or under-segmented. Even if this threshold is easier to monitor for CAD components because they contain many sharp edges, the curvature criterion is not related to the definition of idealized areas. Consequently, the methods using this criterion do not produce a segmentation into regions satisfying the idealization hypotheses and regions that can be regarded as volumes. This section has covered surface segmentation operators that are not appropriate in the context of a segmentation for idealization. The next section studies volume segmentation operators producing directly a decomposition of a solid model into volume partitions. 59Chapter 2: Current status of methods and tools for FEA pre-processing Thin-sheet region in green (thin-sheet meshable) (a) (b) Semi-structured hybrid mesh of thick region (c) Long/slender region in blue (swept mesh) Complex region in yellow (unstructured mesh) Figure 2.11: Automatic decomposition of a solid to identify thick/thin regions and long and slender ones, from Makem [MAR12]: (a) initial solid model, (b) segmented model, (c) semi-structured hybrid mesh of thick regions. 2.4.2 Solid segmentation operators for FEA Recently, researches concentrated on the identification of specific regions to automatically subdivide a complex shape before meshing. They address shape transformations of complex parts. The automatic segmentation of a mechanical component into volume regions creates a positive feature decomposition, i.e., the component shape can be generated by the successive merge of the volume regions. This principle particularly applies to dimensional reduction processes, i.e, idealizations. Volume region identification for meshing In FEA, solid segmentation methods have been developed to simplify the meshing process. The methods of Lu et al. [LGT01] and Liu and Gadh [LG97] use edge loops to find convex and sweepable sub-volumes for hex-meshing. More recently, the method proposed by Makem [MAR12] automatically identifies long, slender regions (see Figure 2.11). Makem [MAR12] shows that the decomposition criteria have to differ from the machining ones. Heuristics are set up to define the cutting strategy and to shape the sub-domains based on loops characterizing the interaction between sub domains. Setting up these heuristics is difficult due to the large diversity of interactions between sub-domains. Criteria for loop generation aim at generating a unique decomposition and are not able to evaluate alternatives that could improve the meshing scheme. To reduce the complexity of detecting candidate areas for dimensional reduction, Robinson and al. [RAM∗06] use preliminary CAD information to identify 2D sketches employed during the generation of revolving or sweepable volume primitives in construction trees. Figure 2.12 illustrates this process: the sketches are extracted from the construction tree, analyzed with a MAT to determine thin and thick areas forming a feature. Then, this feature is reused as an idealized profile to generate a mixed dimensional model. 
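As a rough picture of what such a 2D analysis provides, the sketch below estimates a local thickness along a closed polygonal profile, taken as the distance from each edge midpoint to the closest non-adjacent boundary edge, a crude substitute for the MAT radius, and tags the thin portions; the profile, the threshold and the helper names are assumptions for the example, not the operator of [RAM∗06].

import numpy as np

def seg_dist(p, a, b):
    """Distance from point p to the segment [a, b]."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def local_thickness(poly):
    """For each edge of a closed polygon, estimate the local thickness as the
    distance from its midpoint to the closest non-adjacent boundary edge."""
    n = len(poly)
    result = []
    for i in range(n):
        mid = 0.5 * (poly[i] + poly[(i + 1) % n])
        d = min(seg_dist(mid, poly[j], poly[(j + 1) % n])
                for j in range(n)
                if j not in {(i - 1) % n, i, (i + 1) % n})   # skip edge i and its neighbors
        result.append(d)
    return result

# L-shaped profile, 40 x 40 overall, with a 5-unit wall thickness.
profile = [np.array(p, float) for p in
           [(0, 0), (40, 0), (40, 5), (5, 5), (5, 40), (0, 40)]]
thick = local_thickness(profile)
thin_edges = [i for i, t in enumerate(thick) if t < 6.0]   # thin where thickness < 6 units
print([round(float(t), 1) for t in thick], thin_edges)
# -> [5.0, 35.1, 5.0, 5.0, 35.1, 5.0] [0, 2, 3, 5]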
However, in industry, even if the construction tree information exists in a native CAD model, the creation of features depends on the designer's modeling choices, which do not guarantee that the appropriate sketches needed for efficient results are available.

Figure 2.12: Idealization using extruded and revolved features in a construction tree, from [RAM∗06]: the 2D sketch of a revolution feature and the slender regions to revolve as surfaces turn the 3D CAD component (volume) into a mixed dimensional model (volumes and surfaces).

Divide-and-conquer approaches

An alternative to the complexity of the idealization process can be found in divide-and-conquer approaches. First, the solid is broken down into volume sub-domains, which are smaller and simpler to process. Then, idealizing these sub-domains and combining the results produces the final idealized model. Chong [CSKL04] proposes operators to decompose solid models based on shape concavity properties prior to mid-surface extractions that reduce the model's manifold dimension. Mid-surfaces are identified from closed loops of split edges and the sub-domains are processed using these mid-surfaces. The solid model decomposition algorithm detects thin configurations only if edge pairs exist in the initial model and match an absolute thickness tolerance value. Some volume regions remain unidealized because of the absence of edge pairs on the initial object. In the feature recognition area, Woo et al. [WS02, Woo03] set up a volume decomposition approach using a concept of maximal volumes. Their decomposition is based on local criteria, i.e., concave edges, to produce the cell decomposition. Consequently, Woo et al. observe that some maximal volumes may not be meaningful as machining primitives and further processing is required in this case to obtain machinable sub-domains. Recently, Woo [Woo14] described a divide-and-conquer approach for mid-surface abstraction (see Figure 2.13). A solid model is initially decomposed into simple volumes using the method of maximal volume decomposition [WS02, Woo03] as well as the feature recognition of Sakurai [Sak95, SD96]. The mid-surfaces are extracted from these simple volumes using face-pairing. Then, face-pairs are connected using a union Boolean operation, thus creating a non-manifold surface model.

Figure 2.13: Divide-and-conquer approach to idealization processes using a maximal volume decomposition (by Woo [Woo14]).

Finally, a geometric operator identifies and removes local perturbations of mid-surfaces which do not correspond to faces of the original model. The major objective of this approach is to reduce the complexity of the initial mid-surface abstraction: it increases the robustness of the face-pairing algorithm by applying it to simpler volumes. However, the connections between mid-surfaces are based on the topology of the initial solid without any analysis of its shape related to the user's simulation objectives. Some solids can be topologically identical but differ in their morphology. Consequently, a morphological analysis of their shape is mandatory to identify the sub-domains subjected to dimensional reduction and to understand the interactions between these sub-domains through their interfaces. Here, the idealization processes are still restricted to a purely geometric operator that does not integrate the user's simulation objectives.
Additionally, this method has difficulties handling general configurations and connecting idealized sub-domains through mid-surface extension operations.

B-Rep decomposition through feature trees

As observed in Section 2.2, feature recognition techniques are a way to extract volume sub-domains from B-Rep solids. They support segmentation processes for detail removal but do not provide construction process structures of these B-Rep solids. To this end, different approaches have been proposed to decompose an object shape into a feature tree. Shapiro [SV93] and Buchele [BC04] address the B-Rep to CSG conversion as a means to associate a construction tree with a B-Rep model. Buchele [BC04] applies this principle to reverse engineering configurations. CSG tree representations can be categorized into either half-space or bounded solid decompositions. In [SV93, BC04], the B-Rep to half-space CSG representation is studied and it has been demonstrated that half-spaces solely derived from a volume boundary cannot always be integrated into a CSG tree forming a valid solid. In Buchele [BC04], Shapiro and Vossler's approach [SV93] is complemented to generate a CSG representation from scanned objects and to obtain both its geometry and topology. There, additional algorithms must be added to produce complementary half-spaces. Moreover, the meaning of half-space aggregations is not addressed, i.e., there is no connection between the volume faces and the primitives that can be used to create them. Li et al. [LLM06, LLM10] introduce a regularity feature tree, which differs from CSG and construction trees. This tree structure is used to highlight symmetry properties in the object but it provides neither a CSG tree nor primitive entities that could serve as a basis for idealization applications. Belaziz et al. [BBB00] propose a morphological analysis of solid models based on form features and B-Rep transformations that are able to simplify the shape of an object and enable simplifications and idealizations. Somehow, this method is close to a B-Rep to CSG conversion where the CSG operators are defined as a set of shape modifiers instead of Boolean operators. Indeed, the shape modifiers are elementary B-Rep operators that do not convey peculiar shape information, and each stage of the morphological analysis produces a single tree structure that may not be adequate for all simplifications and idealizations. All the approaches generating a CSG-type tree structure from a B-Rep bring a higher level of shape analysis with connections to a higher level monitoring of shape transformations, symmetry properties, etc. However, the corresponding framework of B-Rep to CSG conversion must be carefully applied to avoid unresolvable boundary configurations. Furthermore, producing a single tree structure appears too restrictive to cover a wide range of shape transformation requirements.

2.4.3 Conclusion

Solid segmentation operators directly provide a volume decomposition of the initial object. A segmentation brings a higher level of geometric information to the initial B-Rep solid. Previous methods have shown the possibility to generate a segmentation, or even construction processes, from an initial CAD model.
Therefore, the current operators: • do not always produce a complete segmentation, e.g., not all features are identified, and this segmentation is not necessarily suited for idealization due to algorithms focusing on other application areas; • may be reduced to simple configurations due to a fairly restrictive definition of the geometric areas being removed from the solid. Furthermore, if these operators generate also a construction process, it is restricted to a single process for a component; 63Chapter 2: Current status of methods and tools for FEA pre-processing • could produce a complete segmentation, e.g., divide and conquer approaches, but they do not ensure that the volume partitions are also simple to process and usable for mid-surfacing. A more general approach to volume decomposition should be considered to depart from a too restrictive feature definition while producing volume partitions relevant for idealization purposes. Therefore, the difficulty is to find adequate criteria to enable a segmentation for dimensional reduction and connections operators. The previous sections have presented the main methods and tools for FEA preprocessing and, more specifically, for idealization processes in a context of standalone components. The next section describes the evolution of these operators toward an assembly context. 2.5 Evolution toward assembly pre-processing Currently, the industrial need is to address the simulation of assembly structures. However, few contributions address the automation of assembly pre-processing. Automated simplifications of assembly for collaborative environment like the multi-resolution approach of Kim [KWMN04] or surface simplification of Andujar [ABA02] transform assembly components independently from each other. This is insufficient to pre-process FE assembly models because mechanical joints and interfaces tightening the different components must take part to these pre-treatment (see Chapter 1.4). Group of components transformations In the assembly simplification method of Russ et al. [RDCG12], the authors propose to set component dependencies to remove groups of components having no influence on a simulation, or to replace them by defeatured, equivalent ones. However, the parent/child relationships created from constraint placement of components does not guarantee to obtain the entire neighborhood of a component because these constraints are not necessarily related to the components’ geometric interfaces. As explained in Section 1.3.1, these positioning constraints are not necessarily equivalent to the geometric areas of connections between components. Additionally, DMUs don’t contain components’ location constraints when assemblies are complex, which is the case in the automotive and aeronautic industries to ease design modifications during a PDP. Moreover, part naming identification used in this approach is not sufficient because it locates individual components contained in the assembly, only. Relations with their adjacent components and their associate geometric model are not available, i.e., replacing a bolted junction with an idealized fastener implies the simplification of its nut and its screw as well as the hole needed to connect them in the tightened components. 64Chapter 2: Current status of methods and tools for FEA pre-processing Small areas difficult to mesh (c) (d) (e) (a) (b) Figure 2.14: Assembly interface detection of Jourdes et al. 
[JBH∗14]: (a) CAD bolted junction with three major plates, (b) some interfaces, (c) cut through of a bolt of this junction, (d) corresponding interfaces, (e) detail of a small geometric area between an interface and the boundary of the component. Interface detection To provide mesh compatibility connectivity, i.e., an interface between two components ensures the same mesh distribution in the interface area of each component, Lou [LPMV10] and Chouadria [CV06] propose to identify and re-mesh contact interfaces. Quadros [QVB∗10] establishes sizing functions to control assembly meshes. Yet, these methods are used directly on already meshed models and address specific PDP configurations where CAD models are not readily available. Clark [CHE08] detects interfaces in CAD assemblies to create non-manifold models before mesh generation but he does not consider the interactions between interfaces and component shape transformation processes. In [BLHF12], it is underlined that if a component is simplified or idealized, its shape transformation has an influence on the transformation of its neighbors, e.g., a component idealized as a concentrated mass impacts its interfaces with its neighboring components. Assembly operators available in commercial software are reduced to, when robust enough, the automated detection of geometric interfaces between components. However, their algorithms use a global proximity tolerance to find face-pairs of components and they don’t produce explicitly the geometric contact areas. From a STEP representation of an assembly model, Jourdes and al. [JBH∗14] describe a GPU approach 65Chapter 2: Current status of methods and tools for FEA pre-processing 1 2 1 2 1 2 1 Idealized with contact V1 Assembly interfaces Component 1 interfaces CAD assembly of two components Idealizable areas of components FEM models Full idealized Mix dimensional with contact Specific Connector 1 2 1 2 1 Idealized with contact V2 1 2 Idealized with contact V3 1 2 Idealized with contact V4 Kinematic connection Figure 2.15: Various configurations of the idealization of a small assembly containing two components. to automatically detect the exact geometric regions of interfaces between components (see Figure 2.14). The results of this technique are used in this thesis as input assembly interfaces data. Yet, obtaining the geometric regions of interfaces is not sufficient, they have to be analyzed to evaluate their suitability with respect to meshing constraints. Figure 2.14e shows the creation of small surfaces, which are difficult to mesh, resulting from the location of an interface close to the boundary of component surfaces. To reach the requirements of assembly idealizations, the current geometric operators have to take into account the role of assembly interfaces between components, with respect to the shape transformations of groups of components. Figure 2.15 shows the idealization of an assembly containing two components. The idealization of ‘component 1’ interacts with the idealization of ‘component 2’, and both interact with the user’s choice regarding the assembly interface transformation. Depending on the simulation objectives, the user may obtain either an idealized model of the two components with contact definition, or a globally idealized model from the fusion of the two component. The user may even apply a specific connector model which does not require any geometry other than its two extreme points. 
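The following sketch gives a minimal flavour of such an interface detection, reduced to axis-aligned planar rectangles compared with a single global proximity tolerance, i.e., exactly the kind of tolerance-based face-pair search criticized above; the RectFace record and the tolerance value are assumptions for this illustration and do not represent the GPU method of [JBH∗14].

from dataclasses import dataclass

@dataclass
class RectFace:
    """Planar rectangular face with normal along +/-z, spanning [xmin,xmax] x [ymin,ymax] at height z."""
    comp: str
    z: float
    xmin: float; xmax: float
    ymin: float; ymax: float

def contact_interfaces(faces, tol=0.01):
    """Return (component_a, component_b, overlap_area) for every face pair of
    distinct components whose planes lie within `tol` and which overlap in x-y."""
    found = []
    for i, a in enumerate(faces):
        for b in faces[i + 1:]:
            if a.comp == b.comp or abs(a.z - b.z) > tol:
                continue
            dx = min(a.xmax, b.xmax) - max(a.xmin, b.xmin)
            dy = min(a.ymax, b.ymax) - max(a.ymin, b.ymin)
            if dx > 0 and dy > 0:
                found.append((a.comp, b.comp, dx * dy))
    return found

# Two plates in contact and the head of a screw resting on the upper plate.
faces = [
    RectFace("plate_1", 5.0, 0, 100, 0, 60),    # top face of plate 1
    RectFace("plate_2", 5.0, 0, 100, 0, 60),    # bottom face of plate 2
    RectFace("plate_2", 10.0, 0, 100, 0, 60),   # top face of plate 2
    RectFace("screw_3", 10.0, 45, 55, 25, 35),  # bearing face of the screw head
]
for c1, c2, area in contact_interfaces(faces):
    print(f"interface {c1} / {c2}: contact area {area}")

Even on this toy example, the returned contact regions would still have to be imprinted on the component faces and checked against meshing constraints, e.g., to avoid the small surfaces illustrated in Figure 2.14e.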
Assemblies bring a new complexity level into idealization processes since the shape of the components and their interactions with their neighbors have to be analyzed before applying geometric operators. 66Chapter 2: Current status of methods and tools for FEA pre-processing 2.6 Conclusion and requirements The research review in this chapter combined with the context described in Chapter 1 shows that CAD/CAE integration is mainly oriented toward the data integration of standalone components, preparations of assembly models under global simulation objectives require an in-depth analysis and corresponding contributions. Regarding standalone component processing, although automated operators exist, they are currently effective on simple configurations only. To process complex models, the engineer interactively modifies the component using shape transformation operators according to his/her a priori expertise and evaluation of the simulation model being created. These specific operators, among which some of them are already available in CAE software, have to be selected and monitored by the engineer. Their applications still require lots of manual interactions to identify the regions to transform and correct unintended geometric perturbations. Because of the diversity of simulation contexts, the preconceived idea of applying a generic geometric operator to every component to perform any simplification or idealization, is not valid and must evolve toward simulation context-dependent operators. The selection of mechanical hypotheses, because of their impact on the DMU model, should also be part of the automation of a mechanical analysis pre-processing. This issue is particularly crucial when performing dimensional reductions on a component. Generating an idealized equivalent model cannot be reduced to the simple application of a dimensional reduction operator. The effects of idealization hypotheses are supposed to have established the connection between the component shape and its simulation objectives. This connection can be made through the identification of geometric areas candidates to idealizations and associated with the connections between idealized subdomains. An analysis of the component shape, subdividing it into idealizable areas and interfaces between them (see Figure 2.15), is a means to enrich the input CAD solid and prepare the geometry input to dimensional reduction operators. The current volume segmentation operators are restricted to configurations focusing on particular application domains. They often produce a single segmentation into sub-volumes and instantiate the same problem on rather simple configurations. Achieving good quality connections between idealized sub-domains in a robust manner is still a bottleneck of many approaches processing CAD solids for FEA, which requires new developments. Regarding assembly model processing, there is currently a real lack in scientific research and software capabilities, both. To reach the objective of large assembly structural simulation, pre-processing the DMU, which conveys the product definition, has also to be automated. Assembly simulation models, not only require the availability of the geometric model of each component, but they must also take into account the kinematics and physics of the entire assembly to reach the simulation objectives. 
This suggests that the entire assembly must be considered when specifying shape transformations rather than reducing the preparation process to a sequence of individually prepared components that are correctly located in 3D space. As mentioned in Section 1.5.4, to adapt an assembly to FEA requirements, it is mandatory to derive geometric transformations of groups of components from simulation objectives and component functions. As follows from Chapter 1, the knowledge of interface geometries and additional functional information on components and their interfaces is a good basis to specify these geometry transformation operators on assemblies. In addition, to perform assembly idealizations, structuring the geometric models of components into areas to be idealized and component interfaces is consistent with the assembly structure, i.e., its components and their interfaces. Such a component geometric structure helps preserve the DMU consistency.

Chapter 3: Proposed approach to DMU processing for structural assembly simulations

This chapter sets the objectives of the proposed approach to DMU pre-processing for the simulation of FE structural assembly models. Obtaining an efficient transformation of a DMU into a FEM requires geometric operators that process input geometric models which have been structured and enriched with additional functional information. With respect to the objectives set up, the proposed approach uses this enriched DMU, both at 3D solid and assembly levels, to analyze its geometry and connect it to the simulation hypotheses with the required shape transformations.

3.1 Introduction

Chapter 1 has pointed out the industrial context and identified the general problem definition addressed in this thesis. The current practices of model generation for structural mechanical analyses have been described, especially when the resolution is performed using the FE method. Chapter 2 has analyzed the academic approaches that investigate the automation of FE model generation. The need for shape analysis as a basis of robust geometric transformation operators has been underlined and the lack of research in assembly pre-processing has been pointed out. Figure 3.1 summarizes the manual process of a DMU transformation for the FEA of assembly structures. The analysis of the ongoing practices has been stated in Section 1.5.

Figure 3.1: Current process to prepare assembly structures (DMU extraction, pre-processing, simulation). Each component of the assembly is transformed individually: the extracted DMU is a pure CAD geometric model with no contacts and junctions considered as individual components; pre-processing relies on manual idealization of each component and manual definition of contacts between components, with non-standardized processes and tools for all categories of assembly components and a weak CAD/CAE link.

A main issue, observed in the aeronautical industry, is the manual and isolated application of geometric transformations on each component of the assembly. An assembly component is considered as a standalone part and the user iterates his, resp. her, global simulation objective on each component as well as on each assembly interface. As a result, the
use of FEA in aeronautical industry is bounded by the time required to set up its associated FEM. Now, the major challenge is the automation of some FEA preparation tasks so that more simulations can be performed on assembly models. 3.2 Main objectives to tackle As stated in Chapter 2: ‘generating an idealized equivalent model cannot be reduced to the simple application of a dimensional reduction operator’. Indeed: 1. Generating simulation models from DMUs requires the selection of the CAD components having an influence on the mechanical behavior the engineer wants to analyze. Setting up the simulation requires, as input, not only the 3D geometric model of component shapes but also their functional information that help selecting the appropriate components (see Section 1.5.4); 2. A DMU assembly is defined by a set of 3D components and by the interactions between them. To automate the preparation of FE assembly models, it is mandatory to take into account the interfaces between components (see Section 1.4.3). An assembly interface, not only contains the geometric information delimiting the contact/interference areas on each component, but contains also the ‘functional’ information characterizing the behavior of the interface, e.g., clamping, friction, . . . ; 70Chapter 3: Proposed approach to DMU processing for structural assembly simulations 3. To generate idealized components, i.e., the dimensional reduction process of 3D volumes into equivalent medial surfaces/lines, two main aspects have to be considered: • A 3D shape is generally complex and requires different idealizations over local areas depending on the morphology of each of these areas (see Section 2.3); • Interfaces between components have an influence on the choice made for these local idealizations. Therefore, the idealization operator has to take into account this information as a constraint, which is not the case of current idealization operators (see Section 2.5). To address the problem of the FEM preparation of large assembly structures, this chapter introduces an analysis-oriented approach to provide enriched DMUs before geometric transformations. The following sections explain the principles and contributions of this approach, which are subsequently detailed in the next chapters: • Section 3.3: This section shows that existing approaches are able to provide a certain level of functional information. Two main approaches have been exploited. The method of Jourdes et al. [JBH∗14] generates assembly interfaces from DMU models and the method of Shahwan et al. [SLF∗12, SLF∗13] provides functional designation of components. These methods can be used in our current preprocessing approach to provide enriched DMUs before geometric transformations take place. Nevertheless, some improvements are proposed to take into account the geometric structure of components required for an idealization process; • Section 3.4: idealization operators necessitate an estimation of the impact of the idealization hypotheses over a component shape, i.e., the identification of areas candidate to idealization. This section sets our objectives to achieve a robust assembly idealization process. They consist in structuring a component’s shape and taking advantage of this structure to perform a morphological analysis to identify areas conforming to the user’s simulation hypotheses. 
Subsequently, these hypotheses are used to trigger the appropriate idealization operator over each area; • Section 3.5: this section outlines the proposed processes exploiting an enriched DMU to robustly automate the major time-consuming tasks of a DMU preparation. 3.3 Exploiting an enriched DMU Efforts have been made to improve: 71Chapter 3: Proposed approach to DMU processing for structural assembly simulations • the coordination between engineers in charge of structure behavior simulations and designers; • the use of simulation results during a design process. However, as described in Section 1.3, the DMUs automatically extracted from the PLM are not suited for FE simulations. Because of the product structure, DMUs do not contain structural components, only. DMU components have to be filtered during the early phase of FEA pre-processing to avoid unnecessary geometric treatments on components which are considered as details at the assembly level. As explained in Section 1.5.1, this process is based on a qualitative judgment exploiting engineers’ know-how. A way to increase robustness of this extraction process, is to have available more useful information for the engineers. At least, this information must contain the functional properties of components. In addition, considering that the extracted DMU is coherent and contains the exact set of components subjected to shape transformations, the amount of information which can be extracted from the PLM system is not sufficient to set up a robust and automated pre-processing approach to simulations. Even though the extraction of additional meta-data can be improved (see Section 1.5.4), FEM pre-processing requires the exact interface areas between components as well as the functional designation of each component, which are not available in PLM systems, at present. A main objective of this thesis is to prove that a quantitative reasoning can be made from an enriched and structured DMU to help engineers determining the mechanical influence of components under specific simulation objectives. Benefiting from existing methods that identify functional interfaces and functional designation of components in DMUs The method of Jourdes et al. [JBH∗14] presented in Section 2.5, detects geometric interfaces between components. Starting from an input B-Rep model, i.e., a STEP [ISO94, ISO03] representation of an assembly, the algorithm identifies two categories of interfaces as defined in Section 1.3.2: surface and linear contacts, and interferences. The information regarding assembly interfaces are used by Shahwan et al. [SLF∗13] to provide, through a procedural way, functional information linked to DMUs. Even if Product Data Management System (PDMS) technology provides the component with names referring to their designation, this information is usually not suf- ficient to clearly identify the functions of components in an assembly. For example, in AIRBUS’s PLM, a component starting with ‘ASNA 2536’ refers to a screw of type ‘Hi-Lite’ with a 16mm diameter. This component designation can be associated under specific conditions to an ‘elementary function’, e.g., fastening function in the case of ‘ASNA 2536’. 
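As a minimal illustration of such a designation-based association (and not of the reasoning process of [SLF∗13]), a naive lookup from part-number prefixes to elementary functions could be sketched as follows; the prefix table is hypothetical and only mirrors the 'ASNA 2536' example quoted above.

# Hypothetical mapping from part-number prefixes to an elementary function.
PREFIX_TO_FUNCTION = {
    "ASNA 2536": "fastening (Hi-Lite screw, 16mm diameter)",
    "XYZ 0000": "fastening (nut)",   # purely illustrative prefix, not a real designation
}

def elementary_function(part_number: str) -> str:
    """Return the elementary function suggested by a designation prefix, if any."""
    for prefix, function in PREFIX_TO_FUNCTION.items():
        if part_number.startswith(prefix):
            return function
    return "unknown"

print(elementary_function("ASNA 2536-05"))   # -> fastening (Hi-Lite screw, 16mm diameter)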
However, information about each component designation does not iden- 72Chapter 3: Proposed approach to DMU processing for structural assembly simulations Bolted junction Nut Counter nut Planar Support Planar Support Planar Support Threaded Link Cap-screw Initial Geometry Functional Interfaces Functional Designation Head Tightened Components Thread Shaft Figure 3.2: Structuring a DMU model with functional properties after analyzing the assembly geometric interfaces and assigning a functional designation to each component (from Shahwan et al. [SLF∗12, SLF∗13]). tify its relation with other components inside the scope of a given function, i.e., the geometric model of a component is not structured with respect to its function. How an algorithm can determine which component is attached to another one to form a junction? Which screw is associated with which nut? Additionally, there is a large range of screw shapes in CAD component libraries. How to identify specific areas on these screws through names only? Also, the word screw is not a functional designation; it does not uniquely refer to a function because a screw can be a set screw, a cap screw, . . . Therefore, to determine rigorously the functional designation of components, Shahwan et al. [SLF∗12, SLF∗13] inserted a qualitative reasoning process that can relate geometric interfaces up to the functional designation of components, thus creating a robust and automated connection between 3D geometric entities and functional designations of components. This is a bottom-up approach that fits with our current requirements. The complete description of a DMU functional enrichment can be found in [SLF∗13]. Figure 3.2 shows the result of the functional enrichment of the aeronautical root-joint use-case presented in Figure 1.6. Initially, the DMU was a purely geometric model. Now, it is enriched with the functional designation of components. Using the definition of Shahwan et al. [SLF∗13], a functional designation of a component is an unambiguous denomination that functionally distinguishes one class of components from another. It relates the geometric model of a component with its functional interfaces (derived from the conventional interfaces described in Section 1.3.2) and with the functional 73Chapter 3: Proposed approach to DMU processing for structural assembly simulations DMU 1 DMU 1 2 DMU 1 2 3 C1 C2 C3 C1 C2 C3 CAD + PLM Product Structure 3- Functional designations 1- PLM information 2- Assembly interfaces I1/3 I2/3 S1 S2 S3.1 C2 S3.2 C1 C3 Fct.1 C3 C1 C2 S1 S2 I1/3 S3.1 S3.1 I2/3 Graph of assembly interfaces and functions CAD + assembly interfaces + component functional designation Fct.1 C1 C2 C3 C1 : Plate C2 : Plate C3 : Screw C1 C2 S1 S2 Imprint of interface CAD + assembly interfaces C1 C2 S1 S2 I1/2 Graph of assembly interfaces Interface Figure 3.3: DMU enrichment process with assembly interfaces and component functional designations. interfaces of its functionally related components. The functional designation of a component binds its 3D model and functional interfaces to a symbolic representation of its functions. Regarding screws, illustrative examples of functional designations are: cap screw, locked cap screw, set screw, stop screw, . . . As a result, a component model as well as its geometric model gets structured. As illustrated in Figure 3.3, its B-Rep model contains imprints of its functional interfaces and geometric relationships with functional interfaces of functionally related components. 
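A toy version of the reasoning that answers these questions on a graph of typed interfaces is sketched below; the interface types, the component names and the single grouping rule are assumptions introduced for illustration and are far simpler than the qualitative reasoning of [SLF∗12, SLF∗13].

# Each interface links two components and carries a functional interpretation.
interfaces = [
    ("cap_screw", "nut",         "threaded link"),
    ("cap_screw", "counter_nut", "threaded link"),
    ("cap_screw", "plate_1",     "planar support"),   # under the screw head
    ("nut",       "plate_2",     "planar support"),
    ("plate_1",   "plate_2",     "planar support"),
]

def bolted_junctions(interfaces, screws=("cap_screw",)):
    """Group, around each screw, the nuts it is threaded into and the
    components it bears on, i.e., a candidate bolted junction."""
    junctions = {}
    for a, b, kind in interfaces:
        for screw, other in ((a, b), (b, a)):
            if screw not in screws:
                continue
            j = junctions.setdefault(screw, {"nuts": set(), "tightened": set()})
            if kind == "threaded link":
                j["nuts"].add(other)
            elif kind == "planar support":
                j["tightened"].add(other)
    return junctions

print(bolted_junctions(interfaces))
# -> {'cap_screw': {'nuts': {'nut', 'counter_nut'}, 'tightened': {'plate_1'}}} (set order may vary)

Note that plate_2 is only reached through the planar support of the nut, which is precisely why the reasoning must propagate over the interface graph instead of inspecting each component in isolation.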
The functional interfaces contain the lowest symbolic information describing the elementary functions of a component and each functional designation expresses uniquely the necessary relations between these elementary functions. Functional analysis and quantitative reasoning This enriched DMUs makes available information required to perform a functional analysis of the assembly being prepared for FEA. This analysis allows us to implement a quantitative reasoning which can be used to increase the robustness of the automation of shape transformations of components and their interfaces during an assembly preparation process. Geometric entities locating functional interfaces combined with the functional designation of each component enable the identification and location of groups of components to meet the requirements specified in Section 1.5.4. In the research work described in the following chapters, the fully enriched functional DMU 74Chapter 3: Proposed approach to DMU processing for structural assembly simulations Simulation Objectives Hypotheses Components Components’ interfaces Shape transformations Shape transformations : Affect : Interact Figure 3.4: Interactions between simulation objectives, hypotheses and shape transformations. with functional interfaces stands as input data to geometric transformations. Need to extend the DMU enrichment to a lower level: the component shape Thanks to a better extraction and functional enrichment of the geometric models of DMU components, new operators are able to identify components and their interfaces that will be subjected to shape transformations. However, this enrichment is not suffi- cient to determine the geometric areas to be idealized or transformed in the assembly (see Section 2.3). Prior to any shape transformation, we propose to extend this functional assembly enrichment up to the component level. This enrichment is driven by the component’s shape and its interaction with the simulation objectives and related hypotheses. The next section highlights the requirements of this enrichment approach. 3.4 Incorporating a morphological analysis during FEA pre-processing According to Chapter 1, component shapes involved in assembly simulation preparation processes interact with simulation objectives, hypotheses, and shape transformations applied to components and their interfaces. Figure 3.4 shows interactions between shape transformations and FEA modeling hypotheses. To be able to specify the shape analysis tools required, the interactions between shape transformations acting on components as well as on assemblies, on the one hand, and FEA hypotheses, on the other hand, should be formalized. The suggested analysis framework’s objective is the reconciliation of simulation hypotheses with geometric transformations. A morphological analysis driven by idealization needs As stated in Chapter 2, a morphological analysis dedicated to assembly components can improve the robustness of a geometric idealization process. 75Chapter 3: Proposed approach to DMU processing for structural assembly simulations The natural way would be to automate the user’s approach. Indeed, during the current generation of FE meshes, as explained in Section 1.5, this morphological analysis phase is conducted by the engineer, on each component, individually. Based on his, resp. her, own experience, the engineer visually analyzes the component shape and selects the areas to preserve, to suppress, or to modify. 
Troussier [Tro99] highlighted the lack of tools helping engineers to build and validate their models. She proposed to refer to previous case studies and make them available to the engineer when a new model has to be built. This knowledge capitalization-based method helps engineers analyze their models through the comparison of the current simulation target with respect to the previously generated simulation models. Indeed, referring to already pre-processed FEM enforces the capitalization principle set up. However, even if the engineer is able to formalize the simulation hypotheses, one dif- ficulty remains regarding the concrete application of the shape transformations derived from these hypotheses. A visual interpretation of the required geometric transformations, based on past experiences, is feasible for simple components with more or less the same morphology than previous models. A complex geometry contains numerous regions with their own specific simplification hypotheses. These regions can interact with each other, leading the engineer to reach compromises about the adequate model to generate. For example, many variants of mechanical interactions can appear in interface areas between sub-domains generated for idealized components (see Figure 2.15). It can be difficult for the engineer to get a global understanding of all the possible connections between medial surfaces. As long as no precise mechanical rule exists in these connection areas, each person could have his, resp. her, own interpretation of the hypotheses to apply there. When processing assembly configurations, as illustrated in Section 2.5, its assembly interfaces influence the idealization of the components interacting there. In the case of large assembly structures, on top of the huge amount of time required to analyze all the repetitive configurations, an engineer can hardly anticipate all the interactions between components. Such an interactive analysis process, using the engineer’s know-how, does not seem tractable. Beyond potential lessons learned from previous FEA cases and because current automatic analysis tools are not suited to engineers’ needs (see Section 2.4), it is of great interest to develop new automated shape analyzing tools in order to help engineers understand the application of their simplification hypotheses on new shapes. The following objectives derive from this target. 76Chapter 3: Proposed approach to DMU processing for structural assembly simulations 3.4.1 Enriching DMU components with their shape structure as needed for idealization processes A shape analysis-based approach derives from the information available upstream, i.e., the DMU geometry content before FEA pre-processing that reduces to CAD components. Due to the interoperability issue between CAD and CAE software (see Section 1.5.2), the prevailing practice extracts the B-Rep representation of each component. During component design, successive primitive shapes, or form features, are sequentially added into a construction tree describing a specific modeling process (see Section 1.2.2). This tree structure is editable and could be analyzed further to identify, in this construction tree, a shape closer to the FE requirements than the final one in order to reduce the amount of geometric transformations to be applied. Often, this modeling process relies on the technology used to manufacture the solid. From this perspective, a machined metal component design process differs, for example, from a sheet metal component one. 
This difference appears in CAD systems with different workshops, or design modules, targeting each of these processes. As an example, the solid model of a sheet metal component can be obtained from an initial surface using an offset operator. This surface is close to the medial surface that can be used as an idealized representation of this component. However, directly extracting a simpler shape from a construction tree is not a generic and robust procedure for arbitrary components. Such a procedure: • cannot eliminate all the geometric transformations required to generate a FE model; • is strongly dependent upon the modeling process of each component; • and is specific to each CAD software. Above all and independently of the extracted geometry, it is essential to analyze a component shape before applying it any geometric transformation. To achieve a robust shape processing, the component shape needs to be structured into regions that can be easily connected to the simulation hypotheses. Proposal of a volume segmentation of a solid as a component shape structure Following the recommendations of Section 2.4, we propose to set up a volume segmentation of a 3D solid to structure it as an enriched input model to generate a robust morphological analysis dedicated to mechanical simulations. As stated in the conclusion of Section 2.4, the generic methods for 3D object decomposition segment mesh1 models only. Volume decompositions of B-Rep models are 1In the computer graphics context. 77Chapter 3: Proposed approach to DMU processing for structural assembly simulations restricted to specific applications that do not cover the FEA needs. Here, the objective is set on a proposal of a robust segmentation of a B-Rep solids to enrich them. Because the robust generation of quality connections between idealized sub-domains is still a bottleneck of many approaches that process CAD solids for FEA, the proposed segmentation should incorporate the determination of interfaces between the volumes resulting from the segmentation. The proposed method is based on the generation of a construction graph from a B-Rep shape. This contribution is detailed in Chapter 4. 3.4.2 An automated DMU analysis dedicated to a mechanically consistent idealization process An analysis can cover multiple purposes: physical prediction, experimental correlation,. . . The proposed analysis framework is oriented toward the geometric issues about the idealization of assemblies for FEA. Section 2.3 has revealed that a major difficulty encountered by automated methods originates from their lack of identification of the geometric extent where simplification and idealization operators should be applied. Using the new structure of a component shape, our objective is placed on a morphological analysis process able to characterize the idealization transformations that can take place on a component shape. Therefore, this process should incorporate, during the pre-processing of DMU models, a set of operators that analyze the initial CAD geometry in order to connect it to simulation hypotheses and determine the geometric extent of these hypotheses. Chapter 5 is dedicated to this contribution. The objectives of this morphological analysis enumerate: • The identification of regions considered as details with respect to the simulation objectives. 
The DMU adapted to FEA should contain only the relevant geometric regions which have an influence on the mechanical behavior of the structure; • The identification of relevant regions for idealization compared to regions regarded as volumes. The morphology of a component has to be analyzed in order to determine the thin regions to be transformed into mid-surfaces and the long and slender regions to be transformed into beams. Also, this morphological analysis has to provide the engineer with a segmentation of components into volume sub-domains which have to be expanded into the whole assembly; • The characterization of interfaces between idealizable regions. These interfaces contain significant information regarding the interaction between idealizable regions. They are used to connect medial surfaces among each other. 78Chapter 3: Proposed approach to DMU processing for structural assembly simulations Finally, the DMU enrichment process is completed with assembly information as well as information about the shape structure of each component. Consequently, this enriched DMU is geometrically structured. It is now the purpose the next section to carry on setting up objectives to achieve a robust pre-processing from DMU to FEM. 3.5 Process proposal to automate and robustly generate FEMs from an enriched DMU Figure 3.5 summarizes the corresponding bottom-up approach proposed in this thesis. The first phase uses the methods Jourdes et al. [JBH∗14] and Shahwan et al. [SLF∗12, SLF∗13] to enrich the DMU with assembly interfaces and functional designations of components as recommended in Section 1.5.4. The initial CAD solids representing components are also enhanced with a volume decomposition as suggested in Section 2.4 to prepare a morphological analysis required to process the idealization hypotheses. The second phase analyses this newly enriched DMU to segment it in accordance with the engineer’s simulation objectives (see Section 3.4), i.e., to identify areas that can be idealized or removed when they are regarded as details. This results in the generation of a so-called contextualized DMU. Providing the engineer with a new contextualized DMU does not completely fulfill his, rep. her, current needs to create geometric models for structural analysis. Consequently, the proposed scheme should not only develop and validate methods and tools to structure and analyze a DMU up to its component level, but also contain processes to effectively generate FE assembly models. In the third phase, the functional and morphological analyses lead to the definition of the assembly transformation process as planed in the second phase, i.e., the transformation of groups of components including dimensional reduction operations. Exploiting the contextualized DMU, it is proposed to develop a two level adaption process of a DMU for FEA as follows: • One process is dedicated to standalone geometric component idealization. The objective of this new operator is the exploitation of the morphological analysis and hence, to provide the engineer with a robust and innovative approach to 3D shape idealization; • Another process extending the idealization operator to assembly idealization. This operator is a generalization of the standalone operator adapted to assembly transformation requirements. To implement this process, we set up a generic methodology taking into account the simulation requirements, the functional assembly analysis and assembly interfaces. 
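The morphological distinction invoked above, thin regions idealized as mid-surfaces and long, slender regions idealized as beams, can be caricatured by the sketch below, which classifies a volume sub-domain from three characteristic dimensions; the use of bounding-box extents and the slenderness ratio are illustrative assumptions, the actual criteria being developed in Chapter 5.

def classify_subdomain(dims, ratio=10.0):
    """Classify a volume sub-domain from its three characteristic dimensions.

    dims  : (d1, d2, d3) characteristic extents of the sub-domain
    ratio : minimum slenderness ratio to accept an idealization hypothesis
    """
    t, w, l = sorted(dims)                    # thickness <= width <= length
    if w >= ratio * t:                        # one small dimension: plate/shell
        return "plate/shell -> mid-surface"
    if l >= ratio * w:                        # two small dimensions: beam
        return "beam -> medial line"
    return "thick volume -> keep 3D solid"

for dims in [(2.0, 150.0, 300.0),    # thin panel
             (8.0, 10.0, 400.0),     # long stiffener
             (60.0, 80.0, 90.0)]:    # bulky lug
    print(dims, "->", classify_subdomain(dims))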
Figure 3.5: Proposed approach to generate a FEM of an assembly structure from a DMU. Phase 1 (DMU enrichment): PLM information, assembly interfaces, functional designations and the volume segmentation of 3D solids enrich the extracted DMU. Phase 2 (analysis): the functional assembly analysis and the morphological analysis produce a contextualized DMU for mechanical simulation. Phase 3 (DMU transformation): the settings of geometric transformation processes and the definition of assembly transformation processes, supported by an operator library, produce the DMU adapted to FEA and, finally, the FEM.

3.5.1 A new approach to the idealization of a standalone component

When components have to be fully idealized, their pre-processing requires the development of a robust idealization process containing a dimensional reduction operator associated with a robust operator that connects medial surfaces. As shown in Section 2.3, existing approaches face two issues:

• The extraction of a mid-surface/medial line from an idealized sub-domain. Current dimensional reduction operators focus directly on the generation of the mid-surface/medial line without having completely evaluated the idealization hypotheses and determined the sub-regions associated with these hypotheses;

• The connection of the set of extracted mid-surfaces/medial lines. Current operators encounter difficulties generating consistent idealized models in connection areas, i.e., regions which usually do not satisfy the idealization conditions.

To address these issues, we propose to analyze the morphology of the shape before applying a dimensional reduction operator. Therefore, this operator focuses on the extraction of medial surfaces only in the sub-domains morphologically identified as plate/shell models and on the extraction of medial lines in the sub-domains morphologically identified as beam models. Simultaneously, this morphological analysis is used to provide information on internal interfaces between sub-domains to be idealized. We propose to exploit this new information within the idealization operator to produce consistent geometric models, i.e., on-purpose idealization of sub-domains with on-purpose connections between them. This process is detailed in Section 5.5.

3.5.2 Extension to assembly pre-processing using the morphological analysis and component interfaces

The second required process addresses the transformation of assembly models. The proposed operators have to be applicable to volume sub-domains, which can originate from components or from a group of components.

Evolving the concept of details in the context of assembly structures

Section 2.2 has shown that the relationship between detail removal and idealization processes has not been investigated. The definition of details stated in [LF05, RAF11] addresses essentially volume domains and refers to the concept of discretization error that can be evaluated with a posteriori error estimators. Assemblies add another complexity to the evaluation of details. It is related to the existence of interfaces between components.
As illustrated in Section 1.5.4, interfaces 81Chapter 3: Proposed approach to DMU processing for structural assembly simulations are subjected to hypotheses to define their simulation model and Table 1.2 points out the diversity of mechanical models that can be expressed with simulation entities. Recently, Bellec [BLNF07] described some aspects of this problem. Yet, comparing the respective influences of rigid versus contact interface models is similar to the evaluation of idealization transformations: this is also a complex issue. The concept of detail, apart from referring to the real physical behavior of a product, is difficult to characterize for assembly idealization. The structural engineer’s knowhow is crucial to identify and remove them with interactive shape transformations. Benefiting from the morphological analysis of components’ shapes, another objective of this thesis is to provide the user with tools that show areas that cannot be regarded as details (see Sections 5.3.1 and 5.5.2). This way, the concept of details evolves from standalone component to assembly level pre-processing. Automated transformations of groups of components As explained in Section 1.5.4, the transformation of groups of components, e.g., junctions, by pre-defined FE simplified geometry, e.g., fasteners, is a top requirement to reduce the FEM preparation time. Focusing on these specific configurations, the main issue remains the robust identifi- cation of the components and assembly interfaces to be transformed. Another objective of this thesis is also to provide a robust operator to identify and transform configurations of groups of components involved in the same assembly function, which is detailed in Chapter 6. From the analysis of DMU transformation requirements for FE assembly model preparation [BLHF12], the proposed method relies on a qualitative reasoning process based on the enriched DMU as input. From this enriched model, it is shown that further enrichment is needed to reach a level of product functions where simulation objectives can be used to specify new geometric operators that can be robustly applied to automate component and assembly interface shape transformations. To prove the validity of this approach, Section 6.3 presents a template-based operator to automate shape transformations of bolted junctions. The method anticipates the mesh generation constraints around the bolts, which also minimizes the engineer’s involvement. 3.6 Conclusion This chapter has introduced the main principles and objectives of the proposed analysisoriented approach of DMU pre-processing for the simulation of FE structural assembly models. This approach covers: • The enrichment of the input geometry, both at 3D solid and assembly levels. 82Chapter 3: Proposed approach to DMU processing for structural assembly simulations It is critical to provide a volume decomposition of the geometric model of each component in order to access and fully exploit their shape. This structure is a good starting point for the identification of areas of interest for idealization hypotheses. At the assembly level, the DMU is enriched with geometric interfaces between its components, i.e., contacts and interferences, and with the functional designation of components; • The development of an analysis framework for the simulation of mechanical structures. 
From the enriched DMU model, the analysis framework can be used to specify geometric operators that can be robustly applied to automate component and interface shape transformations during an assembly preparation process. In accordance with the context of structural simulations, this framework evaluates the conditions of application of idealization hypotheses. It provides the engineer with the operators dedicated to shape adaption after idealizable volume subdomains have been identified. Also, after the areas considered as details have been identified and information about sub-domains interfaces have been added, the user’s modeling rules can be applied in connection areas; • The specification of geometric operators for the idealization of B-rep shapes and operators transforming groups of components, such as bolted junctions, bene- fiting from the previously structured DMU. Through the development of such operators, the proposed approach can be sequenced and demonstrated on aeronautical use-cases. The next chapters are organized in accordance with the proposed approach, as described in the current one. Chapter 4 details the B-Rep volume decomposition using the extraction of generative construction processes. Chapter 5 describes the concepts of the FEA framework using a construction graph to analyze the morphology of components and derive idealized equivalent models. Chapter 6 extends this approach to a methodology for assembly idealization and introduces a template-based operator to transform groups of components. 83Chapter 3: Proposed approach to DMU processing for structural assembly simulations 84Chapter 4 Extraction of generative processes from B-Rep shapes to structure components up to assemblies Following the global description of the proposed approach to robustly process DMUs for structural assembly simulation, this chapter exposes the principles of the geometric enrichment of components using a construction graph. This enrichment method extracts generative processes from a given B-Rep shape as a high-level shape description and represents it as a graph while containing all non trivial construction trees. Advantageously, the proposed approach is primitivebased and provides a powerful geometric structure including simple primitives and geometric interfaces between them. This high-level object description is fitted to idealizations of primitives and to robust connections between them and remains compatible with an assembly structure containing components and geometric interfaces. 4.1 Introduction Based on the analysis of DMU transformation requirements for FE assembly model preparation in Chapter 1 as well as the analysis of prior research work in Chapter 2, two procedures are essential to generate the mechanical model for the FEA of thin structures: • The identification of regions supporting geometric transformations such as simplifications or idealizations. In Section 1.4.2, the analysis of thin mechanical shell structures introduces a modeling hypothesis stating that there is no normal stress in the thickness direction. This hypothesis is derived from the shape of 85Chapter 4: Extraction of generative processes from B-Rep shapes the object where its thin volume is represented by an equivalent medial surface. The idealization process connects this hypothesis with the object shape. 
Section 2.3 illustrates that idealization operators require a shape analysis to check the idealization hypothesis on the shape structure and to delimit the regions to be idealized; • The determination of interface areas between regions to be idealized. Section 2.3 showed that the interface areas contain the key information to robustly connect idealized regions. In addition to idealizable areas, the determination of interfaces is also essential to produce fully idealized models of components. The proposed pre-processing approach, described in Chapter 3, is based on the enrichment of the input DMU data. More precisely, the B-rep representation of each CAD component has to be geometrically structured to decompose the complexity of their initial shape into simpler ones. At the component level, we propose to create a 3D solid decomposition into elementary volume sub-domains. The objective of this decomposition is to provide an efficient enrichment of the component shape input to apply the idealization hypotheses. This chapter is dedicated to a shape decomposition method using the extraction of a construction graph from B-Rep models [BLHF14b, BLHF14a]. Section 4.2 justifies the extraction of generative construction processes suited for idealization processes1. Section 4.3 sets the modeling context and the hypotheses of the proposed approach. Section 4.4 describes how to obtain generative processes of CAD components, starting from the identification of volume primitives from a B-Rep object to the removal process of these primitives. Finally, Section 4.5 defines the criteria to select the generative processes generating a construction graph for idealization purposes. This construction graph will be used in Chapter 5 to derive idealized models. In a next step, the component perspective is extended to address large assembly models. Consequently, the segmentation approach is analyzed with respect to CAD assembly representation in Section 4.7. 4.2 Motivation to seek generative processes This section presents the benefits of modeling processes to structure a B-Rep component. It shows the limits of CAD construction trees in mechanical design and explains why it is mandatory to set-up generative processes adapted to idealization processes. 1 generative processes represent ordered sequences of processes emphasizing the shape evolution of the B-Rep representation of a CAD component. 86Chapter 4: Extraction of generative processes from B-Rep shapes 4.2.1 Advantages and limits of present CAD construction tree As observed in Section 1.2.3, a mechanical part is progressively designed in a CAD software using successive form features. This initial generation of the component shape can be regarded as the task where the component shape structure is generated. Usually, this object structure is described by a binary construction tree containing the elementary features, or primitives, generating the object. This construction tree is very efficient to produce a parameterized model of a CAD object. Effectively, the user can easily update the shape of the object when modifying parameters defined within a user-selected feature and then, re-processing the subset of the construction tree located after this primitive. As illustrated in Section 2.4, Robinson et al. [RAM∗06] show that a construction tree with adapted features for FEA can be used to easily generate idealized models. 
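As a minimal, self-contained illustration of this feature-based, binary construction scheme, the following Python sketch chains an additive extrusion and a material-removal extrusion using the pythonocc bindings of the Open CASCADE library mentioned in Section 4.6. The contour coordinates, the dimensions and the helper extrude_contour are purely illustrative assumptions, not part of the thesis implementation, and the module paths assume a recent pythonocc-core release.

# Minimal sketch (assumed pythonocc-core API): a two-node binary construction
# "from simple to complicated": a base extrusion, then a pocket removed from it.
from OCC.Core.gp import gp_Pnt, gp_Vec
from OCC.Core.BRepBuilderAPI import BRepBuilderAPI_MakePolygon, BRepBuilderAPI_MakeFace
from OCC.Core.BRepPrimAPI import BRepPrimAPI_MakePrism
from OCC.Core.BRepAlgoAPI import BRepAlgoAPI_Cut

def extrude_contour(points, height):
    # Sketch a closed polygonal contour in the z = 0 plane and extrude it along z.
    polygon = BRepBuilderAPI_MakePolygon()
    for x, y in points:
        polygon.Add(gp_Pnt(x, y, 0.0))
    polygon.Close()
    face = BRepBuilderAPI_MakeFace(polygon.Wire()).Face()
    return BRepPrimAPI_MakePrism(face, gp_Vec(0.0, 0.0, height)).Shape()

# Tree node 1: base plate (additive extrusion, the root primitive).
base = extrude_contour([(0, 0), (60, 0), (60, 40), (0, 40)], 10.0)
# Tree node 2: pocket extrusion combined with the base by material removal.
pocket = extrude_contour([(15, 10), (45, 10), (45, 30), (15, 30)], 10.0)
part = BRepAlgoAPI_Cut(base, pocket).Shape()

Each additional feature adds one more node to such a binary tree, which is precisely what makes deep trees with parent/child dependencies hard to reuse for FEM pre-processing.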
Figure 4.1: An example of a shape generation process: (a) final object obtained after 34 modeling steps and viewed from top (T) and bottom (B), (b) some intermediate shapes obtained after the i-th modeling step. The letter T or B appearing with the step number indicates whether the object is viewed from top or bottom.
However, the construction tree produced during the design phase may not be suited for the shape decomposition taking place at other stages of a PDP, e.g., during process planning and FEA. Three main issues prevent the use of current CAD construction trees for FEM pre-processing:
• The complexity of the final object shape and feature dependencies. The concept of feature-based design eases the generation of an object shape by adding progressively, and one by one, simple form features. This way, the user starts from a simple solid, i.e., a primitive, and adds or removes volumes using pre-defined features (extrusion, revolution, sweeping, . . . ) one after the other until he, resp. she, has reached the desired shape of the object. As a consequence of this principle 'from simple to complicated', the resulting construction tree can be complex and contain numerous features. Because the user inserts one form feature at a time, the construction tree is necessarily binary, i.e., each tree node contains one form feature and the object shape obtained after combining this form feature with the object resulting from the previous tree node. As an example, Figure 4.1 illustrates this configuration with a rather complex component where the user's entire modeling process consists of 37 steps, some of them containing multiple contours producing several features simultaneously. Two views, defined as top and bottom in Figure 4.1b, show the major details of the object shape. Figure 4.1a depicts some of the 34 steps involving either extrusion or revolution operations and incorporating either material addition or removal as complementary effects when shaping this object. The parent/child dependencies between form features further increase the complexity of this construction process. The suppression or modification of a parent feature is not always possible due to geometric inconsistencies generated in subsequent tree steps, when parent/child dependencies cannot be maintained or when the object boundary cannot produce a solid. This is particularly inconvenient in the context of FEM pre-processing, which aims at eliminating detail features to simplify component shapes;
• The non-uniqueness and user dependence. Construction trees are not unique, i.e., different users often generate different construction trees for the same final shape. The choice of the sequence of features is made by the designer and depends on his, resp. her, own interpretation of the shape structure of the final object. In current industrial practice, specific modeling rules limit the differences in construction tree generation but they are not dedicated to FEM requirements, as explained in Section 3.3;
• The construction tree availability. Construction trees contain information which is very specific to each CAD system, and each software has its own data structures to represent this construction scheme.
Most of the time, this information is lost when transferring objects across CAD systems or even across the 88Chapter 4: Extraction of generative processes from B-Rep shapes different phases of a PDP. Practically, when using STEP neutral format [ISO94, ISO03], definition of construction tree structures associated with parametric modeling are not preserved. Indeed, to edit and to modify a shape, the parametric relations taking part to the construction tree would need also to be exported. This is difficult to obtain, e.g., even during the upgrade of CATIA software [CAT14] from V4 to V5, the transfer was not fully automatic and some information in construction trees was lost. As it appears in Figure 4.1, a shape generative process can be rather complex and, even if it is available, there is no straightforward use or transformation of this process to idealize this object (even though its shape contains stiffeners and thin areas that can be modeled with plate or shell elements rather than volume ones). With respect to the idealization objectives, it appears mandatory to set-up another generative process that could incorporate features or primitives with their shapes being close enough to that of stiffeners and thin wall areas. 4.2.2 A new approach to structure a component shape: construction graph generation Construction trees are important because an object submitted to a FEA preparation process can be subjected to different simplifications at different levels of its construction process. One key issue of these trees is their use of primitives that are available in common industrial CAD software. Another problem lies in the storage of the shape evolution from the initial simple primitive shape to the final object. This principle ‘from simple to complicated’ matches the objective of using the tree contents for idealization and simplification purposes. Indeed, obtaining simpler shapes through construction processes could already reduce the number of geometric transformations. However, because the CAD construction tree is presently unique, not always available and complicated to modify, its use is difficult. The proposed approach here, structuring the shape of a component, consists in producing generative processes of its B-Rep model that contain sets of volume primitives so that their shapes are convenient for FE simulation. The benefits of extracting new generative processes, as ordered sequences of processes emphasizing the shape evolution of the B-Rep representation of a CAD component, are: • To propose a compact shape decomposition adapted to the idealization objectives. Extraction of compact generative processes aims at reducing their complexity while getting a maximum of information about their intrinsic form features. The proposed geometric structure decomposes an object into volume sub-domains which are independent from each other and close enough to regions that can be idealized. This segmentation method differs from the divide and conquer approaches of [Woo14] because generative processes contain volume prim- 89Chapter 4: Extraction of generative processes from B-Rep shapes itives having particular geometric properties, e.g., extruded, revolved or swept features. The advantage of these primitives is to ease the validation of the idealization criteria. Indeed, one of their main dimensional characteristics is readily available. For example, the extrusion distance, revolve angle or sweep curve, reduces the primitive analysis to the 2D sketch of the feature. 
Moreover, interfaces between primitives are identified during the extraction of each generative process to extend the analysis of individual primitives through the whole object and to enable the use of the engineer’s FE connection requirements between idealizable regions; • To offer the user series of construction processes. In a CAD system, a feature tree is the unique available definition of the component’s construction but is only one among various construction processes possible. Furthermore, in a simulation context, it is difficult to get an adequate representation of the simulation model which is best matching the simulation requirements because the engineer’s know-how takes part to the simulation model generation (see Section 3.4). Consequently, the engineer may need to refer to several construction processes to meet the simulation objectives as well as his, resp. her, shape transformation requirements. Providing the engineer with several construction processes helps him, resp. her, generate easily a simulation model. The engineer will be able to navigate shape construction processes and obtain a simpler one within a construction tree that meets the idealization requirements in the best possible way; • To produce a shape parameterization independent from any construction tree. The construction tree is a well-known concept for a user. Parent/child dependencies between features, generated through sketches and their reference planes, ease the interactive design process for the user but creates geometric dependencies difficult to understand for the user. Performing modifications of a complex shape remains difficult to understand for the user, e.g., the cross influence between features located far away in the construction tree is not easy to anticipate. Considering the generation of a construction graph, the proposed approach does not refer to parent/child dependencies between features, i.e., features are geometrically independent each other. The morphological analysis required (see Section 3.4) does not refer to a need for a parameterized representation of components, i.e., each primitive does not need to refer to dependencies between sketch planes. Components can be modified in a CAD software and their new geometry can be processed again to generate a construction graph without referring to the aforementioned dependencies. As a conclusion, one can state that enabling shape navigation using primitive features similar to that of CAD software is an efficient complement to algorithmic approaches reviewed in Chapter 2 and construction trees. More globally, construction graphs can support both efficiently. 90Chapter 4: Extraction of generative processes from B-Rep shapes Shape modeling processes Segmentation CAD Mesh Idealization M M-1 M-2 Pi(M/M ) -1 Pi(M /M ) -1 -2 Pi(M ) -2 Figure 4.2: An example of shape analysis and generative construction graph. To this end, this chapter proposes to extract a construction graph from B-Rep CAD models so that the corresponding generative processes are useful for mechanical analysis, particularly when idealization processes are necessary. The graph is extracted using a primitive removal operator that simplifies progressively the object’s shape. One could says that the principle is to go ‘backward over time’. This characteristic of construction trees is consistent with the objective of simplification and idealization because the shapes obtained after these operations should get simpler. 
Figure 4.2 illustrates the extraction of a shape modeling process of a CAD component. Primitives Pi are extracted from a sequence of B-Rep objects Mi which become simpler over time. The set of primitives Pi generates a segmented representation of the initial object which is used to derive idealized FE models. The following sections detail the whole process of extraction of generative processes from B-Rep shapes. 4.3 Shape modeling context and process hypotheses Before describing the principles of the extraction of generative processes from a BRep shape, this section sets the modeling context and the hypotheses of the proposed approach. 4.3.1 Shape modeling context As a first step, the focus is placed on B-Rep mechanical components being designed using solid modelers. Looking at feature-based modeling functions in industrial CAD systems, they all contain extrusion and revolve operations which are combined with addition or removal of volume domains (see Figure 4.3a). The most common version of the extrusion, as available in all CAD software, is defined with an extrusion direction 91Chapter 4: Extraction of generative processes from B-Rep shapes variant with material removal material blend as part of addition extrusion contour blend variable radius V H Extrusion Contour (a) (b) (c) (d) Slanted extrusion Cutting Plane Figure 4.3: (a) Set of basic volume modeling operators, (b) sketch defining an extrusion primitive in (a), (c) higher level volume primitive (slanted extrusion), (d) reference primitive and its first ‘cut’ transformation to generate the object in (c). orthogonal to a plane containing the primitive contour. Such an extrusion, as well as the revolution, are defined here as the reference primitives. These feature-based B-Rep operations can be seen as equivalent to regularized Boolean operations as available also in common hybrid CAD modelers, i.e., same primitive shapes combined with union or subtraction operators. Modelers also offer other primitives to model solids, e.g., draft surfaces, stiffeners, or free-form surfaces from multiple sections. Even though we don’t address these primitives here, it is not a limitation of our method. Indeed, draft surfaces, stiffeners, and similar features can be modeled with a set of reference primitives when extending our method to extrusion operations with material removal and revolutions. Appendix B illustrates the simple features and boolean operations available in CAD software and show that it can mainly be reduced to additive/removal extrusion/revolution features in order to cover the present software capabilities . Figure 4.3c illustrates some examples, e.g., an extrusion feature where the extrusion direction is not orthogonal to the sketching plane used for its definition. However, the resulting shape can be decomposed into an extrusion orthogonal to a sketching plane and ‘cuts’ (see Figure 4.3d) if the generation of a slanted extrusion is not available or not used straightforwardly. Indeed, these construction processes are equivalent with respect to the resulting shape. Another category of form features available from B-Rep CAD modelers are blending radii. Generally, they have no simple equivalence with extrusions and revolutions. Generated from B-Rep edges, they can be classified into two categories: 1- constant radius blends that can produce cylindrical, toroidal or spherical surfaces; 92Chapter 4: Extraction of generative processes from B-Rep shapes 2- constant radius blends attached to curvilinear edges and variable radius blends. 
Category 1 blends include extrusion and revolution primitives and can be incorporated in their corresponding sketch (see Figure 4.3a). This family of objects is part of the current approach. Category 2 blends are not yet addressed and are left for future work. Prior work in this field [VSR02, ZM02, LMTS∗05] can be used to derive M from the initial object MI to be analyzed, possibly with user’s interactions. In summary, all reference primitives considered here are generated from a sketching step in a plane defining at least one closed contour. The contour is composed of line segments and arcs of circles, (see Figure 4.3b). This is a consequence of the previous hypothesis reducing the shapes addressed to closed surfaces bounded by planes, cylinders, cones, spheres, tori, and excluding free-form shapes in the definition of the object boundary. This is not really restrictive for a wide range of mechanical components except for blending radii. The object M to be analyzed for shape decomposition is assumed to be free of blending radii and chamfers that cannot be incorporated into sketched contours. The generative processes are therefore concentrated on extrusion primitives, in a first place, in order to reduce the complexity of the proposed approach. Further hypotheses are stated in the following sections. 4.3.2 Generative process hypotheses Given a target object M to be analyzed, let us first consider the object independently of the modeling context stated above. M is obtained through a set of primitives combined together by adding or removing material. Combinations of primitives thus create interactions between their bounding surfaces, which, in turn, produce intersection curves that form edges of the B-Rep M. Consequently, edges of M contain traces of generative processes that produced its primitives. Hence, following Leyton’s approach [Ley01], these edges can be seen as memory of generation processes where primitives are sequentially combined. Current CAD modelers are based on strictly sequential processes because the user can hardly generate simultaneous primitives without looking at intermediate results to see how they combine/interact together. Consequently, B-Rep operators in CAD modelers are only binary operators and, during a design process, the user-selected one combines the latest primitive generated to the existing shape of M at the stage t of this generative process. Additionally, CAD modelers providing regularized Boolean operators reduce them to binary operators, even though they are n-ary ones, as classically defined in the CSG approaches [Man88]. Here, the proposed approach does not make any restriction on the amount of primitives possibly generated ‘in parallel’, i.e., the arity of the combination operators is n ≥ 2. The generated processes benefit from this hypothesis by compacting the construction trees nodes. This property is illustrated in the result Section 4.6 of this chapter in Figure 4.22. 93Chapter 4: Extraction of generative processes from B-Rep shapes d (a) (b) Top Bottom (c) IG (face 2) contour edge (convex) lateral face (transparent) Fb2 contour edge (concave) Fb1 (transparent) IG (face 1) fictive lateral edge lateral edge Attachment contour( , ) Fb1 Fb2 Figure 4.4: a) Entities involved in an extrusion primitive. Visible extrusion feature with its two identical base faces F b1 and F b2. (b) Visible extrusion feature with its two different base faces F b1 and F b2. (c) Visible extrusion feature with a unique base face F b1 (detail of Figure 4.1a - 34B). 
Hypothesis 1: Maximal primitives
The number of possible generative processes producing M can be arbitrarily large, e.g., even a cube can be obtained from an arbitrarily large number of extrusions of arbitrarily small extent combined together with a union operator. Therefore, the concept of maximal primitives is introduced so that the number of primitives is finite and as small as possible for generating M. A valid primitive Pi identified at a stage t using a base face Fb1 is said to be maximal when no other valid primitive Pj at that stage having Fb1 as base face can be entirely inserted in Pi (see Section 4.4.2 and Figure 4.4a): ∀Pj, Pj ⊄ Pi. Fb1 is a maximal face as defined in Section 4.3.3. Maximal primitives imply that the contour of a sketch can be arbitrarily complex, which is not the case in current engineering practice, where the use of simple primitives eases the interactive modeling process, the parameterization, and geometric constraint assignments to contours. The concept of maximal primitive is analogous to the concept of maximal volume used in [WS02, Woo03, Woo14]. Yet, this concept is not used in feature recognition techniques [JG00]. Even if making use of maximal primitives considerably reduces the number of possible generative processes, they are far from being unique for M.
Hypothesis 2: Additive processes
Figure 4.5: Illustrations of two additive primitives: (a) an extrusion primitive and (b) a revolution one. The mid-surfaces of both primitives lie inside their respective volumes.
We therefore make the further hypothesis that the generative processes we are looking for are principally of additive type, i.e., they are purely based on a regularized Boolean union operator when combining primitives at each stage t of generative modeling processes. This hypothesis is particularly advantageous when intending to tailor a set of generative processes that best fit the needs of idealization processes. Indeed, idealized structures, such as mid-surfaces, lie inside such primitives, and connections between primitives also locate the connections between their idealized representatives. Figure 4.5 illustrates an extrusion primitive and a revolution primitive. For both of them, the 3D solid of the primitive includes its mid-surface. Therefore, the idealized representation of M can be essentially derived from each Pi and its connections, independently of the other primitives in case of additive combinations. Figure 4.6b gives an example where M can be decomposed into two primitives combined with a union. M can thus be idealized directly from these two primitives and their interface. On the contrary, when allowing material removal, idealization transformations are more complex to process, while the resulting volume shapes are identical. Figure 4.6c shows two primitives which, combined by Boolean subtraction, also result in object (a). However, computing an idealization of (a) by combining idealizations of its primitives in (c) is not possible. Performing the idealization of M from its primitives strengthens this process compared to previous work on the idealization of solids [CSKL04, Rez96, SSM∗10] presented in Section 2.3.1, for two reasons. Firstly, each Pi and its connections bound the 3D location of, and the connections with, other idealized primitives.
Secondly, different categories of connections can be defined, which is important because idealization processes still rely on the user’s know-how to process connections significantly differing from reference ones. The next Chapter 5 explains in details how to connect mid-surfaces using a taxonomy of connections between extrusion primitives. Hypothesis 3: Non trivial variants of generative processes To further reduce the number of possible generative processes, the processes described should be non trivial variants of processes already identified. For example, the 95Chapter 4: Extraction of generative processes from B-Rep shapes (a) (b) (c) + - OR Figure 4.6: a) Simple shape with idealizable sub-domains, (b) Primitives to obtain (a) with an additive process, (c) Primitives to obtain (a) with a removal process. same rectangular block can be extruded with three different face contours and directions but they create the same volume. Two primitives generating the same shape are considered as the same non-trivial primitive. If the resulting shape of two processes at the jth-level of a construction is the same then, these two processes are said equivalent and are reduced to a single one and the object shape at the (j−1)th-level is also unique. These equivalent processes can be detected when comparing the geometric properties of the contours generating this same shape. Other similar observations will be addressed in the following sections when describing the criteria to select meaningful generative processes. The above hypotheses aim at reducing the number of generative processes producing the same object M while containing primitives suited to idealization transformations, independently of the design process initially set up by engineers. Conclusion The overall approach can be synthesized through the process flow of Figure 4.7. The input STEP file contains the B-Rep model M. A set of generative processes is extracted that form sets of construction trees, possibly producing a graph. To this end, application dependent criteria are used to identify one or more construction trees depending on the application needs. Here, we focus on criteria related to idealization for FEA. 4.3.3 Intrinsic boundary decomposition using maximal entities In order to extract generative processes from the B-Rep decomposition of M, it is important to have a decomposition of M, i.e., topology description, that is intrinsic to 96Chapter 4: Extraction of generative processes from B-Rep shapes Selection of generative process(es) Application dependent criteria STEP file (input Model) Generation of generative process graph Shape transformations (idealization) Figure 4.7: Pipeline producing and exploiting generative shape processes. its shape. A B-Rep decomposition of an object is however not unique (and thus not suitable), because it is subjected to two influences: • Its modeling process, whether it is addressed forward during a design process or backward as in the present work. Indeed, each operation involving a primitive splits/joins boundary faces and edges of the solid. When joining adjacent faces or edges, their corresponding surfaces or curves can be identical. Their decomposition is thus not unique. However, CAD modelers may not merge the corresponding entities, thus producing a boundary decomposition that is not changing the object shape (see Figure 4.8a). 
For the proposed approach purposes, such configurations of faces and edges must lead to a merging process so that the object boundary decomposition is unique for a given shape, i.e., it is intrinsic to the object shape; • The necessary topological properties to setup a consistent paving of an object boundary, i.e., the boundary decomposition must be a CW-complex. Consequently, curved surfaces need to be partitioned. As an example, a cylinder is decomposed into two half cylinders in most CAD modelers or is described with a self connected patch sewed along a generatrix (see Figure 4.8b). In either case, the edge(s) connecting the cylindrical patches are adjacent to the same cylindrical surface and are not meaningful from a shape point of view. Hence, for the proposed approach purposes, they must not participate to the intrinsic boundary decomposition of the object. Following these observations, the concepts of maximal faces and edges introduced by [FCF∗08] is used here as a means to produce an intrinsic and unique boundary decomposition for a given object M. Maximal faces are identified first. For each face of M, a maximal face F is obtained by repeatedly merging an adjacent face Fa sharing a common edge with F when Fa is a surface of same type and same parameters than F, i.e., same underlying surface. F is maximal when no more face Fa can be merged with F. Indeed, maximal faces coincide with ‘c-faces’ defined in [Sil81] that have been proved to uniquely defined M. Similarly, for each edge of M, a maximal edge E with 97Chapter 4: Extraction of generative processes from B-Rep shapes (a) (b) Couples of faces that can be merged Figure 4.8: Examples of configurations where faces must be merged to produce a shapeintrinsic boundary decomposition: (a) face decomposition due to the modeling process, (b) face decomposition due to topological requirements. adjacent faces F1 and F2 is obtained by repeatedly merging an adjacent edge Ea when Ea is also adjacent to F1 and F2. Again, E is maximal when no more edge Ea can be merged with E. As a consequence of these merging processes, it is possible to end up with closed edges having no vertex or with closed faces having no edge. An example for the first case is obtained when generating the maximal face of the cylinder in Figure 4.8b. A sphere described with a single face without any edge and vertex is an example for the second case. Because of maximal edges without vertices and faces without edges, merging operations are performed topologically only, i.e., the object’s B-Rep representation is left unchanged. Maximal faces and edges are generated not only for the initial model M but also after the removal of each primitive when identifying the graph of generative processes. Consequently, maximal primitives (see Hypothesis 1) are based on maximal faces and edges even if not explicitly mentioned throughout this document. Using the concept of maximal faces and edges the final object decomposition is independent of the sequence of modeling operators. 4.4 Generative processes Having define the modeling hypotheses and context in the previous Section 4.3, this section presents the principles of the construction of generative processes from B-Rep object. It explains how the primitives are identified and how to remove them from an object M. 4.4.1 Overall principle to obtain generative processes Preliminary phase As stated in Section 4.3.1, a preliminary step of the method is to transform it into a blending radii-free object M. 
To this end, defeaturing functions available in most CAD 98Chapter 4: Extraction of generative processes from B-Rep shapes Identification of extrusion primitives Pi Going back over time Generating M from primitives Object M M®M’ Removal of primitives from M’ M’ empty? No Yes End Set of construction trees producing M Set of extrusion primitives Pi Construction graph of primitives Figure 4.9: Overall scheme to obtain generative processes. Object M End Object M-1 Object M-2 Removal of Pi Identification of extrusion primitives Pi in M Identification of extrusion primitives Pi in M-1 Identification of extrusion primitives Pi in M-2 Pi Removal of Pi Figure 4.10: An example illustrating the successive identification and removal of primitives. systems are applied. This operation is a consequence of the modeling context defined in Section 4.3.1. Even though these functions may not be sufficient and robust enough, this is the current working configuration. In contrast to blending radii, most chamfers are included in the present approach because they can be part of extrusion primitives and hence, included in the sketched contours used to define extrusion primitives. Even if CAD software provide specific functions for chamfers, they are devoted to the design context but basic operators of extrusion with material addition or removal could produce the same result, in general. This analysis regarding chamfers shows the effect of the concept of maximal primitives (see Hypothesis 1). Main phase Starting with the solid M, the generative processes are obtained through two phases: 99Chapter 4: Extraction of generative processes from B-Rep shapes • M is processed by iterative identification and removal of primitives. The objective of this phase is to ‘go back in time’ until reaching root primitives for generative processes. The result of this phase is a set of primitives; • Based on hypotheses of Section 4.3.2, a set of generative processes is produced using the primitives obtained at the end of the first phase to meet the requirements of an application: here idealization (see Chapter 5). Finally, the decomposition D of M into extrusion primitives is not limited to a single construction tree but it produces a construction graph GD iteratively generated from M. GD contains all possible non trivial construction trees of M (see Hypothesis 3). The process termination holds whenever M is effectively decomposable into a set of extrusion primitives. Otherwise, D is only partial and its termination produces either one or a set of volume partitions describing the most simplest objects D can reach. Figure 4.9 summarizes the overall scheme just described previously. When generating GD, we refer to M = M0 and evolutions M−j of it backward at the jth step of D. Figure 4.10 illustrates the major steps of the extraction of a generative process graph, i.e., from the primitive identification up to its removal from M, and will be further explained in Sections 4.4.2 and 4.4.3. 4.4.2 Extrusion primitives, visibility and attachment In order to identify extrusion primitives Pi in M = M0 and evolution M−j of it, backward at the jth step of the generation of the generative process graph, it is mandatory to define its geometric parameters as well as the hypotheses taken in the present work (see Figure 4.4). First of all, let us notice that a ‘reference primitive’ Pi is never appearing entirely in M or M−j unless it is isolated like a root of a construction tree, i.e., Pi = M or Pi = M−j . 
Apart from these particular cases, Pi are only partly visible, i.e., not all faces of Pi are exactly matching faces of M−j . For simplicity, we refer to such Pi as ‘visible primitives’. Pi is the memory of a generative process that took place between M−j and M−(j+1). Extracting Pi significantly differs compared to feature recognition approaches [Rez96, LGT01, WS02, SSM∗10, JG00]. In feature recognition approaches, Pi is identified through validity constraints with its neighboring attachment in M, i.e., faces and edges around Pi. These constraints limits the number of possible primitives by looking to the best interpretation of some visible boundaries of the object M. Here, identifying visible primitives enables the generation of reference ones having simpler contours. Only the visible part of the primitive is used to identify the primitive in M, without restricting the primitive to the visible boundaries of M. The proposed identi- fication process of Pi is more general, it does not integrate any validity constraint on 100Chapter 4: Extraction of generative processes from B-Rep shapes the attachment of Pi with M. This constraint released, this process enables the identi- fication of a greater number of primitives which can be compared with each other not only through their attachment to M but also through their intrinsic shape complexity. Definition of the primitive The parameters involved in a reference extrusion Pi are the two base faces, F b1 and F b2, that are planar and contain the same sketched contour where the extrusion takes place. Considering extrusions that add volume to a pre-existing object, the edges of F bi are called contour edges which are all convex. Indeed, Pi being standalone primitive, all its contour edges are convex. A convex edge is such that the outward normals of its adjacent faces define an angle α where: 0 < α < π. When Pi belongs to M−j , the contour edges along which Pi is attached to M−j can be either convex or concave depending on the neighborhood of Pi in M−j (see Figure 4.4a). In the direction d of the extrusion, all the edges are straight line segments parallel to each other and orthogonal to F bi. These edges are named lateral edges. Faces adjacent to F bi are called lateral faces. They are bounded by four edges, two of them being lateral edges. Lateral edges can be fictive lateral edges when a lateral face coincides with a face of M−j adjacent to Pi (see Figure 4.4a). When lateral faces of Pi coincide with adjacent faces in M−j , there cannot be edges separating Pi from M−(j+1) because of the definition of maximal faces. Such a configuration refers to fictive base edges (see Figure 4.11 with the definition of primitive P1). Principle of primitive identification: Visibility The visibility of Pi depends on its insertion in M−j and sets the conditions to identify Pi in ∂M−j 2. An extrusion primitive Pi can be visible in different ways depending on its insertion in a current object M−j . The simplest visibility is obtained when Pi’s base faces F bi in M−j exist and when at least one lateral edge connects F bi in M−j (see Figure 4.4a and 4.11(step1)). More generally, the contour of F b1 and F b2 may differ from each other (see Figure 4.4b) or the primitive may have only one base face F b1 visible in M−j together with one existing lateral edge that defines the minimal extrusion distance of F b1 (see Figure 4.4c). 
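Before stating the visibility hypotheses, the entities just listed can be summarized by a small data record; the following Python sketch is only a schematic view of what a primitive carries, together with a convexity test for its contour edges. The field names, the orientation convention and the helper signature are assumptions made for illustration, not the thesis data model.

# Schematic record for a reference extrusion primitive (illustrative only).
from dataclasses import dataclass, field

@dataclass
class ExtrusionPrimitive:
    base_face_1: object = None          # Fb1: planar face carrying the sketched contour
    base_face_2: object = None          # Fb2: may be only partly visible, or absent (Fig. 4.4c)
    direction: tuple = (0.0, 0.0, 1.0)  # extrusion direction d, orthogonal to Fb1
    length: float = 0.0                 # extrusion distance along d
    lateral_faces: list = field(default_factory=list)
    interface_faces: list = field(default_factory=list)  # attachment IG to M-(j+1)

def is_convex_edge(n1, n2, edge_tangent):
    # Convexity test for an edge shared by two faces with outward unit normals
    # n1, n2: the edge is convex when the faces open outwards (0 < alpha < pi).
    # The sign is read from (n1 x n2) against the edge tangent; the convention
    # that the faces are ordered consistently with the tangent is an assumption.
    cross = (n1[1] * n2[2] - n1[2] * n2[1],
             n1[2] * n2[0] - n1[0] * n2[2],
             n1[0] * n2[1] - n1[1] * n2[0])
    return sum(c * t for c, t in zip(cross, edge_tangent)) > 0.0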
Our two hypotheses on extrusion visibility thus state as follows: • First, at least one base face F bi is visible in M−j , i.e., the contour of either F b1 or F b2 coincides with a subset of the attachment contour of Pi in M−j ; • Second, one lateral edge exists that connects F bi in M−j . This edge is shared by two lateral faces and one of its extreme vertices is shared by F bi. 2∂M−j is the boundary of the volume object M, i.e., the B-Rep representation 101Chapter 4: Extraction of generative processes from B-Rep shapes d d P1 P2 Primitive to Remove Interface Identical Faces Volume to remove from Primitive Reduced Primitive Included in Solid Simplified Solid 1 - Find Extrusion Primitives 2 - Keep included Primitives 3 - Find Interfaces 4 - Remove Primitives from Solid Fb1 contour edge P1 P2 P1 P1 Not Included in Solid Figure 4.11: An example illustrating the major steps to identify a primitive Pi and remove it from the current model M−j . Pi is entirely defined by F bi and the extrusion length obtained the maximum length of the generatrix of Pi extracted from its lateral faces partly or entirely visible in M−j . Notice that the lateral edges mentioned may not be maximal edges when lateral faces are cylindrical because maximal faces may remove all B-Rep edges along a cylindrical area. These conditions of definition of extrusion distance restricts the range of extrusion primitives addressed compared to the use of the longest lateral segment existing in the lateral faces attached to F bi. However, it is a first step enabling to address a fair set of mechanical components and validate the major concepts of the proposed approach. This generalization is left for future work. Figure 4.4b, c give examples involving two or one visible base faces, respectively. Attachment An extrusion primitive Pi is attached to M−j in accordance to its visibility in M−j . The attachment defines a geometric interface, IG, between Pi and M−(j+1), i.e., IG = Pi ∩ M−(j+1). This interface can be a surface or a volume or both, i.e., a nonmanifold model. One of the simplest attachments occurs when Pi has its base faces F b1 and F b2 visible. This means that Pi is connected to M−(j+1) through lateral faces only. Consequently, IG is a surface defined by the set of lateral faces not visible in Pi. Figure 4.4a illustrates such a type of interface (IG contains two faces depicted in yellow). Simple examples of attachment IG between Pi and M−(j+1) are given in Figure 4.4. 102Chapter 4: Extraction of generative processes from B-Rep shapes (b) Pi M-(j+1) IG S e1 e2 (a) Pi M-(j+1) IG Volume Interface Figure 4.12: Example of geometric interface IG between Pi and M−(j+1): (a) surface type, (b) volume type. M-j Valid Primitives Primitive (Pi) Non Valid Primitive Direction of extrusion (a) (b) Fb1 Figure 4.13: Collection of primitives identified from M−j : (a) Valid primitives included in M−j , (b) invalid primitive because it is not fully included in M−j . Green edges identify the contour of the base face of the primitive. 4.4a involves a surface interface and 4.4b illustrates a volume one. Let us notice that the interface between Pi and M−(j+1) in 4.4b contains also a surface interface located at the bottom of the primitive that is not highlighted. However, as we will see in Section 4.5, all possible variants of IG must be evaluated to process the acceptable ones. In a first step, Pi can be translated directly into an algorithm to identify them (procedure f ind visible extrusion of algorithm 1). 
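As a complement to the description above, the identification step itself can be sketched as follows; every helper used (planar_faces, edges_adjacent_to, is_straight, is_orthogonal, segment_length, base_face_normal, extrude) is a hypothetical stand-in for a B-Rep query, so this is a schematic reading of find_visible_extrusions rather than the actual implementation.

# Schematic sketch of the visibility-based identification of extrusion primitives.
def find_visible_extrusions(model):
    candidates = []
    for base_face in planar_faces(model):                 # candidate base face Fb1
        for edge in edges_adjacent_to(model, base_face):  # second visibility hypothesis
            if is_straight(edge) and is_orthogonal(edge, base_face):
                length = segment_length(edge)             # minimal extrusion distance
                # Rebuild the whole reference primitive, including its hidden part.
                primitive = extrude(base_face, base_face_normal(base_face), length)
                candidates.append(primitive)
    return candidates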
The visibility of Pi does not refer to its neighboring faces in M−j . Next, they are subjected to validity conditions described in the following section. 4.4.3 Primitive removal operator to go back in time The purpose is now to describe the removal operator that produces a new model M−(j+1) anterior to M−j . This removal operator is defined as a binary operator with Pi and M−j as operands and M−(j+1) as result. In the context of a generative process, M−j relates to a step j and M−(j+1) to a step (j + 1). 103Chapter 4: Extraction of generative processes from B-Rep shapes Characterization of interfaces In order to be able to generate M−(j+1) once Pi is identified, it is necessary to reconstruct faces adjacent to Pi in M−j so that M−(j+1) defines a volume. To this end, the faces of M−j adjacent to Pi and IG must be characterized. Here, Pi is considered to be adjacent to other subsets of primitives through one edge at least. The removal operator depends on the type of IG. Due to manifold property of M, two main categories of interfaces have been identified: 1- IG is of surface type. In this category, the removal operator will have to create lateral faces and/or the extension of F b2 so that the extended face coincides with F b1. Indeed, this category needs to be subdivided into two sub categories: a- IG contains lateral faces of Pi only (see Figure 4.4a) or IG contains also an extension of F b2 and edges of this extension are concave edges in M−(j+1); b- IG may contains lateral faces of Pi but it contains an extension of F b2 and the edges of this extension are fictive base edges in M−j . These edges would be convex edges in M−(j+1), (see P1 in Figure 4.11); 2- IG contains at least one volume sub-domain. In addition, considering that F b1 at least is visible and Pi is also visible (see Section 4.4.2), the attachment contour may not be entirely available to form one or more edge loops (see Figure 4.4a). Also, IG can contain more than one connected component when Pi is resembling a handle connected to M−(j+1), which produces more than one edge loop to describe the attachment of Pi to M−(j+1) in IG. Validity Whatever the category of interface, once Pi is identified and its parameters are set (contour and extrusion distance), it is necessary to validate it prior to define its interface (step 2 of Figure 4.11). Let Pi designates the volume of the reference primitive, i.e., the entire extrusion Pi. To ensure that Pi is indeed a primitive of M−j , the necessary condition is formally expressed using regularized Boolean operators between these two volumes: (M−j ∪∗ Pi) −∗ M−j = φ. (4.1) This equation states that Pi intersects M−j only along the edge loops forming its attachment to M−(j+1), i.e., Pi does not cross the boundary of M−j at other location than its attachment. The regularized Boolean subtraction states that limit configurations producing common points, curve segments or surface areas between Pi and M−j at other locations than the attachment of Pi are acceptable. This condition strongly reduces the number of primitives over time. Figure 4.13 illustrates the list of 9 primitives identified from an object M−j . 8 primitives in 4.13a satisfy the validity criterion as they are included in M−j . 
The primitive in 4.13b is not fully included in M−j and is 104Chapter 4: Extraction of generative processes from B-Rep shapes IG ( type 1a surface ) (a) (b) (c) IG ( type 1b surface ) IG ( type 2 volume ) Pi Pi Pi Fictive edge e1 e2 Figure 4.14: Illustration of the removal of Pi with three different interface types: (a) type 1a, (b) type 1b, (c) type 2. removed from the set. Another example in Figure 4.11 at step 2 shows that primitives P2 and P3 can be discarded. Removal of Pi The next step is the generation of M−(j+1) once Pi has been identified and removed from M−j . Depending of the type of IG, some faces of Pi may be added to ensure that M−(j+1) is a volume (see Figure 4.11 steps 3 and 4). For each category of interface between Pi and M−j , the removal operation is described as follow: • Type 1a: If IG is of type 1a, then the faces adjacent to the contour edges of F b1 are orthogonal to F b1. These faces are either planar or cylindrical. IG contains the faces extending these faces, Fa1 , to form the lateral faces of Pi that were ‘hidden in M−j ’. Edges of the attachment of Pi belonging to lateral faces of Pi can be lateral edges (either real or fictive ones) or arbitrary ones. Lateral edges bound faces in Fa1 , arbitrary edges bound the extension of the partly visible lateral faces of Pi, they belong to: Fa2 . Then, IG may contain the extension of F b2 called Fa3 such that: F b2 ∪ Fa3 = F b1. Then: ∂M−(j+1) = (∂M−j − ∂Pi) ∪ (Fa1 ∪ Fa2 ∪ Fa3 ), (4.2) where ∂M−j is the set of connected faces bounding M−j , ∂Pi is the set of connected faces bounding the visible part of Pi. ∂M−(j+1) defines a closed, orientable surface, without self intersection. M−(j+1) is therefore a volume. Figure 4.14 a and Figure 4.15 illustrate this process for interface of type 1a; • Type 1b: If IG is of type 1b, IG contains a set of faces extending lateral faces of Pi: Fa1 . To reduce the description of the various configurations, let us focus on the key aspect related to the extension of F b2 contained in IG. If this extension 105Chapter 4: Extraction of generative processes from B-Rep shapes Fb1 Partly visible base face Fb2 Partly visible lateral face of Pi Visible lateral faces of Pi Fa1 Fa3 Fa2 Lateral edge ¶M_ j (¶M_ j - ¶Pi) ¶M_ (j+1) Figure 4.15: Illustration of the removal of Pi for interface of surface type 1a and generation of ∂M−(j+1) with the extension of lateral and base faces. can be defined like Fa3 above, it has to be observed that fictive edges of this extension in M−j are replaced by convex edges in M−(j+1), i.e., edges of the same type (convex) as their corresponding edges in F b1 (see Figure 4.11 step 3 left image). Without going into details, these fictive edges can be removed to simplify the contour of Pi since they bring unnecessary complexity to Pi and does not affect the complexity of M−(j+1). In addition to simplify progressively the object’s shape, reducing the complexity of primitives’ contours is a way to obtain primitives having a form as simple as possible. The corresponding effect is illustrated on Figure 4.11 steps 3 and 4 and on Figure 4.14b. This contour simplification can influence the contents of the sets Fa1 and Fa3 above but it has no impact on the integrity of the volume M−(j+1) obtained; • Type 2: If IG belongs to category 2, it contains at least one volume sub-domain. Here again the diversity of configurations can be rather large and it is not intended to give a detailed description of this category. 
A first condition to generate a volume interface relates to surfaces adjacent to Pi. If S is the extension of such a surface and S ∩∗ Pi ≠ φ, S may contribute to the generation of a volume sub-domain. Then, each of these surfaces has to be processed. To this end, all the edges attaching Pi in M−(j+1) and bounding the same surface in M−(j+1) are grouped together since they form a subset of the contour of faces possibly contributing to a volume sub-domain. These groups are named Ea. An example of such edge grouping is given in Figure 4.14b where e1 and e2 are grouped because of their adjacency between Pi and the same cylindrical surface. Ea, together with other sets of edges, is used to identify loops in S that define a volume sub-domain of IG; this sub-domain must satisfy validity conditions not described here for the sake of conciseness. Figure 4.16 illustrates the identification of a volume interface: S divides Pi into two volume sub-domains and generates a volume interface.
Figure 4.16: Illustration of the removal of Pi containing a volume interface of type 2.
There may be several valid volume sub-domains defining alternative sets of faces to replace the visible part of Pi, ∂Pi, in ∂M−j by sets of faces that promote either the extension of surfaces adjacent to Pi or the imprint of Pi in M−(j+1) with the use of faces belonging to the hidden part of Pi in M−j. All the variants are processed to evaluate their possible contribution to the generative process graph. Even if, in a general setting, there may be several variants of IG defining M−(j+1), these variants always produce a realizable volume, which differs from the half-space decomposition approaches studied in [SV93, BC04], where complements of the half-spaces derived from the initial boundary were needed to produce a realizable volume.
4.5 Extracting the generative process graph
Having defined the primitive removal operator, the purpose is now to incorporate constraints on variants of IG so that a meaningful set of models M−j, j > 0, can be generated to produce a generative process graph.
4.5.1 Filtering out the generative processes
As mentioned earlier, the principle of the proposed approach is to 'go back in time' from model M to single primitives forming the roots of possible construction trees. The main process to select primitives to be removed from M−j is based on a simplification criterion.
Primitive selection based on a shape simplicity concept
Figure 4.17: Illustration of the simplicity concept used to filter out generative processes. The number of maximal faces is reduced much more when removing P1 than when removing P2 or P3.
Any acceptable primitive removal at step j of the graph generation must produce a transformation of M−j into k objects M−(j+1)k using IGk, one of the variants of IG, such that M−(j+1)k is simpler than M−j. This simplicity concept is a necessary condition for the graph generation to converge toward a set of construction trees having a single primitive as root.
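To make this selection concrete, a hedged sketch of the simplicity-based filtering is given below; it only counts maximal faces before and after a candidate removal and keeps the variants that do not increase this count, i.e., the quantity δ formalized in Eqs. (4.3) and (4.4) just below. count_maximal_faces and remove_primitive are hypothetical helpers, not the thesis operators.

# Hedged sketch of the simplicity criterion used to filter candidate removals.
def select_simplifying_removals(model, candidate_removals):
    n_before = count_maximal_faces(model)                     # n_j for M_-j
    kept = []
    for removal in candidate_removals:
        simplified = remove_primitive(model, removal)         # variant M_-(j+1)k
        delta = n_before - count_maximal_faces(simplified)    # Eq. (4.3)
        if delta >= 0:                                        # Eq. (4.4)
            kept.append((delta, removal, simplified))
    # Prefer the removals that simplify the shape the most (P1 in Figure 4.17).
    kept.sort(key=lambda item: item[0], reverse=True)
    return kept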
Consequently, the simplicity concept applied to the transition between M−j and M−(j+1)k is sufficient to ensure the convergence of the graph generation process. The shape simplification occurring between M−j and M−(j+1)k can be defined as follows. First of all, it has to be considered that ∂M−j and ∂M−(j+1)k contain maximal faces and edges. In fact, after Pi is removed and replaced by IGk to produce M−(j+1)k, its boundary decomposition is re-evaluated to contain maximal faces and edges only. Then, let nj be the number of (maximal) faces in M−j and n(j+1)k be the same quantity for M−(j+1)k. The quantity δjk:
δjk = nj − n(j+1)k    (4.3)
characterizes the shape simplification under the variant IGk if:
δjk ≥ 0.    (4.4)
This condition is justified because it enforces a 'diminishing number of maximal faces over time', the number of maximal faces being a quantity intrinsic to each shape. Figure 4.17 illustrates the simplicity criterion between three primitives P1, P2, and P3 to be removed from a solid M−j. M−j, the initial solid, contains nj = 8 (maximal) faces. When removing P1 from M−j, the resulting solid M−(j+1)1 contains n(j+1)1 = 4 (maximal) faces. Identically, the resulting solids from P2 and P3 contain respectively n(j+1)2 = 7 and n(j+1)3 = 6 (maximal) faces. As a result, the primitive P1 is selected because the quantity δj1 = 4 is greater than δj2 and δj3. By removing P1, the resulting object is simpler than with P2 or P3.
4.5.2 Generative process graph algorithm
Having defined the condition to evolve backward in the generative process graph, the graph generation is summarized with Algorithm 1.
Algorithm 1 Extract generative process graph
1: procedure Extract_graph ▷ The main procedure to extract generative processes of a solid M
2:   input M
3:   node_list ← root; current_node ← root
4:   arc_list ← nil; current_arc ← nil; node_list(0) = M
5:   while size(node_list) > 0 do ▷ Stop when all solids M−j reach a terminal primitive root
6:     current_node = last_element_of_list(node_list)
7:     M−j = get_solid(current_node)
8:     config_list = Process_variant(M−j)
9:     compare_config(get_all_config(graph), config_list) ▷ Compare new variants with the existing graph nodes
10:    for each config in config_list do
11:      M−(j+1) = remove_primitives(M−j, config) ▷ Remove the identified primitives from M−j
12:      node = generate_node(M−(j+1), config)
13:      add_node(graph, node)
14:      arc = generate_arc(node, current_node)
15:      add_arc(graph, arc)
16:      append(node_list, node)
17:    end for
18:    remove_element_from_list(node_list, current_node)
19:  end while
20: end procedure
21: procedure config_list = Process_variant(M−j) ▷ Process each variant M−j to go 'backward in time'
22:   initialize_primitive_list(prim_list)
23:   ext_list = find_extrusion(M−j)
24:   for each Pi in ext_list do
25:     Pi = simplify_prim_contour(Pi, M−j)
26:     interf_list = generate_geom_interfaces(Pi, M−j)
27:     interf_list = discard_complex(interf_list, Pi, M−j)
28:     if size(interf_list) = 0 then
29:       remove_from_list(Pi, ext_list)
30:     end if
31:     append(prim_list, interf_list(i))
32:   end for
33:   sort_primitive(prim_list)
34:   config_list = generate_independent_ext(prim_list, M−j)
35: end procedure
36: procedure ext_list = find_extrusion(M−j) ▷ Find sets of primitives to be removed from M−j
37:   ext_list = find_visible_extrusions(M−j)
38:   ext_list = remove_ext_outside_model(M−j, ext_list) ▷ Reject primitives not totally included in M−j
39:   ext_list = remove_ext_included_ext(ext_list) ▷ Process only maximal primitives
40: end procedure
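For readers who prefer code to pseudocode, the top-level loop of Algorithm 1 can be rendered in Python as follows. This is a simplified, hedged transcription: process_variant, remove_primitives and find_equivalent_node are stand-ins for the procedures of Algorithm 1 and for the duplicate-variant comparison, and the dictionary-based graph is only an illustration of the data involved.

# Simplified Python rendering of the Extract_graph loop of Algorithm 1.
def extract_graph(model):
    graph = {"nodes": [model], "arcs": []}
    to_process = [model]                        # solids M_-j still to be simplified
    while to_process:
        current = to_process.pop()
        for config in process_variant(current):         # sets of independent primitives
            simpler = remove_primitives(current, config)
            node = find_equivalent_node(graph, simpler)  # compare with existing variants
            if node is None:                             # new, simpler variant
                graph["nodes"].append(simpler)
                to_process.append(simpler)
                node = simpler
            # Merging equivalent variants is what can create cycles in the graph.
            graph["arcs"].append((node, current, config))
    return graph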
The main procedure Extract_graph of Algorithm 1 processes node_list, which contains the current variants of the model at the current step 'backward in time', using the procedure Process_variant, and compares the new variants to the existing graph nodes using compare_config. If variants are identical, graph nodes are merged, which creates cycles. Then, Extract_graph adds a tree structure to a given variant corresponding to the new, simpler variants derived from M−j. The graph is completed when there is no more variant to process, i.e., node_list is empty. Here, the purpose is to remove (using remove_primitives) the largest possible number of primitives Pi whose interfaces IGk do not overlap each other, i.e., ∀(i, j, k, l), i ≠ j, IGk ∈ Pi, IGl ∈ Pj, IGl ∩ IGk = φ; otherwise δjk would not be meaningful. Selecting the largest possible number of Pi and assigning them to a graph node is mandatory to produce a compact graph. Each such node expresses the fact that all its Pi could be removed, one by one, in an arbitrary order, which avoids describing trivial ordering changes. The primitive removal operator, described in Section 4.4.3, not only generates simpler solid shapes but also simplifies the primitives' contours (using simplify_prim_contour). Both simplification effects considerably reduce the complexity of the extracted generative processes compared to the initial construction tree of the CAD component.
Figure 4.18: Selection of primitives: (a) Maximal primitive criterion not valid for P2 and P3 because they are included in P1, (b) two dependent primitives with common edges in red.
To process each variant M−j of M, Process_variant starts with the identification of valid visible extrusion primitives in M−j using find_extrusion (see Sections 4.4.2 and 4.4.3, respectively). However, to produce maximal primitives (see Hypothesis 1), all valid primitives which can be included into others (because their contour or their extrusion distance is smaller) are removed (remove_ext_included_ext). Figure 4.18a shows two primitives P2 and P3 included in a maximal primitive P1. Once valid maximal primitives have been identified, processing the current variant M−j carries on with contour simplification (simplify_prim_contour), if it does not impact the shape complexity of M−(j+1) (see Section 4.4.3). Then, all the valid geometric interfaces IGk of each primitive are generated with generate_geom_interfaces (see Section 4.4.3), and interfaces IGk increasing the shape complexity are discarded with discard_complex to ensure convergence (see Section 4.5.1). Sets of independent primitives are ordered to ease the user's navigation in the graph. As illustrated in Figure 4.18, two primitives are independent if there is no geometric intersection between them.
4.6 Results of generative process graph extractions
The previous process described in Section 4.5 has been applied to a set of components whose shapes are compatible with extrusion processes to stay consistent with
Orange sub-domains highlight the set of visible primitives removed at each step of the graph generation. Construction graph reduces to a tree for each of these components: (a) T and B indicate Top and Bottom views to locate easily the primitives removed. Other components use a single view. 111Chapter 4: Extraction of generative processes from B-Rep shapes algorithm 1 though they are industrial components. The results have been obtained automatically using algorithm 1 implemented using Python and bindings with Open Cascade (OCC) library [CAS14]. The complexity of Algorithm 1 is O(n2). Regarding time, most consuming operations refer to the procedure f ind extrusion which uses boolean operations to verify the validity of each primitive. Therefore, the practical performance of the algorithm is dependent on the robustness and complexity of the boolean operators. Statistics given are the amount of calls to a generic Boolean type operator available in the OCC [CAS14] library, the total number of visible primitives (f ind visible extrusions), nv, and the final number of Pi in the graph, np. Results on industrial shapes Figure 4.19 shows the generative processes extracted from four different and rather simple components. They are characterized by triples (nB; nv; np), (2183; 220; 8), (9353; 240; 31), (8246; 225; 15), (1544; 132; 6), for (a), (b), (c) and (d), respectively. The graph structure reduces to a tree one for each of them. It shows that merging all extrusions in parallel into a single node can be achieved and results into a compact representation. These results also show the need for a constraint that we can formalize as follows: configurations produced by generate independent ext must be such that each variant M−(j+1)k generated from M−j must contain a unique connected component as it is with M. However, this has not been implemented yet. This continuity constraint expresses the fact that M is a continuous medium and its design process follows this concept too. Figure 4.21 illustrates this constraint on the construction graph of a simple solid. The object M−11 is composed of 5 solids which represent a non continuum domain. Consequently, any of its transformation stages must be so to ensure that any simplified model, i.e., any graph node, can stand as basis for an idealization process. Then, it is up to the idealizations and their hypotheses to remove such a constraint, e.g., when replacing a primitive by kinematic boundary conditions to express a rigid body behavior attached to one or more primitives in the graph. Figure 4.20 shows the graph extracted from the component analyzed in Figure 4.1. It is characterized by (111789; 1440; 62). Two variants appear at step 4 and lead to the same intermediate shape at step 8. It effectively produces a graph structure. It can be observed that the construction histories are easier to understand for a user than the one effectively used to model the object (see Figure 4.1). Clearly, the extrusion primitives better meet the requirements of an idealization process and they are also better suited to dimension modification processes as mentioned in Section 4.1. The current implementation of Algorithm 1 uses high-level operators, e.g., boolean operations, rather than dedicated ones. This implementation limits the time reduction which could be achieved compared to the interactive transformations. Issues also lies in the robustness of CAD boolean operators which use quite complex modeling techniques. 
In a future work, instead of boolean operators, specific operators can be developed for an efficient implementation. 112Chapter 4: Extraction of generative processes from B-Rep shapes M M-1 M-2 M-3 M-3 M-8 M-42 M-52 M-62 M-72 M-7 1 M-6 1 M-5 1 M-4 1 L2 L1 Figure 4.20: Construction graph GD of a component. Orange sub-domains indicate the removed primitives Pi at each node of GD. Labels M−jk indicate the step number j when ‘going back over time’ and the existence of variants k, if any. Arrows described the successive steps of D. Arcs of GD are obtained by reversing these arrows to produce construction trees. Steps M−61 and M−62 differ because of distinct lengths L1 and L2. 113Chapter 4: Extraction of generative processes from B-Rep shapes M M-1 M 2 -11 Non continuous medium ( 5 solids) Continuous medium ( 1 solid) Figure 4.21: Illustration of the continuity constraint with two construction processes of an object M. The generated object M−11 is composed of 5 independent solids which represent a non continuum domain. Object M−12 contains one solid which represents a continuum domain. Equivalence between a graph representation and a set of construction trees Also, GD is a compact representation of a set of construction processes. Figure 4.22 illustrates the equivalence between a set of construction trees and the graph representation GD. There, the set of primitives removed from M−j to obtain M−(j+1) is characterized by the edge α−j,−(j+1) of GD that connects these two models. The cardinality of this set of primitives is nj . If these nj primitives are related to different sketching planes, they must be attached to nj different steps of a construction tree in a CAD software. Without any complementary criterion, the ordering of these nj primitives can be achieved in nj ! different ways involving nj additional nodes and edges in GD to represent the corresponding construction tree. Here, it is compacted into a single graph edge α−j,−(j+1) in GD between M−j and M−(j+1). Furthermore, the description of this set of tree structures requires the expansion of α−j,−(j+1) into the nj ! construction tree structures ending up to M−j . This modification of GD generates new cycles in GD between M−j and M−(j+1). Indeed, the graph structure of GD is a much more compact structure than the construction tree of CAD software. All the previous results and observations show that GD is a promising basis for getting a better insight about a shape structure and evaluating its adequacy for idealizations. 4.7 Extension of the component segmentation to assembly structure segmentation This section explains how the generative processes used for single B-Rep component can be adapted to assembly structures of several B-Rep components. 114Chapter 4: Extraction of generative processes from B-Rep shapes M-j M-(j+1) Contruction Trees Graph GD Compact representation of nj primitives in one edge α-j,-(j+1) 1 - 4 5 7 6 8 9 9 7 - 8 1 - 6 7 1 - 6 7 9 8 ... !nj ordering possibilities corresponding the nj primitives Figure 4.22: A set of CAD construction trees forming a graph derived from two consecutive construction graph nodes. 115Chapter 4: Extraction of generative processes from B-Rep shapes P7 P3 P17 P1 P2 P4 P6 P8 P9 P10 P11 P12 P13 P14 P15 P16 P18 P5 Interface Graph between primitives C1 C2 C3 C4 Interface Graph between components (a) Component (b) Assembly Figure 4.23: Illustration of the compatibility between the component segmentation (a) and assembly structure segmentation (b). 
Equivalence between generative processes of components and assembly structures As explained in 1.3.1, a CAD assembly structure contains a set of volume B-Rep components located in a global reference frame. The method of Jourdes et al. [JBH∗14] enriches the assembly with geometric interfaces between components. Shahwan et al. [SLF∗12, SLF∗13] further enrich the results of Jourdes et al. with functional designation of components. As a result, the final assembly model is composed of a set of 3D solid components connected to each other through functional interfaces. An equivalence can be made between this enriched assembly structure (see Figure 4.23b) and generative processes of components. Indeed, a solid decomposition of each component can be derived from its generative processes expressed with GD. It provides intrinsic structures of components made of 3D solid primitives linked by geometric interfaces (see Figure 4.23a). These structures are compatible with the assembly structure also described using 3D solids and interfaces. The decomposition of each component, GD, can be integrated into an assembly graph structure. Figure 4.24a illustrates an assembly of two components C1 and C2 connected by one assembly interface I1,2. In 4.24b, each component is subdivided into two primitives (P1,1, P1,2) and (P2,1, P2,2), respectively, linked by geometric interfaces I11,12 and I21,22 , respectively. Now, the assembly structure can be represented by a graph GA where nodes represent the assembly components and edges contains functional assembly interfaces. Each solid decomposition of a component Ci also constitutes a graph structure GDi which can be nested into the nodes of GA, see 4.24c, in a first place. 116Chapter 4: Extraction of generative processes from B-Rep shapes Graph of the final enriched assembly model P1.1 P1.2 C1 C2 I1/2 I1/2 P1.1 P1.2 P2.2 P2.1 I1.1/1.2 P2.1 P2.2 I1/2 I2.1/2.2 I1.1/1.2 Assembly structure Components segmentation (a) (b) (c) Figure 4.24: Insertion of the interface graphs between primitives obtained from component segmentations into the graph of assembly interfaces between components GA: (a) The assembly structure with its components and assembly interfaces between these components, (b) the components segmented into primitives and interfaces between these primitives forming GDi , (c) the graph of the final enriched assembly model. Advantages for the shape analysis of components and assemblies The compatibility of the component segmentation with the assembly graph structure is a great benefit for the analysis algorithms dedicated to the determination of the simulation modeling hypotheses. Considering sub-domains and interfaces as input data to a general framework enabling the description of standalone components as well as assemblies, the analysis algorithms can be applied at the level of a standalone component as well as at the assembly level. This property extends the capabilities of the proposed FEA pre-processing methods and tools from the level of standalone components to an assembly level and contributes to the generation of FE analyses of assembly structures. However, it has to be pointed out that the nesting mechanism of GA and GDi has been briefly sketched and a detailed study is required to process configurations in which component interfaces are not exactly nested into component primitive faces, e.g., interface I1,2 that covers faces of P2,1 and P2,2. 
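To fix ideas, the nesting of the component segmentation graphs GDi into the assembly graph GA illustrated in Figure 4.24 can be prototyped in a few lines of Python; the thesis prototype relies on Python with Open CASCADE bindings, but the sketch below only uses networkx, and all component, primitive and interface names are illustrative rather than the actual thesis data structures.

import networkx as nx

# Graph of assembly interfaces between components (GA): one node per component,
# one edge per functional assembly interface (Figure 4.24a).
GA = nx.Graph()

def component_graph(primitives, interfaces):
    """Build the primitive/interface graph GDi of one component (Figure 4.24b)."""
    GDi = nx.Graph()
    GDi.add_nodes_from(primitives)
    GDi.add_edges_from(interfaces)
    return GDi

# Each component node carries its own segmentation graph GDi.
GA.add_node("C1", segmentation=component_graph(["P1.1", "P1.2"], [("P1.1", "P1.2")]))
GA.add_node("C2", segmentation=component_graph(["P2.1", "P2.2"], [("P2.1", "P2.2")]))

# The assembly interface I1/2 connects the two components; the primitive faces it
# actually covers are stored on the edge (Figure 4.24c).
GA.add_edge("C1", "C2", interface="I1/2",
            covers={"C1": ["P1.1"], "C2": ["P2.1", "P2.2"]})

# Analysis algorithms can then be applied indifferently at the component level
# (GA.nodes["C1"]["segmentation"]) or at the assembly level (GA itself).
print(GA.nodes["C1"]["segmentation"].edges)
print(GA.edges["C1", "C2"]["interface"])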
Additionally, the interfaces used for illustration are all of surface type, whereas interfaces between primitives can be of surface or volume type and geometric interfaces between components can be of contact or interference type. If, in both cases, this leads to common surfaces or volumes, the detailed study of these configurations is left to future work to obtain a thorough validation.
4.8 Conclusion This chapter has described a new approach to decompose a B-Rep shape into volume sub-domains corresponding to primitive shapes, in order to obtain a description that is intrinsic to this B-Rep shape while standing for a set of modeling actions that will be used to identify idealizable sub-domains. Construction trees and shape generation processes are common approaches to model mechanical components. Here, it has been shown that construction trees can be extracted from the B-Rep model of a component. Starting with a B-Rep object free of blends, the proposed approach processes it by iteratively identifying and removing a set of volume extrusion primitives from the current shape. The objective of this phase is to ‘go back in time’ until reaching the root primitives of the generative processes. As a result, a set of non-trivial generative processes (construction trees) is produced from the primitives obtained at the end of this first phase. It has been shown that these construction trees are structured through a graph to represent the non-trivial collection of generative processes that produce the input B-Rep model. The graph contains non-trivial construction trees in the sense that neither the variants of extrusion directions producing the same primitive nor the combinatorial variants describing the binary construction trees of CAD software are encoded: material addition operations that can be conducted in parallel are grouped into a single graph node, which avoids describing the combinatorial orderings that arise when primitives are added sequentially as in CAD software. Thus, each node in the construction graph can be associated with simple algorithms to generate the trivial construction variants of the input object. The proposed method includes criteria which generate primitives with simple shapes and which ensure that the shape of the intermediate objects is simplified after each primitive removal. These properties guarantee that the algorithm converges. Finally, a graph of generative processes of a B-Rep component is a promising basis to gain a better insight into a shape structure and to evaluate its adequacy for idealizations. It has also been illustrated that this process can be extended to the assembly context by nesting the primitive-interface structure within the component-interface one. The next Chapter 5 describes how a construction graph can be efficiently used in an idealization process.
Chapter 5 Performing idealizations from construction graphs Benefiting from the enrichment of a component shape with its construction graph, this chapter details the proposed morphological approach and the idealization process used to generate idealized representations of a component shape's primitives. Based on this automated decomposition, each primitive is analyzed in order to decide whether it can be idealized or not. Subsequently, geometric interfaces between primitives are taken into account to determine more precisely the idealizable sub-domains.
These interfaces are further used to process the connections between the idealized sub-domains generated from these primitives to finally produce the idealized model of the initial object. Also it is described how the idealization process can be extended to assembly models. 5.1 Introduction According to Section 1.5.4, shape transformations taking place during an assembly simulation preparation process interact with simulation objectives, hypotheses, and shape transformations applied to standalone components and to their assembly interfaces. Section 2.3 underlines the value of a shape analysis prior to the application of these transformations to characterize their relationship with geometry adaption and FEA modeling hypotheses, especially in the scope of idealization processes which need to identify the candidate regions for idealization. As explained in Chapter 2, prior research work has concentrated on identifying idealizable areas rather than producing simple connections between sub-domains [LNP07b, SSM∗10, MAR12, RAF11]. Recent approaches [CSKL04, Woo14] subdivide the input CAD model into simpler sub-domains. However, these segmentation algorithms do not aim at verifying the mechanical hypotheses of idealization processes and the identified features found do not necessarily produce appropriate solids for dimensional reduction operators. The idealization process proposed in this chapter benefits from the shape 119Chapter 5: Performing idealizations from construction graphs IP2/P3 C1 IP2/P3 Graph of volume primitives of C1 P1 P2 P3 IP1/P3 P3 P1 P2 IP1/P3 Component C1 Medial Surfaces e1 e2 e3 P1 P2 P3 Idealizable subdomains Non idealizable subdomains e1 e2 e3 Thickness Morphological analysis of component Full idealized CAD model Construction graph Figure 5.1: From a construction graph of a B-Rep shape to a full idealized model for FEA. structure produced by construction graphs generated from a component as a result of the process described in Chapter 4 and containing volumes extrusion primitives which can be already suited to idealization processes. This construction graph directly offers the engineer different segmentation alternatives to test various idealization configurations. Starting from a construction graph of a B-Rep model, this chapter describes a morphological analysis approach to formalize the modeling hypotheses for the idealization of a CAD component. This formalization leads to the generation of a new geometric structure of a component that is now dedicated to the idealization process. Figure 5.1 illustrates the overall process which is then described in the following sections. Section 5.2 explains the advantages of applying a morphological analysis on a component shape and states the main categories of morphology that needs to be located. Section 5.3 describes the general algorithm proposed to analyze the morphology of a B-Rep shape from its generative processes containing extrusion primitives. The algorithm evaluates each primitive morphology and process interfaces between these primitives to extend the morphological analysis to the whole object. Section 5.4 studies the influence of external boundary conditions and of assembly interfaces on the new component structure. There, the objective is to determine the adequacy of the proposed approach with an assembly structure. Section 5.5 illustrates the process to derive idealized models from a set of extrusion primitives and geometric interfaces. 
5.2 The morphological analysis: a filtering approach to idealization processes A common industrial principle of the FEA of mechanical structures is to process the analysis step-by-step using a top-down approach. To simulate large assembly struc- 120Chapter 5: Performing idealizations from construction graphs tures, i.e., aeronautical assemblies, a first complete idealized mesh model is generated to evaluate the global behavior of the structure. Then, new local models are set up to refine the global analysis in critical areas. At each step of this methodology, the main purpose of the pre-processing task is to generate, as quickly as possible, a well suited FE model. In the industry, guidelines have been formalized to help engineers defining these FE models and correctly applying the required shape transformations. Although these guidelines are available for various simulation objectives, there are still difficulties about: • The model accuracy needed to capture the mechanical phenomenon to be evaluated. An over-defined model would require too many resources and thus, would delay the results. In case of large assemblies, the rules set in the guidelines are very generic to make sure a global model can be simulated in practice. This way, the FEM is not really optimized. As an example, the mesh size is set to a constant value across all geometric regions of the components and is not refined in strategic areas. Section 1.4 points out current engineering practices to generate large FE assembly models; • The application of modeling rules. The engineer in charge of the FEM creation has difficulties when identifying the regions potentially influencing or not the mechanical behavior of the structure, prior to the FEA computation results. In addition, evaluating the geometric extent of the regions to be idealized as well as determining the cross influences of geometric areas on each other are difficult tasks for the engineer because they can be performed only mentally, which is even harder for 3D objects. Section 2.3 highlights the lack of idealization operators to delimit their conditions of application. In case of large assembly data preparation, automated tools are required (due to model size and time consuming tasks). These tools have to produce simulation models in line with the two previous challenges (model accuracy and rule-based modeling). In the following sections, we introduce approaches to deal with such challenges using morphological analysis tools. These tools support the engineers when comparing the manufactured shape of a component with the simplification and idealization hypotheses needed to meet some simulation hypotheses. 5.2.1 Morphological analysis objectives for idealization processes based on a construction graph As introduced in Chapter 3, a major objective of this thesis aims at improving the robustness of FE pre-processing using a shape analysis of each component before the application of shape transformation operators. For a simulation engineer, the purpose is to understand the shape, support the localization of transformations, and to build 121Chapter 5: Performing idealizations from construction graphs the different areas to transform in the initial CAD models. This scheme contributes to an a priori approach that illustrates the application of modeling hypotheses, especially the idealization hypotheses. In addition, this approach is purely morphological, i.e., it does not depend on discretization parameters like FE sizes. 
Morphological categories identified in solid models The first requirement of the idealization process is to identify which geometric regions of the object contain proportions representing a predefined mechanical behavior. In the scope of structural simulations, the predefined categories representing a specific mechanical behavior correspond to the major finite element families listed in Table 1.1. These categories can be listed as follows: • Beam: a geometric sub-domain of an object with two dimensions being signifi- cantly smaller than the third one. These two dimensions define the beam cross section; • Plate and shell: a geometric sub-domain of an object with one dimension being significantly smaller than the two others. This dimension defines the thickness of the plate or the shell, respectively; • 3D thick domain: a geometric sub-domain that does not benefit from any of the previous morphological properties and that should be modeled with a general 3D general continuum medium behavior. From a geometric perspective, the principle of the idealization process corresponds to a dimensional reduction of the manifold representing a sub-domain of the solid object, as defined in 1.2.1. A detailed description of the idealization hypotheses is available in Section 1.4.2. To construct fully idealized models derived from an object M, geometric connection operations between idealized sub-domains must be performed. As stated in Chapter 2, automated geometric transformations do not produce accurate results in connections areas. The main issue lies in the determination of the geometric boundaries where the connection operators must process the idealized sub-domains (see Figure 2.8). Currently, the engineer applies these operators manually to define the direction and the extent of the connection areas (see Figure 1.15 illustrating a manual process to connect medial surfaces). The proposed idealization process benefits from an enriched initial component model with a volume segmentation into sub-domains and into interfaces between them that contain new geometric information to make the connections operators robust. This segmentation gives the engineer, a visual understanding of the impact of simulation hypotheses on the component geometry. Additionally, the proposed analysis framework identifies the regions that can be regarded as details, independently of the resolution method. These regions represents areas having no significant mechanical influence with respect to the morphological category of their neighboring regions. 122Chapter 5: Performing idealizations from construction graphs In case of assembly structures, the categories presented above are still valid at the difference that the identified geometric domains are not anymore a sub-domain restricted to a single solid. Indeed, a group of components can be considered as a unique beam, plate, or shell element. In this case, if the interfaces between a group of connected sub-domains from different components have been defined as mechanically rigid connections by the engineer, this group of sub-domains can also be identified as a unique continuum, hence distinguishing components is not necessary anymore. The effects of assembly interfaces are detailed in Section 5.4. Adequacy of generative construction graphs with respect to the morphological analysis A shape decomposition is a frequent approach to analyze and structure objects for FEA requirements. 
Section 2.4 has highlighted the limits of current morphological analysis methods and underlined the need of more robust automatic techniques adapted to FE model generation. The proposed approach uses the generative construction graph GD to perform a morphological analysis of CAD components adapted to FEA requirements. Section 4.2 has addressed the advantages of construction graphs to structure the shape of a component. It proposes a compact shape decomposition of primitives containing a fair amount of information that is intrinsic to this shape. The geometric structure of GD with sub-domains and associated interfaces is close to the structure described in Section 5.2.1 with regions candidate to idealization. GD also offers various construction processes which enable the engineer to construct and study various simulation models that can be derived from the same component using different construction processes. Generating the idealization of an object M from a set of primitives, obtained from its construction graph GD, is more robust compared to the prior idealization methods for three reasons: • The information contained in GD is intrinsic to the definition of primitives. Each maximal primitive Pi ∈ GD and its associated interfaces determine both the 3D location of the idealized representation of Pi and its connections with its neighboring idealized primitives; • The effective use of connections between sub-domains to be idealized. A taxonomy of categories of connections can be defined. This classification determines the most suitable geometric operator to process each connection. Currently, the idealization process still relies on the engineer’s expertise to manage complex connections whereas a CAD or a CAE software is bound to much simpler connections; • Shape modification processes of components. When a component shape modification is performed, only the impacted primitives have to be revised in its 123Chapter 5: Performing idealizations from construction graphs · Global morphological analysis of each primitive Pi using extrusion distance · Morphological analysis of the extrusion contour of each primitive Pi and determination of the geometric sub-domains Dj(Pi)to be idealized in Pi Morphological analysis of the primitives Pi 01 step Extension of the morphological analysis of primitives Pi to the whole object 02 step · Categorization of the interfaces IG between primitives Pi · Segmentation of the primitives Pi in new partitions P’ based on the interfaces IG typology · Merge or make independent the new partitions P’ with the primitives Pi to obtain a segmentation which is most suited for idealization Idealization of the primitives Pi and processing connections · Generation of mid-surface and mid-lines of the primitives Pi to be idealized · Connections of the idealized models of the primitives Pi 03 step Figure 5.2: Global description of an idealization process. construction graph. Therefore, the idealization process can be locally updated and does not have to be restarted over from its shape decomposition. The next section details the structure of the proposed idealization process from the exploitation of the geometric information provided with the construction graph GD of a component to the generation of its idealized models. 5.2.2 Structure of the idealization process The idealization process of an object M is based on morphological analysis operations, idealization transformations and connections between idealized sub-domains. 
Its objective is to produce the idealized model, denoted by MI , of the initial object M. Figure 5.2 contains a comprehensive chart illustrating the various steps of the proposed idealization process. In a first step, the decomposition of M into primitives Pi, described in the graph GD, leads to a first morphological analysis of each primitive Pi. For each extrusion primitive, this morphological analysis determines whether Pi has a morphology of type plate, shell, or a morphology of type thick 3D solid (see Section 5.2 describing the morphology categories). This first step is described in Section 5.3.1. Then, a second morphological analysis is applied to each primitive Pi to determine whether or not it can be subdivided into sub-domains Dij , morphologically different from the one assigned to Pi. This analysis is performed using the 2D extrusion contour 124Chapter 5: Performing idealizations from construction graphs of the primitive Pi. The resulting decomposition of Pi generates new interfaces IG, integrated in GD. In a second phase, using a typology of connections between the different categories of idealizations, the interfaces IG between Dij are used to propagate and/or update the boundary of the primitives Pi. This step, described in Section 5.3.3, results in the decomposition of the object M into sub-domains with a morphology of type beam, plate/shell, or 3D thick solid. The third phase consists in generating the idealization of each primitive Pi and then, it connects the idealized domains of Pi using the typology and the location of the interfaces IG. During this operation, the morphology of each Pi, combined with the typology of each of its interfaces IG, is used to identify regions to be considered as details, independently from any FE size. The generation of an idealized model is described in Section 5.5. Overall, the different phases illustrated in Figure 5.2 fit into an automated process. The engineer can be involved in it to select some connection model between idealized sub-domains or some boundary adjustment category of idealized sub-domain depending on its types of connections. The next sections detail each step of the proposed idealization process. 5.3 Applying idealization hypotheses from a construction graph The purpose of this section is to illustrate how the construction graph GD of an object M obtained with the algorithm described at Section 4.5.2 can be used in shape idealization processes. In fact, idealization processes are high level operations that interact with the concept of detail because the idealization of sub-domains, i.e., Pi obtained from GD, triggers their dimensional reduction, which, in turn, influences the shape of areas around IGs, the geometric interfaces between these sub-domains. Here, the proposed approach is purely morphological, i.e., it does not depend on discretization parameters like FE sizes. It is divided into two steps. Firstly, each Pi of GD is evaluated with respect to an idealization criterion. Secondly, according to IGs between Pis, the ‘idealizability’ of each Pi is propagated in GD through the construction graph up to the shape of M. As a result, an engineer can evaluate effective idealizable areas. Also, it will be shown how variants of construction trees in GD can influence an idealization process. Because the idealization process of an object is strongly depending on the engineer’s know-how, it is the principle of the proposed approach to give the engineer access to the whole range of idealization variants. 
Finally, some shape details will appear subsequently to the idealization process when the engineer will define FE sizes to mesh the idealized representation of M. 125Chapter 5: Performing idealizations from construction graphs e Thickness Max Diameter d d (a) (b) e Figure 5.3: Determination of the idealization direction of extrusion primitives using a 2D MAT applied to their contour. (a) Configuration with an extrusion distance (i.e., thickness d = e) much smaller than the maximal diameter obtained with the 2D MAT on the extrusion contour, the idealization direction corresponds to the extrusion direction (b) Configuration with an extrusion distance much larger than the maximal diameter obtained with the 2D MAT on the extrusion contour, the idealization direction is included in the extrusion contour. 5.3.1 Evaluation of the morphology of primitives to support idealization Global morphological analysis of each primitive Pi In a first step, each primitive Pi extracted from GD is subjected to a morphological analysis to evaluate its adequacy for idealization transformation into a plate or a shell. Because the primitives are all extrusions and add material, analyzing their morphology can be performed with a MAT [MAR12, RAM∗06, SSM∗10]. A MAT is particularly suited to extrusion primitives having constant thickness since it can be applied in 2D. Furthermore, it can be used to decide whether or not subdomains of Pi can be assigned a plate or shell mechanical behavior. In the present case, the extrusion primitives obtained lead to two distinct configurations (see Figure 5.3). Figure 5.3a shows a configuration with a thin extrusion, i.e., the maximal diameter Φ obtained with the MAT from Pi’s contour is much larger than Pi’s thickness defined by the extrusion distance d. Then, the idealized representation of Pi would be a surface parallel to the base face having Pi’s contour. Figure 5.3b shows a configuration where the morphology of Pi leads to an idealization that would be based on the content of the MAT because d is much larger than Φ. To idealize a sub-domain in mechanics [TWKW59], a commonly accepted reference proportion used to decide whether a sub-domain is idealizable or not is a ratio of ten between the in-plane dimensions of the sub-domain and its thickness, i.e., xr = 10. Here, this can be formalized with the morphological analysis of Pi obtained from the MAT using: x = max((max Φ/d),(d/max Φ)). Consequently, the ratio x is applicable for all morphologies of extrusion primitives. 126Chapter 5: Performing idealizations from construction graphs d: extrusion distance d = 2 (b) (c) d < max Ø x = d > max Ø x d = max Ø max Ø Ratio x MAT 2D d = 35 0 3 xu xr 1.5 < <25 x 1.5 = 25 = 16.6 d = 1.5 (a) Max Diameter Ø = 25 Max Diameter Ø = 10 d = 25 Max Diameter Ø = 25 Primitive Morphology (d) 35 > 10 x 10 = 35 = 3.5 2 < 10 x 2 = 10 = 5 25 = 25 x 25 = 25 = 1 10 Max Diameter Ø = 10 d Table 5.1: Categorization of the morphology of a primitive using a 2D MAT applied to the contour of extrusion primitives. Violet indicates sub-domains that cannot be idealized as plates or shells (see component d), green ones can be idealized (see component a) and yellow ones can be subjected to user decision (see component b and c). 
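As a concrete sketch of this classification — not the thesis implementation — the ratio x and the colour code of Table 5.1 can be approximated in a few lines of Python. The extrusion contour is assumed to be supplied as a densely sampled closed polygon; its medial axis is approximated by the Voronoi vertices of the contour samples, which is also how the MA morphology analysis procedure of Algorithm 2 (Section 5.3.3) builds the 2D MAT. The threshold x_u is the user-defined value discussed just below, and all helper names are illustrative.

import numpy as np
from scipy.spatial import Voronoi
from matplotlib.path import Path

def max_inscribed_diameter(contour_pts):
    """Approximate the maximal MAT disk diameter of a closed 2D contour.

    contour_pts: (n, 2) array of points densely sampled along the contour.
    The Voronoi vertices of the samples approximate the medial axis; the
    largest distance from an interior Voronoi vertex to the contour samples
    is the maximal inscribed radius.
    """
    vor = Voronoi(contour_pts)
    inside = Path(contour_pts).contains_points(vor.vertices)
    radii = [np.min(np.linalg.norm(contour_pts - v, axis=1))
             for v in vor.vertices[inside]]
    return 2.0 * max(radii)

def morphology_ratio(contour_pts, extrusion_distance):
    """Ratio x = max(max_phi / d, d / max_phi) of an extrusion primitive."""
    max_phi = max_inscribed_diameter(contour_pts)
    d = extrusion_distance
    return max(max_phi / d, d / max_phi)

def classify(x, x_u=3.0, x_r=10.0):
    """Colour code of Table 5.1: violet (not idealizable), yellow (user decision), green."""
    if x < x_u:
        return "violet"
    return "yellow" if x < x_r else "green"

# Example: a 25 x 25 square contour extruded over d = 2.
t = np.linspace(0.0, 25.0, 60, endpoint=False)
square = np.vstack([
    np.column_stack([t, np.zeros_like(t)]),               # bottom edge
    np.column_stack([np.full_like(t, 25.0), t]),           # right edge
    np.column_stack([25.0 - t, np.full_like(t, 25.0)]),    # top edge
    np.column_stack([np.zeros_like(t), 25.0 - t]),         # left edge
])
x = morphology_ratio(square, extrusion_distance=2.0)
print(round(x, 1), classify(x))    # approximately 12.5 -> 'green'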
127Chapter 5: Performing idealizations from construction graphs Because idealization processes are heavily know-how dependent, using this reference ratio as unique threshold does not seem sufficient to help an engineer analyze sub-domains, at least because xr does take precisely into account the morphology of Pi’s contour. To let the engineer tune the morphological analysis and decide when Pi can/cannot be idealized a second, user-defined threshold, xu < xr, is introduced that lies in the interval ]0, xr[. Figure 5.3b illustrates a configuration where the morphological analysis does not produce a ratio x > xr though a user might idealize Pi as a plate. Let xu = 3 be this user-defined value, Table 5.1 shows the application of the 2D MAT to the contours of four extrusion primitives. This table indicates the three categories made available to the engineer to visualize the morphology of Pi. Primitives with a ratio of x > xr, e.g., primitive (a), are considered to be idealizable and are colored in green. Primitives with a ratio of xu < x < xr, e.g., primitives (b) and (c), are subjected to a user’s decision for idealization and are colored in yellow. Finally, primitives with a ratio x < xu, e.g., primitive(d), indicate sub-domains that cannot be idealized and are colored in violet. Figure 5.4 illustrates the evaluation of the morphology of the primitives of a component prior to its idealization. The component has been initially segmented into 15 extrusion primitives using the algorithm presented at Section 4.5.2. Then, the 2D MAT has been applied to the extrusion contour of each primitive to determine the maximal diameter Φ. Finally, this diameter is compared to the extrusion distance of the corresponding primitive to determine the ratio x. Three morphological evaluations are presented in Figure 5.4 that correspond to different values of the thresholds xu and xr, which are set to (a) (3, 10), (b) (6, 10), and (c) (2, 10), respectively. Figure 5.5 shows the result of the interactive analysis the user can perform from the graphs GD obtained with the components analyzed in Figures 5.5a, b, c, and d. It has to be mentioned that the morphological analysis is applied to GD rather than to a single construction tree structure so that the engineer can evaluate the influence of D with respect to the idealization processes. However, the result obtained on the component in Figure 4.20 shows that the variants in GD have no influence with respect to the morphological analysis criterion, in the present case. Consequently, Figure 5.5 displays the morphological analysis obtained from the variant M−j2 in Figure 4.20. Results on components in 5.5a, c also show the clear use of this criterion because some nonidealizable sub-domains (see indications in Figure 5.5 regarding violet sub-domains) are indeed well proportioned to be idealized with beams. Now, considering the morphological classification of sub-domains stated in Section 5.2.1, this first morphological analysis of Pi acts as a necessary condition for Pi to fall into a category of: • plate/shell but Pi can contain sub-domains Dij where Φ can get small enough to produce a beam-like shape embedded in Pi, see Figure 5.6a. 
In any case, Pi 128Chapter 5: Performing idealizations from construction graphs Ø 0 3 10 x Ø Ø Ø Max Diameter Ø Extrusion distance d xr xu v 0 6 x xr xu Ratio x 0 2 x xr xu 10 10 Ø Ø Ø Ø Ø Evaluation of sub domain for idealization Identification of Maximal diameter in 2D sketch using MAT Segmentation from Construction Graph d < max Ø x = d > max Ø x d = max Ø max Ø d (a) (b) (c) Figure 5.4: Example of the morphological evaluation of the extrusion primitives extracted during the construction process of a component. Violet indicates sub-domains that cannot be idealized as plates or shells, green ones can be idealized and yellow ones are up to the user’s decision. 129Chapter 5: Performing idealizations from construction graphs T T B B sub domains idealizable as beams (a) (c) T B 0 3 10 x xr xu T B (b) (d) Figure 5.5: Idealization analysis of components. T and B indicate Top and Bottom views of the component, respectively. The decomposition of a is shown in Figure 4.20 and decompositions of b, c and d are shown in Figure 4.19. Violet indicates sub-domains that cannot be idealized as plates or shells, green ones can be idealized and yellow ones can be subjected to user’s decisions. 130Chapter 5: Performing idealizations from construction graphs Beam local sub-domain to idealize as beam local sub-domain to consider as detail local sub-domain to idealize as beam Beam Primitive Pi Dij Dik d local Ø ≈ d << max Ø max Ø local Ø local Ø ≈ d << max Ø d d local Ø << d ≈ max Ø Detail (a) (b) (c) Figure 5.6: Illustration of primitives’ configurations containing embedded sub-domains Dik which can be idealized as beams or considered as details. (a) Primitive morphologically identified as a plate which contains a beam sub-domain, (b) primitive morphologically identified as a plate which contains a volume sub-domain to be considered as a detail, (c) primitive morphologically identified as a thick domain which contains a beam sub-domain cannot contain a sub-domain of type 3D thick domain because the dominant sub-domain Dij is morphologically a plate/shell. If there exists a sub-domain Dik, adjacent to Dij , such that x < xu, i.e., it is morphologically thick, Dik is not mechanically of type 3D thick domain because it is adjacent to Dij . Indeed, Dik can be regarded as a detail compared to Dij since the thickness of Dij will be part of the dimensional reduction process. Figure 5.6b shows an example of such configuration; • 3D thick domain because Pi contains at least one dominant sub-domain Dij of this category. However, it does not mean that Pi does not contain other subdomains Dik that can be morphologically of type plate/shell or beam. Figure 5.6b illustrates a beam embedded in a 3D thick domain. Indeed, all green sub-domains and yellow ones validated by the engineer can proceed with the next step of the morphological analysis. Similarly, violet sub-domains cannot be readily classified as non idealizable. Such configurations show that the classification described in Section 5.2.1 has to take into account the relative position of sub-domains Dij of Pi and they are clearly calling for complementary criteria that are part of the next morphological analysis where Pi needs to be decomposed into sub-domains Dij to refine its morphology using information from its MAT. 
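The interactive threshold tuning of Figure 5.4 is then straightforward to reproduce: reusing the morphology_ratio and classify helpers from the sketch above, the user threshold x_u can be varied while x_r stays at the reference value of 10. The primitives listed here are hypothetical place-holders, not the components of Figure 5.4.

import numpy as np

def rect_contour(width, height, n=60):
    """Densely sampled rectangular contour (counter-clockwise)."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    return np.vstack([
        np.column_stack([width * t, np.zeros(n)]),
        np.column_stack([np.full(n, width), height * t]),
        np.column_stack([width * (1 - t), np.full(n, height)]),
        np.column_stack([np.zeros(n), height * (1 - t)]),
    ])

# Hypothetical primitives: (name, extrusion contour, extrusion distance d).
primitives = [("P1", rect_contour(25.0, 25.0), 1.5),
              ("P2", rect_contour(25.0, 10.0), 2.0),
              ("P3", rect_contour(25.0, 25.0), 25.0)]

for x_u in (2.0, 3.0, 6.0):    # user threshold, x_r kept at the reference value 10
    labels = {name: classify(morphology_ratio(c, d), x_u=x_u)
              for name, c, d in primitives}
    print(f"x_u = {x_u}: {labels}")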
131Chapter 5: Performing idealizations from construction graphs Determination of geometric sub-domains Dij to be idealized in Pi Then, in a second step, another morphological analysis determines in each primitive Pi if some of its areas, i.e., sub-domains Dij , can be associated with beams and, therefore, admit further dimensional reduction. Indeed, the previous ratio x determines only one morphological characteristic of a sub-domain Dij , i.e., the dominant one, of Pi because the location of the MAT, where x is defined, is not necessarily reduced to a point. For example, Figure 5.7 illustrates a configuration where x holds along a medial edge of the MAT of the extrusion contour. Similarly to the detail removal using MAT conducted by Armstrong et al. [Arm94], a new ratio y is introduced to compare the length of the medial edge to the maximal disk diameter along this local medial edge. The parameter y is representative of a local elongation of Pi in its contour plane and distinguishes the morphology of type beam located inside a morphology of type plate or shell when the starting configuration is of type similar to Figure 5.3a. If Pi is similar to Figure 5.3b, then the dominant Dij is of type beam if x appears punctually or of type plate/shell if x appears along a medial edge of the MAT of the extrusion contour of Pi. Appendix C provides two Tables C.1 and C.2 with 18 morphological configurations associated with a MAT medial edge of a primitive Pi. The two tables differ according to whether the idealization direction of Pi corresponds to the extrusion direction, see Table C.1 (type similar to Figure 5.3a), or whether the idealization direction of Pi is included in the extrusion contour, see Table C.2 (type similar to Figure 5.3b). The reference ratio xr and user ratio xu are used to specify, in each table, the intervals of morphology differentiating beams, plates or shells and 3D thick domains. Therefore, nine configurations are presented in Table C.1 illustrating the elongation of the extrusion contour of Pi. Table C.1 allows both the elongation of the extrusion distance and the elongation of the extrusion contour, this produces also nine configurations. These tables illustrates 18 morphological possible configurations when the medial edge represents a straight line with a constant radius for the inscribed circles of the MAT. Other configurations can be found when the medial edge is a circle, or more generally, a curve or when the radius is changing along the medial edge. These configurations have not been studied in detail and are left for future work. L1 L2 max Ø x = L1 / max Ø y = L2 / max Ø xu = 10 (user threshold) L1 < xu . max Ø L2 > xu . max Ø Figure 5.7: Example of a beam morphology associated with a MAT medial edge of a primitive Pi. 132Chapter 5: Performing idealizations from construction graphs Tables C.1, C.2 of Appendix C represents a morphological taxonomy associated with one segment of the MAT of Pi. Because the extrusion contour of Pi consists in line segments and arcs of circles, the associated MAT has straight and curvilinear medial edges which can be categorized as follows: 1. Medial edges with one of their end point located on the extrusion contour of Pi, the other one being connected to another medial edge; 2. Medial edges with their two end points connected to other medial edges. In the special case of a segment having no end point, e.g., when the extrusion contour is a circular ring, its MAT reduces to a closed circle and falls into this category. 
Segments of category 1 are deleted and the morphological analysis focuses on the segments of category 2 which are noted Sij . On each of these edges, the ratio y includes a maximum located at an isolated point or it is constant along the entire edge. ymax represents this maximum and is assigned to the corresponding medial edge, Sij . The set of edges Sij is automatically classified using the taxonomy of Tables C.1, C.2 or some of them can be specified by the engineer wherever yu < y < yr. This is the interactive part left to the engineer to take into account his, resp. her, know-how. Pi is segmented based on the changes in the morphological classification of the edges Sij . This decomposition generates a set of sub-domains Dij of each primitive Pi. These sub-domains Dij are inserted with their respective morphological status and their geometric interfaces in the graph GD. Figure 5.8 summarizes the different phases of the morphological analysis of each extrusion primitive Pi extracted from the construction graph GD of on object M. Because each sub-domain Dij is part of only one primitive Pi, it can also be considered as a new primitive Pk. To reduce the complexity in the following process, the sub-domains Dij are regarded as independent primitives Pk. These results are already helpful for an engineer but it is up to him, or her, to evaluate the mechanical effect of IGs between primitives Pi. To support the engineer in processing the stiffening effects of IGs, the morphological analysis is extended by a second step described as follows. 5.3.2 Processing connections between ‘idealizable’ sub-domains Dij The morphological analysis of standalone primitives Pi is the first application of GD. Also, the decomposition obtained can be used to take into account the stiffening effect of interfaces IG between Pi or, more generally, between Dij , when Pi 1 are iteratively 1From now on Pi designates sub-domains that correspond to primitives Pi obtain from the segmentation of M or one subdivision domain Dkj of a primitive Pk decomposed after the morphological analysis described at Section 5.3.1. 133Chapter 5: Performing idealizations from construction graphs For each extrusion primitivePi MAT 2D on extrusion contour of Pi Input: set of primitives Pi from construction graph GD of an object M Output: set of sub-domains Dij of morphology type beams, plates or shells and 3D thick domains Determination of Pi global morphology (x =max((max Φ / d), (d / max Φ)) For each medial edge Sij of the MAT 2D of category 2 Determination of the morphology associated with Sij (y = LSij / max Φ ) If morphology (Sij) ≠ morphology (Pi) and Sij not a detail Segmentation of Pi in sub-domains Dij and insertion in GD Primitive Pi Figure 5.8: Synthesis of the process to evaluate the morphology of primitives Pi. merged together along their IG up to obtain the whole object M. As a result, new sub-domains will be derived from the primitives Pi and the morphological analysis will be available on M as a whole, which will be easier to understand for the engineer. To this end, a taxonomy of connections between extrusion sub-domains is mandatory. Taxonomy of connections between extrusion sub-domains to be idealized This taxonomy, in case of ”plate sub-domain connections”, is summarized in Figure 5.9a. 
It refers to parallel and orthogonal configurations for simplicity but these configurations can be extended to process a larger range of angles, i.e., if Figure 5.9 refers to interfaces IG of surface type, these configurations can be extended to interfaces IG of volume type when the sub-domains S1 and S2 are rotated w.r.t. each other. More specifically, it can be noticed that the configuration where IG is orthogonal to the medial surfaces of S1 and S2 both is lacking of robust solutions [Rez96, SSM∗10] and other connections can require deviation from medial surface location to improve the mesh quality. Figure 5.18c illustrates such configurations and further details are given in Section 5.5.2. Figure 5.9 describes all the valid configurations of IG between two sub-domains S1 and S2 when a thickness parameter can be attached to each Pi, which is presently the case with extrusion primitives. Figure 5.9a depicts the four valid configurations: named type (1), (2), (3), (4). These configurations can be structured into two groups: type (1) and type (4) form the 134Chapter 5: Performing idealizations from construction graphs (b) S2 S2 S1 S1 S2 S1 S2 S1 (c) Fb1S1 Fb1S2 Fb1S2 IG IG Orthogonal to S1 and Parallel to S2: Orthogonal: Parallel: Parallel: S1 // S2 Orthogonal: Medial Surface S1 vs Medial Surface S2 Interface IG vs Medial Surface S1 & S2 S1 S1 S1 S1 S2 S2 S2 S2 IG IG IG IG IG ^ S1 IG // S2 type (1) I'G I'G I'G I'G IG ^ S1 IG ^ S2 IG // S1 IG // S2 type (1) e1 e2 e1 e1 e1 e2 e2 e2 e1+e2 e1+e2 e1+e2 I'G I'G S'1 S'2 S'3 type (4) e2 e1 e2 e1 e2 I'G I'G S''21 S''22 S'21 S'22 type (2) (a) type (2) type (3) type (4) type (2) type (2) type (2) type (1) type (4) C2 C1 C1 C2 Figure 5.9: (a) Taxonomy of connections between extrusion sub-domains Pi. (b) Decomposition of configurations type(1) and type(4) into sub-domains Pi showing that the decomposition produced reduces to configurations type (2) only. (c) example configurations of types (1) and (4) where S1 and S2 have arbitrary angular positions that generate volume interfaces IG where base faces Fb1S1 and Fb1S2 are intersection free in configuration type (1) and Fb1S2 only is intersection free in configuration type (4). 135Chapter 5: Performing idealizations from construction graphs group C1, and (2) and type (3) form the group C2. Figure 5.9b illustrates the effect of the decomposition of configurations type (1) and type (4) that produces configurations (2) only. Reduced set of configurations using the taxonomy of connections Configuration type (1) of C1 is such that the thicknesses e1 and e2 of S1 and S2 respectively, are influenced by IG, i.e., their overlapping area acts as a thickness increase that stiffens each of them. This stiffening effect can be important to be incorporated into a FE model as a thickness variation to better fit the real behavior of the corresponding structure. Their overlapping area can be assigned to either S1 or S2 or form an independent sub-domain with a thickness (e1 + e2). If S1 and S2 are rotated w.r.t. each other and generate a volume IG, the overlapping area still exists but behaves with a varying thickness. Whatever the solution chosen to represent mechanically this area, the sub-domains S1 and S2 get modified and need to be decomposed. The extent of S2 is reduced to produce S� 2 now bounded by I� G. Similarly, the extent of S1 is reduced to S� 1 now bounded by another interface I� G. A new sub-domain S� 3 is created that contains IG and relates to the thickness (e1 + e2) (see Figure 5.9b). 
Indeed, with this new decomposition IG is no longer of interest and the new interfaces I� G between the sub-domains S� i produce configurations of type (2) only. Similarly, configuration (4) is such that S2 can be stiffened by S1 depending on the thickness of S1 and/or the 2D shape of IG (see examples in Figure 5.11). In this case, the stiffening effect on S2 can partition S2 into smaller sub-domains and its IG produces a configuration of type (2) with interfaces I� G when S2 is cut by S1. The corresponding decomposition is illustrated in Figure 5.9b and Figure 5.10. This time, IG is still contributing to the decomposition of S1 and S2 but S2 can be decomposed in several ways (S� 21 , S� 22 ) or (S�� 21 , S�� 22 ) producing interfaces I� G. Whatever, the decomposition selected to represent mechanically this area, the key point is that I� G located on the resulting decomposition are all of same type that corresponds to configuration (2). Configuration (1) reduces the areas of S1 and S2 of constant thicknesses e1 and e2, which can influence their ‘idealizability’. Configuration (4) reduces the area of S2 of thickness e2 but it is not reducing that of S1, which influences the ‘idealizability’ of S2 only. As a result, it can be observed that processing configurations in C1 produce new configurations that always belong to C2. Now, considering configurations in C2, none of them is producing stiffening effects similar to C1. Consequently, the set of configurations in Figure 5.9a is a closed set under the decomposition process producing the interfaces I� G. More precisely, there is no additional processing needed for C2 and processing all configurations in C1 produces configurations in C2, which outlines the algorithm for processing iteratively interfaces between Pi and shows that the algorithm always terminates. Figure 5.9a and b refers to interfaces IG of surface type. Indeed, GD can produce 136Chapter 5: Performing idealizations from construction graphs interfaces of volume type between Pi. This is equivalent to configurations where S1 and S2 departs from parallel or orthogonal settings as depicted in Figure 5.9. Such general configurations can fit into either set C1 or C2 as follows. In the 2D representations of Figure 5.9a, b, the outlines of S1 and S2 define the base faces Fb1 and Fb2 of each Pi. What distinguishes C1 from C2 is the fact that configurations (1) and (4) each, contains at least S2 such that one of its base face (Fb1S2 in Figure 5.9c) does not intersect S1 and this observation applies also for S1 in configuration (1) (Fb1S1 in Figure 5.9c). When configurations differ from orthogonal and parallel ones, a first subset of configurations can be classified into one of the four configurations using the distinction observed, i.e., if a base face of either S1 or S2 does not intersect a base face of its connected subdomain, this configuration belongs to C1 and if this property holds for sub-domains S1 and S2 both, the corresponding configuration is of type (1). Some other configurations of type (4) exist but are not detailed here since the purpose of the above analysis is to show how the reference configurations of Figure 5.9a can be extended. The completeness of configurations has not been entirely investigated yet. 
5.3.3 Extending morphological analyses of Pi to the whole object M Now, the purpose is to use the stiffening influence of some connections as analyzed in Section 5.3.2 to process all the IG between Pi, to be able to propagate and update the ‘idealizability’ of each Pi when merging Pis. This process ends up with a new subdivision of some Pi as described in the previous section and a decomposition of M into a new set of sub-domains Pi 2, each of them having an evaluation of its ‘idealizability’ so that the engineer can evaluate more easily the sub-domains he, or she, wants to effectively idealize. The corresponding algorithm can be synthesized as follows (see algorithm 2). The principle of this algorithm is to classify IG between two Pi such that if IG belongs to C1 (configurations 1 and 4 in algorithm 2), it must be processed to produce new interface(s) I� G and new sub-domains that must be evaluated for idealization (procedure Propagate morphology analysis). Depending on the connection configuration between the two primitives Pi, one of them or both are cut along the contour of IG to produce the new sub-domains. Then, the MAT is applied to these new sub-domains to update their morphology parameter (procedure MA morphology analysis) that reflects the effect of the corresponding merging operation taking place between the two Pi along IG that stiffens some areas of the two primitives Pi involved. The algorithm terminates when all configurations of C1 have been processed. Among the key features of this algorithm, it has to be observed that the influence 2Here again like in Section 5.3.1, Pi designates also the set of sub-domains Dkj that can result from the decomposition of a primitive Pk when merging it with some other Pl sharing an interface with Pk. 
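Algorithm 2 below assumes that each interface IG carries a configuration label (IG.config). A hedged sketch of how such a label can be derived from the base-face criterion described in Section 5.3.2 is given here; the helper and its boolean arguments are illustrative, and in practice the intersection tests would be performed on the 3D base faces of the primitives.

def classify_connection(s1_has_free_base, s2_has_free_base):
    """Classify an interface IG between sub-domains S1 and S2.

    A 'free base' means that one of the sub-domain's base faces does not
    intersect the other sub-domain (Fb1S1 / Fb1S2 in Figure 5.9c).
    Configurations of group C1 (types 1 and 4) must be decomposed before
    idealization; group C2 (types 2 and 3) needs no further processing.
    """
    if s1_has_free_base and s2_has_free_base:
        return ("C1", "type (1)")    # both sub-domains stiffen each other
    if s1_has_free_base or s2_has_free_base:
        return ("C1", "type (4)")    # only one sub-domain is stiffened
    return ("C2", "type (2) or (3)")

print(classify_connection(True, True))      # ('C1', 'type (1)')
print(classify_connection(False, True))     # ('C1', 'type (4)')
print(classify_connection(False, False))    # ('C2', 'type (2) or (3)')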
137Chapter 5: Performing idealizations from construction graphs Algorithm 2 Global morphological analysis 1: procedure P ropagate morphologyanalysis(GD, xu) � The main procedure to extend morphological analyses of sub-domains to the whole object 2: for each P in list prims(GD) do 3: if P.x > xu then � If the primitive has to be idealized 4: for each IG in list interfaces prim(P) do 5: P ngh = Get connectedprimitive(P, IG) 6: if IG.conf ig = 1 or IG.conf ig = 4 then 7: interV ol = Get interfaceV ol(P, P ngh, IG) 8: Pr = Remove interfaceV ol(P, interV ol) � Update the primitive by removing the volume resulting from interfaces with neighbors 9: for i = 1 to Card(Pr) do � New morphological analysis of the partitions Pr 10: P� = Extract partition(i, Pr) 11: P� .x = MA morpho analysis(P� ) 12: P ngh.x = MA morph analysis(P ngh) 13: if IG.conf ig = 1 then 14: if P ngh.x > xu then 15: P rngh = Remove interV ol(P ngh, interV ol) 16: interV ol.x = MA morph analysis(interV ol) 17: for j = 1 to Card(P rngh) do 18: P� ngh = Extract partition(j, P rngh) 19: P� ngh.x = MA morpho analysis(P� ngh) 20: if interV ol.x < xu then � If the interVolume is ‘idealizable’ 21: Merge(P, P ngh, interV ol) � Merge the intervolume either with P or P ngh 22: end if 23: end for 24: else � If the interVolume is not ‘idealizable’ 25: P = P� 26: Merge(P ngh, interV ol) � Merge the interVolume with the neighboring primitive which is non ‘idealizable’ 27: end if 28: Remove prim(P ngh, list prims(GD)) 29: end if 30: if P� .x < xu then � if a partition is not ‘idealizable’ 31: Merge(P ngh, P� ) � Merge the partition with the non ‘idealizable’ primitive neighbor 32: end if 33: end for 34: end if 35: end for 36: end if 37: end for 38: end procedure 39: procedure MA morphology analysis(Pi) � Procedure using the 2D MAT on the extrusion contour of a primitive 40: Cont = Get Contour(Pi) 41: listof pts = Discretize contour(Cont) 42: vor = V oronoi(listof pts) � MAT generated using Voronoi diagram of a set of points 43: maxR = Get max radius of inscribed Circles(vor) 44: x = Set primitive idealizableT ype(maxR, Pi) 45: return x 46: end procedure 138Chapter 5: Performing idealizations from construction graphs Init CAD model Modeling processes Evaluation of sub domains morphology Same results of morphology analysis IG (4) (4) IG A B Modeling processes OR New Sub domains Volume Interface I’G New Sub domains Volume Interface I’G Figure 5.10: Propagation of the morphology analysis of each Pi to the whole object M. A and B illustrates two different sets of primitives decomposing M and numbers in brackets refer to the configuration category of interfaces (see Section 5.3.2.) of the primitive neighbor Pngh of Pi, is taken into account with the update of Pi that becomes Pr. Indeed, Pr can contain several volume partitions, when Card(Pr) > 1, depending on the shapes of Pi and Pngh. Each partition P� of Pr may exhibit a different morphology than that of Pi, which is a more precise idealization indication for the engineer. In case of configuration 1, the overlapping area between Pngh and Pi must be analyzed too, as well as its influence over Pngh that becomes Prngh . Here again, Prngh may exhibit several partitions, i.e., Card(Prngh ≥ 1), and the morphology of each partition P� ngh must be analyzed. If the common volume of P� ngh and P� is not idealizable, it is merged with either of the stiffest sub-domains Pngh or Pi to preserve the sub-domain the most suited for idealization. 
Among the key features of this algorithm, it has to be observed that the influence of the primitive neighbor Pngh of Pi is taken into account with the update of Pi, which becomes Pr. Indeed, Pr can contain several volume partitions, when Card(Pr) > 1, depending on the shapes of Pi and Pngh. Each partition P' of Pr may exhibit a different morphology than that of Pi, which gives the engineer a more precise idealization indication. In case of configuration 1, the overlapping area between Pngh and Pi must be analyzed too, as well as its influence over Pngh, which becomes Prngh. Here again, Prngh may exhibit several partitions, i.e., Card(Prngh) ≥ 1, and the morphology of each partition P'ngh must be analyzed. If the common volume of P'ngh and P' is not idealizable, it is merged with the stiffer of the two sub-domains Pngh or Pi in order to preserve the sub-domain most suited for idealization. In case a partition P' of Pr is not idealizable in configuration 4, this partition can be merged with Pngh if it has a similar morphological status.

Figure 5.10 illustrates this approach with two modeling processes of a simple component. Both processes contain two primitives to be idealized by plate elements and interacting through a surface interface of type (4). The stiffening effect of one primitive on the other creates three sub-domains with interfaces I'G of type (2). The sub-domain in violet, interacting with both sub-domains to be idealized, can be merged with each of the other sub-domains to create a fully idealized geometry, or it can be modeled with a specific connection defined by the user.

Full examples of the extension of the morphological analysis to the whole object M, using the interfaces IG between the primitives of GD, are given in Figure 5.11. Figures 5.11a, b and c show the sub-domain decomposition obtained after processing the interfaces IG between primitives Pi of each object M. The same figures also illustrate the update of the morphology criterion on each of these sub-domains when they are iteratively merged through Algorithm 2 to form their initial object M. Areas A and B show the stiffening effect of configurations of category (1) on the morphology of sub-domains of M. Areas C and D are examples of the subdivision produced with configurations of type (4) and of the stiffening effects obtained, which are characterized by changes in the morphology criterion values.

Figure 5.11: Propagation of the morphology analysis on Pi to the whole object M. a, b and c illustrate the influence of the morphology analysis propagation. The analyzed sub-domains are iteratively connected together to form the initial object. T and B indicate the top and bottom views of the same object, respectively.

After applying Algorithm 2, one can notice that every sub-domain strictly bounded by one interface IG of C2, or by one interface I'G produced by this algorithm, gives precise idealization information about an area of M. Areas exhibiting connections of type (1) on one or two opposite faces of a sub-domain also give precise information, which is the case for the examples of Figure 5.11. However, if more configurations of type (1) are piled up, further analysis is required and will be addressed in the future.

Conclusion

This section has shown how a CAD component can be prepared for idealization. The initial B-Rep geometry has been segmented into a set of extrusion primitives Pi using its construction graph GD. Using a taxonomy of geometric interfaces, a morphological analysis has been applied to these primitives to identify the 'idealizable' areas over the whole object. As a result, the geometric model is partitioned into volume sub-domains which can be either idealized with shells or beams, or not idealized at all. At that stage, only the segmentation of a standalone component has been analyzed. Neither the compatibility with an assembly structure nor the influence of external boundary conditions has been addressed yet, as this is the purpose of the next section.
Figure 5.12: Influence of an assembly interface modeling hypothesis over the transformations of two components.

5.4 Influence of external boundary conditions and assembly interfaces

As explained in Section 4.7, an assembly model is composed of a set of 3D solid components linked to each other through functional interfaces. A boundary condition that is external to the assembly, as defined in Section 1.4.2, also acts as an interface between the object and its external environment. Figure 5.12 illustrates two types of boundary conditions: a load acting on component C1 and a rigid connection with the environment on C2. These areas are defined by the user and are represented as geometric regions on the component's B-Rep surface, which is equivalent to an assembly interface, except that the interface is represented on one component only. Each component of the assembly can be segmented using its respective construction graph. However, the segmentation of components generates new geometric interfaces between primitives which can be influenced by the assembly interfaces. Therefore, this section aims at studying the role of assembly interfaces and boundary conditions in the idealization process. They can be analyzed either before the morphological analysis, as input data, or after the segmentation of components.

Impact of the interface modeling hypotheses

Depending on the simulation objectives, the engineer decides if he, resp. she, wants to apply some mechanical behavior over some assembly interfaces (see Table 1.2). This first choice highly influences the components' idealization. As highlighted in Section 6.2.1, the engineer may decide not to apply any mechanical behavior at a common interface between two components, e.g., with the definition of rigid connections between their two mesh areas to simulate a continuous medium between components at this interface. This modeling hypothesis at assembly interfaces directly influences the geometric transformations of components. As illustrated in Figure 5.12, a set of components connected by rigid connections can be seen as a unique component after merging them. Therefore, to reduce the complexity of the FEA pre-processing, the morphological analysis can be applied to this unique component instead of applying it to each component individually.

In case the engineer wants to assign a mechanical behavior to interfaces between components, these interfaces ought to appear in the final idealized model. Now, two moments can be identified at which this assignment can take place during pre-processing: a) at the end of the idealization process, i.e., once the components have been idealized; b) during the segmentation process, i.e., during the construction graph generation of each component or during their morphological analysis. These two options are addressed in the following parts of this section. In this section, only the geometric aspect of assembly interfaces is addressed. The transfer of meta-information, e.g., friction coefficient or contact pressure, is not discussed here.
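Because rigid interfaces amount to merging the components they connect, identifying the groups of components to merge reduces to computing connected components in the graph of rigid connections. The sketch below shows this step under simple assumptions (interfaces given as labeled pairs, networkx available); it is an illustration rather than the thesis implementation.

import networkx as nx

def group_rigidly_connected(components, interfaces):
    # components: iterable of component identifiers.
    # interfaces: (comp_a, comp_b, behavior) triples, where behavior is the
    # mechanical hypothesis assigned by the engineer ('rigid', 'contact', ...).
    g = nx.Graph()
    g.add_nodes_from(components)
    g.add_edges_from((a, b) for a, b, behavior in interfaces if behavior == "rigid")
    # Each connected component becomes one merged domain, e.g. C3 = C1 U C2.
    return [set(group) for group in nx.connected_components(g)]

groups = group_rigidly_connected(
    ["C1", "C2", "C3"],
    [("C1", "C2", "rigid"), ("C2", "C3", "contact")])
# groups == [{'C1', 'C2'}, {'C3'}]: C1 and C2 are merged before the morphological analysis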
Applying assembly interface information after components' idealization

In a first step, this part of the section studies the consequences of integrating assembly interfaces at the end of the idealization process, i.e., option (a) mentioned above. These assembly interfaces represent information that is geometrically defined by the interactions between components. These interactions have a physical meaning because the contacts between components exist physically in the final product, though a physical contact may not always be represented in a DMU as a common area between the boundaries of two components [SLF∗13] (see Section 1.3.2). For the sake of simplicity, let us consider that physical contacts are simply mapped to common areas between components. Then, the assembly interfaces are initially prescribed, in contrast with the geometric interfaces IG between primitives of a component, which have been created during its segmentation process and aim at facilitating the geometric transformations performed during the idealization process of that component.

One can observe that these assembly interfaces have to be maintained during the dimensional reduction operations of each component. However, these interfaces can hardly be transferred using only the information provided by the idealized models. For example, Figure 5.13b shows that a projection of the assembly interface on the idealized model of C1 could generate narrow areas which would be difficult to mesh and, on that of C2, could produce areas that fall outside the idealized model of C2. This assembly interface has been defined on the initial solid models of C1 and C2. If this link is lost during the idealization process, a geometric operator, e.g., the projection operator just discussed, has to recover this information to adapt this interface to the idealized assembly model. Therefore, to obtain robust transformations of assembly interfaces, these interfaces have to be preserved during the dimensional reduction processes of each component. Each of these interfaces, as a portion of the initial solid boundary of a component, has a corresponding representation in its idealized model. This equivalent image would have to be obtained through the transformations applied to the initial solid model of this component to obtain its idealized representation.

Figure 5.13: Illustration of the inconsistencies between an assembly interface defined between initial CAD components C1 and C2 (a) and its projection onto their idealized representations (b).

Integration of assembly interfaces during the idealization process

In a second step, this part of the section addresses option (b) mentioned above. As stated in Section 5.4, assembly interfaces have to be generated before the dimensional reduction of assembly components. Now, the purpose is to determine at which stage of the proposed morphological analysis process the interfaces should be integrated. This analysis incorporates the segmentation process, which prepares a component shape for idealization and is dedicated to a standalone 3D solid.
This approach can be extended to an assembly model from two perspectives described as follows:

b1) The assembly interfaces and boundary conditions can be used to monitor the definition of specific primitives, e.g., primitives containing the whole assembly interface. Figure 5.14a illustrates such a decomposition with two components C1 and C2 fitting the assembly interface with only one primitive in both segmentations. The benefit of this approach lies in avoiding splitting assembly interfaces across various component primitives, which would divide the assembly interface representation across all the primitive boundaries;

b2) The segmentation process is performed independently of the assembly interfaces. Then, they are introduced as additional information when transforming the set of primitives derived from each component, and these interfaces are incorporated into the idealized representation of each component. In this case, the intrinsic property of the proposed segmentation approach is preserved and the assembly interfaces are propagated as external parameters on every primitive they are respectively related to. Figure 5.14b shows the imprints of the assembly interface on the primitives derived from the segmentation of components C1 and C2.

Figure 5.14: Two possible schemes to incorporate assembly interfaces during the segmentation process of components C1 and C2: (a) the assembly interface is used to identify extrusion primitives of C1 and C2 containing the assembly interface entirely in one extrusion contour, (b) the assembly interface is integrated after the segmentation of components C1 and C2 and propagated on each of their primitives.

As a conclusion of the previous analyses, the choice of the idealization process made in this thesis falls into category (b2). Though a fully detailed analysis would bring complementary arguments to each of the previous categories, it appears that category (a) is less robust than category (b), which is an important point when looking for an automation of assembly pre-processing. Within category (b), (b1) leads to a solution that is not intrinsic to the shape of a component, which is the case for (b2). With the current level of analysis, there is no strong argument favoring (b1) over (b2), and (b2) is chosen to keep the level of standalone component pre-processing decoupled from that of the assembly level. Therefore, assembly interfaces do not constrain the extraction of the construction graph GD of each component. During the segmentation process, assembly interfaces are only propagated.

Rigid connections are the only assembly interfaces that can be processed prior to the component segmentation process without interfering with it. Indeed, the first part of this section has shown that these interfaces lead to merging the components they connect. Consequently, rigid interfaces can be removed from the initial CAD components once the corresponding components have been merged, which simplifies the geometry to be analyzed. From a mechanical point of view, this operation is equivalent to extending the continuum medium describing each component because their material parameters and other mechanical parameters are strictly identical.
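As a small 2D analogy of option (b2) retained above, the following sketch shows how an assembly interface region can be imprinted a posteriori on the primitives produced by the segmentation through a simple Boolean intersection, each non-empty imprint being attached to its primitive as an external parameter for the subsequent morphological analysis and dimensional reduction. The geometry, the names and the use of shapely are illustrative assumptions only.

from shapely.geometry import box

assembly_interface = box(2.0, 0.0, 6.0, 1.0)      # interface zone on the component skin
primitives = {"P1.1": box(0.0, 0.0, 4.0, 1.0),     # sub-domains issued from segmentation
              "P1.2": box(4.0, 0.0, 8.0, 1.0)}

imprints = {name: assembly_interface.intersection(face)
            for name, face in primitives.items()
            if assembly_interface.intersects(face)}
# Both P1.1 and P1.2 receive a portion of the interface imprint, as in Figure 5.14b.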
Then, the other assembly interfaces and external boundary conditions, where a mechanical behavior has to be represented, are propagated through the segmentation process and taken into account during the dimensional reduction process. Chapter 6 carries on with the analysis of interactions between simulation objectives, hypotheses and shape transformations for assembly pre-processing. This helps structure the preparation process of an assembly in terms of methodology and scope of shape transformation operators. Section 6.4 shows an example of automated idealization of an aeronautical assembly using the idealization process presented in this chapter, which is also taken into account in the methodology set out in Chapter 6. Now that the roles of assembly interfaces and external boundary conditions have been clarified, the next section focuses on the dimensional reduction of a set of extrusion primitives connected through geometric interfaces IG. The objective is to set up a robust idealization operator enabling the dimensional reduction of extrusion primitives and performing idealized connections between medial surfaces through the analysis of the interface graph GI of an object M.

5.5 Idealization processes

Having decomposed an assembly component M into extrusion primitives Pi, the last phase of the idealization process consists in the generation and connection of idealized models of each primitive Pi. Now, the interfaces IG between the Pi are precisely identified and can be used to monitor the required deviations regarding medial surfaces. These deviations are needed to improve the idealization process and to take into account the engineer's know-how when preparing a FE model (see discussions of Chapter 2). Based on the morphological analysis described in Sections 5.2 and 5.3, each Pi has a shape which can be classified in idealization categories of type plate, shell, beam or 3D thick solid. Therefore, depending on Pi's morphological category, a dimensional reduction operator can be used to generate its idealized representation (see the sketch after the list below). The geometric model of the idealized Pi is:

• A planar medial surface when Pi has been identified as a plate. This surface corresponds to the extrusion contour offset by half the thickness of this plate along the extrusion direction;

• A medial surface when the primitive has been identified as a shell (see the detailed taxonomy in Tables C.1, C.2). This medial surface is generated as the extrusion of the medial line extracted from the extrusion contour of Pi. This medial line can be generated by applying the 2D MAT to the extrusion contour, as proposed by Robinson et al. [RAM∗06]. The shell thickness varies in accordance with the diameter of the circles inscribed in the extrusion contour;

• A medial line when the primitive has been identified as a beam. This line is generated through the extrusion of the point representing the barycenter of the extrusion contour if the beam direction is aligned with the extrusion direction. If the beam direction is orthogonal to the extrusion direction, the medial line corresponds to the medial line of the extrusion contour, offset by half of the extrusion distance;

• The volume domain of Pi when Pi is identified as a 3D thick solid, since every Pi reduces to an extrusion primitive.
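The dispatch over the morphological categories listed above can be summarized as follows. This sketch only returns a symbolic description of the target idealized model, since the actual constructions rely on a CAD kernel; the class and attribute names are assumptions made for the illustration.

from dataclasses import dataclass
from enum import Enum

class Morphology(Enum):
    PLATE = "plate"
    SHELL = "shell"
    BEAM = "beam"
    THICK_SOLID = "thick_solid"

@dataclass
class ExtrusionPrimitive:
    name: str
    morphology: Morphology
    thickness: float                    # extrusion distance
    beam_along_extrusion: bool = True   # beam axis aligned with the extrusion direction

def dimensional_reduction(p: ExtrusionPrimitive) -> dict:
    # Return a symbolic description of the idealized representation of p.
    if p.morphology is Morphology.PLATE:
        # planar medial surface: extrusion contour offset by half the thickness
        return {"kind": "planar_medial_surface", "offset": p.thickness / 2.0}
    if p.morphology is Morphology.SHELL:
        # medial surface: extrusion of the 2D medial line (2D MAT) of the contour,
        # with a thickness driven by the inscribed-circle diameters
        return {"kind": "medial_surface", "source": "2D MAT of the extrusion contour"}
    if p.morphology is Morphology.BEAM:
        # medial line: extrusion of the contour barycenter, or medial line of the
        # contour offset by half the extrusion distance when the beam is orthogonal
        source = ("barycenter of the extrusion contour" if p.beam_along_extrusion
                  else "medial line of the contour, offset by half the extrusion")
        return {"kind": "medial_line", "source": source}
    return {"kind": "volume", "note": "3D thick solid kept as is"}

print(dimensional_reduction(ExtrusionPrimitive("P1", Morphology.PLATE, 2.0)))
# {'kind': 'planar_medial_surface', 'offset': 1.0}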
Now that each primitive Pi is idealized individually, the purpose of the following section is to show how the medial surfaces can be robustly connected based on the taxonomy of interfaces IG illustrated in Figure 5.9.

5.5.1 Linking interfaces to extrusion information

From the construction graph GD and the geometric interfaces IG between its primitives Pi, the interface graph GI can be derived. Figure 5.15 illustrates GI for a set of extrusion primitives extracted from one component of the aeronautical use-case presented in Figure 1.6.

Figure 5.15: Illustration of an interface graph containing the IGs derived from the segmentation process of M producing GD.

In GI, each node, named Ni, is a primitive Pi ∈ GD and each arc is a geometric interface IG between any two primitives Pi and Pj, as IG has appeared during the segmentation process of M. In a first step, GI is enriched with the imprints of the boundary of each IG on each primitive Pi and Pj that defines this IG. The jth boundary of an IG w.r.t. a primitive Pi is noted Cj(Pi). A direct relationship can be established between Cj(Pi) and the information related to the extrusion property of Pi. The interface boundary Cj(Pi) is classified in accordance with its location over ∂Pi. To this end, each node Ni of GI representing Pi is subdivided into the subsets Ni(Fb1), Ni(Fb2), Ni(Fl), which designate its base face Fb1, its base face Fb2 and its lateral faces Fl, respectively. Then, Cj(Pi) is assigned to the appropriate subset of Ni. As an example, if Cj(Pi) has its contours located solely on the base face Fb1 of Pi, it is assigned to Ni(Fb1), and if Cj(Pi) belongs to at least one of the lateral faces Fl, it is assigned to Ni(Fl). Figure 5.16 illustrates the enrichment of the interface graph GI of a simple model containing three primitives P1, P2 and P3. For example, the boundary C1(P1), resulting from the interaction between P1 and P3, is assigned to Fb1 of P1. Reciprocally, the equivalent C1(P3) refers to a lateral face of P3.

The following step determines potential interactions of the Cj(Pi) over Pi. When a pair of Cj(Pi) shares a common geometric area, i.e., their Boolean intersection is not null:

Cj(Pi) ∩ Ck(Pi) ≠ ∅,   (5.1)

the resulting intersection produces common points or curve segments that are defined as an interface between the pair of interface boundaries (Cj(Pi), Ck(Pi)), and the nth such interface is noted IDnCj/Ck. Three interfaces between the Cj(Pi) have been identified on the example of Figure 5.16, e.g., ID1C1/C2 represents the common edge interaction between C1(P1) and C2(P1). These new relations between the Cj(Pi) form a graph structure GID where the nodes represent the boundaries Cj(Pi) and the arcs define the interfaces IDnCj/Ck. The graph structure GID related to a primitive Pi is strictly nested into the ith node of GI. More globally, the graph structure GID is nested into GI. The graph structures GID derived from the relations between the boundaries of interfaces IG of each Pi can be 'merged' with the interface graph GI. Let us call GS this graph (see Figure 5.16d).
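A lightweight way to represent GI and the nested GID structures is sketched below with networkx; the node keys, attribute names and the toy configuration (mirroring the three primitives of Figure 5.16) are illustrative choices rather than the thesis data model.

import networkx as nx

G_I = nx.Graph()     # nodes: primitives Pi; arcs: geometric interfaces IG
G_ID = {}            # primitive -> nested graph of its interface boundaries Cj(Pi)

def add_interface(pi, pj, ig, boundary_pi, boundary_pj):
    # boundary_*: (label, subset) where subset is 'Fb1', 'Fb2' or 'Fl', i.e. the
    # part of the primitive boundary (base or lateral faces) carrying the imprint.
    G_I.add_edge(pi, pj, interface=ig, type=4)
    for prim, (label, subset) in ((pi, boundary_pi), (pj, boundary_pj)):
        G_ID.setdefault(prim, nx.Graph()).add_node(label, subset=subset, interface=ig)

def add_boundary_interface(prim, c_a, c_b, id_label):
    # Record ID_Cj/Ck when two imprints on 'prim' share common points or curve segments.
    G_ID[prim].add_edge(c_a, c_b, id=id_label)

# Toy version of Figure 5.16: P3 lies across P1 and P2
add_interface("P1", "P3", "I_P1/P3", ("C1(P1)", "Fb1"), ("C1(P3)", "Fl"))
add_interface("P2", "P3", "I_P2/P3", ("C1(P2)", "Fb1"), ("C2(P3)", "Fl"))
add_interface("P1", "P2", "I_P1/P2", ("C2(P1)", "Fl"), ("C2(P2)", "Fl"))
add_boundary_interface("P1", "C1(P1)", "C2(P1)", "ID1_C1/C2")
add_boundary_interface("P2", "C1(P2)", "C2(P2)", "ID2_C1/C2")
add_boundary_interface("P3", "C1(P3)", "C2(P3)", "ID3_C1/C2")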
5.5.2 Analysis of GS to generate idealized models

Using GS, algorithms may be applied to identify specific configurations of connections between idealized primitives. These algorithms are derived from current industrial practices of idealized FEM generation from B-Rep models. Specific configurations of interface connections can be identified automatically from GS while allowing the engineer to locally modify the proposed results based on his, resp. her, simulation hypotheses. Thus, nodes in GS are either of type Cj(Pi), if there exists a path between the Cj(Pi) in Pi, or of type Pi, if there is no such path. Arcs are built up on either the IG or the IDnCj/Ck, depending on the type of node derived from Cj(Pi) and Pi.

Figure 5.16: Enrichment of the graph GI with the decomposition of each node into subsets Ni(Fb1), Ni(Fb2), Ni(Fl). Illustration of an interface cycle between primitives P1, P2 and P3 built from GI and GID. (a) Initial primitive segmentation, (b) GI graph, (c) GID for P1, P2 and P3, (d) GS graph.

To generate a fully idealized model, i.e., a model where the medial surfaces are connected, three algorithms have been developed to identify respectively:

• interface cycles;
• groups of parallel medial surfaces;
• L-shaped primitive configurations.

The locations of medial surfaces are described here with orthogonal or parallel properties for the sake of simplicity. Therefore, each of them can be generalized to arbitrary angular positions as described in Section 5.3.2. Each algorithm is now briefly described.

Interface cycles

Cycles of interfaces are of particular interest to robustly generate connections among idealized sub-domains. To shorten their description, the focus is placed on a common configuration where all the interfaces between primitives are of type (4). To define a cycle of interfaces of type (4), it is mandatory, in a first step, to identify a cycle in GI from the connections between the Pi. In a second step, the structure of connections inside each Pi, as defined in GID, must itself contain a path between the interface boundaries Cj(Pi) that extends the cycle in GI to a cycle in GS = GI ∪ GID. An example of such a cycle is illustrated in Figure 5.16. This level of description of interfaces among sub-domains indicates dependencies between boundaries of medial surfaces. Indeed, such a cycle is key information for the surface extension operator to connect the set of medial surfaces simultaneously. The medial surfaces perpendicular to their interfaces IG (that of P3 in Figure 5.16) have to be extended not only to the medial surfaces parallel to their interfaces (those of P1 and P2 in Figure 5.16), but also in accordance with the extrusion directions of their adjacent primitives. For example, to generate a fully idealized model of the three primitives of Figure 5.16, the corner point of the medial surface of P3, corresponding to the edge ID3C1/C2, has to be extended to intersect the medial surface of P1 as well as the medial surface of P2. As described, the information available in an interface cycle enables a precise and robust generation of connections among idealized sub-domains. Interface cycles appear as one category of specific idealization processes because they occur frequently in mechanical products and they fall into one category of connection types in the taxonomy of Figure 5.9.
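Building on the toy GI/GID structures sketched at the end of Section 5.5.1, the following function illustrates the detection of interface cycles: a cycle of type (4) interfaces found in GI is retained only if, inside every primitive it traverses, the imprints of the incoming and outgoing interfaces are connected by a path of ID arcs in that primitive's GID, so that the cycle extends to GS. This is a simplified illustration of the idea, not the thesis algorithm.

def boundary_node(prim, ig):
    # Return the imprint Cj(prim) associated with the interface ig in G_ID[prim].
    for node, data in G_ID.get(prim, nx.Graph()).nodes(data=True):
        if data.get("interface") == ig:
            return node
    return None

def interface_cycles(interface_type=4):
    # Keep only the cycles of G_I whose traversal remains connected inside each G_ID.
    edges = [(u, v, d) for u, v, d in G_I.edges(data=True)
             if d.get("type") == interface_type]
    sub = nx.Graph(edges)
    cycles = []
    for cycle in nx.cycle_basis(sub):
        valid = True
        for i, prim in enumerate(cycle):
            prev_p, next_p = cycle[i - 1], cycle[(i + 1) % len(cycle)]
            c_in = boundary_node(prim, sub[prev_p][prim]["interface"])
            c_out = boundary_node(prim, sub[prim][next_p]["interface"])
            if (c_in is None or c_out is None
                    or not nx.has_path(G_ID[prim], c_in, c_out)):
                valid = False
                break
        if valid:
            cycles.append(cycle)
    return cycles

print(interface_cycles())   # one validated cycle over P1, P2 and P3 for the toy graphs above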
Groups of parallel medial surfaces

Connections of parallel medial surfaces can be handled with medial surface repositioning (see P1 and P2 in Figure 5.17a), corresponding to the adjustment of the material thickness on both sides of the idealized surface to generate a mechanical model consistent with the shape of M. This is a current practice in linear analysis that has been advantageously implemented using the relative position of extrusion primitives.

Figure 5.17: Examples of medial surface positioning improvement. (a) Offset of parallel medial surfaces, (b) offset of L-shaped medial surfaces.

These groups of parallel medial surfaces can be identified in the graph GI as the set of connected paths containing edges of type (2) only. Figure 5.18a shows two groups of parallel medial surfaces extracted from the GI presented in Figure 5.16. As a default processing of these paths, the corresponding parallel medial surfaces are offset to a common average position of the medial surfaces, weighted by their respective areas. However, the user can also snap a medial surface to the outer or inner skin of an extrusion primitive whenever this prescription is compatible with all the primitives involved in the path. Alternatively, he, or she, may even specify a particular offset position. Surfaces are offset to the reference plane as long as the surface remains within the limits of the original volume of the component M. This restriction avoids generating interferences between the set of parallel primitives and the other neighboring primitives. For example, in Figure 5.18, the resulting medial surface of the group of parallel primitives containing P1 and P2 cannot intersect the volumes of its perpendicular primitives such as P3. This simple process points out the importance of categorizing the interfaces between primitives. Like interface cycles, groups of parallel medial surfaces refer to the taxonomy of Figure 5.9, where they fall into the type (2) category.
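The default repositioning of a group of parallel medial surfaces can be expressed as an area-weighted average of their signed offsets along the common normal, as in the small sketch below; the data representation is an assumption made for the illustration, and neither the snapping options nor the containment check within M are shown.

def common_offset(surfaces):
    # surfaces: list of dicts with 'offset' (signed position along the shared normal)
    # and 'area' (area of the corresponding medial surface).
    total_area = sum(s["area"] for s in surfaces)
    return sum(s["offset"] * s["area"] for s in surfaces) / total_area

# Two parallel plates with offsets 0.0 and 4.0 and areas 300 and 100 share the plane at 1.0
assert common_offset([{"offset": 0.0, "area": 300.0},
                      {"offset": 4.0, "area": 100.0}]) == 1.0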
L-shaped primitive configurations

When processing an interface of type (4) in GI, if an interface boundary Cj(Pi) is located either on or close to the boundary of the primitive Pj which is parallel to the interface (see P1 and P2 in Figure 5.17b or P2 and P3 in Figure 5.18c), the medial surface needs to be relocated to avoid meshing narrow areas along one of the sub-domain boundaries (here, P3 is moved according to d3). This relocation is mandatory because, Cj(Pi) being on or close to the boundary of Pj, the distance between the idealized representation of Pi and the boundary of Pj is of the order of magnitude of the thickness of Pi. Because Pi is idealized, the dimension of the FEs is much larger than the thickness of Pi, hence meshing the areas between Cj(Pi) and the boundary of Pj would necessarily result in badly shaped FEs. The corresponding configurations are designated as L-shaped because Pi and Pj are locally orthogonal or close to orthogonal.

Figure 5.18: Example of identification of a group of parallel medial surfaces and border primitive configurations from the interface graph GI: (a) extraction of type (2) subgraphs from GI, (b) set of L-shaped primitives extracted from GI, (c) initial and idealized configurations of M when groups of parallel primitives and L-shaped configurations have been processed.

Although this configuration refers to mesh generation issues, which have not been addressed yet, L-shaped configurations where a subset of Cj(Pi) coincides with the boundary of a connected primitive (see P2 in Figure 5.18c) can be processed unambiguously without mesh generation parameters, as justified above. Processing configurations where Cj(Pi) is only close to a primitive contour requires handling mesh parameters and is left for future work. Primitives connected through interfaces of type (4) only, as illustrated in Figure 5.18b, are part of L-shaped configurations if they have at least one primitive contour Cj(Pi) close to the boundary of Pj. In Figure 5.18b, only P11, which is located in the middle of P9 and P14, is not considered as an L-shaped primitive. L-shaped configurations can be processed using the precise location of IG so that the repositioning performed stays within IG to ensure the consistency of the idealized model.

Identification criteria of Pi details

The relationships between extrusion information and primitive interfaces may also be combined to analyze more precisely the morphology of standalone primitives, such as small protrusions that can be considered as details. As an example, Figure 5.19 shows the interaction between a primitive P1, which can be idealized as a plate, and a primitive P2, which is morphologically not idealizable.

Figure 5.19: Example of a volume detail configuration lying on an idealized primitive.
The interface graph enriched with GID indicates that the boundary C1(P1) lies on a base face, Fb1, of P1, whose boundary is used to idealize P1. Then, if the morphological analysis of Fb1 is such that the face F = (Fb1 −* FC1(P1)), i.e., the base face from which the area covered by C1(P1) has been removed, is still idealizable, this means that P2 has no morphological influence relative to P1, even though P2 is not idealizable. As a result, P2 may be considered as a detail of P1 and processed accordingly when generating the mesh of P1, P2. This simple example illustrates how further analyses can be derived from the graph structures GID and GI. Identifying details using the morphological properties of primitives is a way to be independent from the FE mesh size. With the proposed idealization process, a volume can be considered as a detail with respect to the surrounding geometry before the operation of dimensional reduction. This is an a priori approach satisfying the requirement of Section 2.2.1, which stated that a skin detail cannot be directly identified in an idealized model. Though the criterion described above is not generic, it illustrates the ability of the graph structures GID and GI to serve as a basis for other criteria covering a much wider range of configurations where skin and topological details could be identified with respect to the idealization processes. A completely structured approach regarding these categories of details is part of future work.

5.5.3 Generation of idealized models

To illustrate the benefits of the interface graph analyses of GID and GI, which have been used to identify specific configurations with the algorithms described in Section 5.5.2, an operator has been developed to connect the medial surfaces. Once the groups of parallel medial surfaces have been correctly aligned, the medial surfaces involved in interfaces of type (4) are connected using an extension operator. Because the precise locations of the interfaces between primitives Pi and Pj are known through their geometric imprints Cj(Pi) on these primitives, the surface requiring extension is bounded by the imprint of Cj(Pi) on the adjacent medial surface. The availability of detailed interface information in GI and GID increases the robustness of the connection operator and prevents the generation of inconsistent surfaces located outside interface areas.

Connection operator

Firstly, the connection operator determines the imprints Cj(Pi) on the corresponding medial surface of the primitive Pj. This image of Cj(Pi) on the medial surface of the neighboring primitive is noted Img(Cj(Pi)). Figure 5.20a shows, in red, three interface boundaries on the medial surfaces, Img(Cj(Pi)), of the three primitives P1, P2, P3. When adjusting Cj(Pi), the medial surface boundary of Pi is also transferred onto Img(Cj(Pi)). Such regions, in green in Figure 5.20a, are noted ImgMS(Cj(Pi)). The next step extends the medial surfaces involved in interfaces of type (4). The medial surfaces are extended from Img(Cj(Pi)) to ImgMS(Cj(Pi)) (from the red lines to the green lines in Figure 5.20a). The extensions of the medial surfaces split Img(Cj(Pi)) into two or more sub-regions. The sub-regions which contain edges coincident with edges of the primitive medial surface are removed to avoid small mesh areas.
Figure 5.20: (a) Representation of the interface imprints on primitives and on medial surfaces. (b) Connection process of two primitives P1 and P2 with and without offsetting their medial surfaces.

Figure 5.21: Idealization process of a component that takes advantage of its interface graph structures GID and GI.

It must be noticed that the regions ImgMS(Cj(Pi)) lie within Pj and can be obtained easily as a translation of Cj(Pi). Therefore, this operation is not comparable to a general-purpose projection operator, for which the existence and uniqueness of a solution is a weakness. Here, ImgMS(Cj(Pi)) always exists and is uniquely defined from Cj(Pi), based on the properties of extrusion primitives. The interface cycles previously detected are used to identify intersection points within ImgMS(Cj(Pi)). Figure 5.20a illustrates the interaction between the three ImgMS(Cj(Pi)), corresponding to the intersection between the three green lines. When processing L-shaped primitives, in case the medial surface of Pi is shifted to the boundary of Pj, the corresponding images Img(Cj(Pi)) and ImgMS(Cj(Pi)) are also shifted with the same direction and amplitude. This update of these images preserves the connectivity of the idealized model when extending medial surfaces. Figure 5.20b illustrates the connection process of two primitives P1 and P2 with and without moving the medial surface of the primitive P2. This figure shows how the connection between the idealized representations of P1 and P2 can preserve the connectivity of the idealized model.

Results of idealized models

As shown in Figures 5.18b and c, the repositioning of the medial surfaces inside P1, P2 and P3 improves their connections and the overall idealized model. Figure 5.21 illustrates the idealization of this component. Firstly, the medial surface of each primitive is generated. Then, the groups of parallel medial surfaces are aligned before the generation of a fully connected idealized model.

Finally, the complete idealization process is illustrated in Figure 5.22. The initial CAD model is segmented using the construction graph generation of Chapter 4 to produce GD. It produces a set of volume primitives Pi with interfaces between them, resulting in the graph structures GI and GID. A morphological analysis is applied to each Pi as described in Section 5.3.1. Here, the user has applied a threshold ratio
Here, the user has applied a threshold ratio 155Chapter 5: Performing idealizations from construction graphs Idealized Mesh model Init CAD model Segmented model Interfaces Dimensional reduction Analysis of interfaces Final CAD idealized model Morphological analysis Figure 5.22: Illustration of the successive phases of the idealization process (please read from the left to the right on each of the two rows forming the entire sequence). xu = 2 and an idealization ratio xr = 10. Using these values, all the primitives are considered to be idealized as surfaces and lines. The final CAD idealized model is generated with the algorithms proposed in Section 5.5.3 and exported to a CAE mesh environment (see Chapter 6). 5.6 Conclusion In this chapter, an analysis framework dedicated to assembly idealization has been presented. This process exploits construction graphs of components that produce their segmentation into primitives. Morphological criteria have been proposed to evaluate each primitive with respect to their idealization process. The benefits of generative process graphs have been evaluated in the context of idealization processes as needed for FEA. A morphological analysis forms the basis of an analysis of ’idealizability’ of primitives. This analysis takes advantage of geometric interfaces between primitives to assess stiffening effects that potentially propagate across the primitives when they are iteratively merged to regenerate the initial component and to locate idealizable subdomains over this component. Although the idealization concentrates on shell and plates, it has been observed that the morphological analysis can be extended to derive beam idealizations from primitives. This morphological analysis also supports the characterization of geometric details in relation to local and to idealizable regions of a component, independently of any nu- 156Chapter 5: Performing idealizations from construction graphs merical method used to compute solution fields. Overall, the construction graph allows an engineer to access non trivial variants of the shape decomposition into primitives, which can be useful to evaluate different idealizations of a component. Finally, this decomposition produces an accurate description into sub-domains and into geometric interfaces which can be used to apply dimensional reduction operators. These operators are effectively robust because interfaces between primitives are precisely defined and they combine with the primitives to bound their idealized representations and monitor the connections of the idealized model. The principle of component segmentation appears also to be compatible with the more general needs to process assembly models. Indeed, components are sub-domains of assemblies and interfaces are also required explicitly to be able to let the engineer assign them specific mechanical behavior as needed to meet the simulation objectives. The proposed idealization process can now take part to the methodology dedicated to the adaption of a DMU to FE assembly models, as described in the next chapter. 157Chapter 5: Performing idealizations from construction graphs 158Chapter 6 Toward a methodology to adapt an enriched DMU to FE assembly models Having detailed the idealization process as a high-level operator taking benefits from a robust shape enrichment, this chapter extends the approach toward a methodology to adapt an enriched DMU to FE assembly models. 
Shape transformations resulting from user-specified hypotheses are analyzed to extract preprocessing tasks dependencies. These dependencies lead to the specification of a model preparation methodology that addresses the shape transformation categories specific to assemblies. To prove the efficiency of the proposed methodology, corresponding operators have been developed and applied to an industrial DMU. The obtained results point out a reduction in preparation times compared to purely interactive processes. This time saved enables the automation of simulation processes of large assemblies. 6.1 Introduction Chapter 3 set the objectives of a new approach to efficiently adapt CAD assembly models derived from DMUs as required for FE assembly models. Chapters 4 and 5 significantly contributed to solve two issues regarding the proposed approach. The first challenge addresses the internal structure of CAD components that has to be improved to provide the engineer with a robust segmentation that can be used as basis for a morphological analysis. The second challenge deals with the implementation of a robust idealization process automating the tedious tasks of dimensional reduction operations and particularly the treatment of connections between idealized areas. Then, the proposed algorithms have been specified to enable the transformations of solid primitives as well as their associated interfaces. The set of solid primitives can result either from a component segmentation or an assembly structure decomposed 159Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models into components in turn decomposed into solid primitives. Thus, the method allows an engineer to transform components’ shapes while integrating the semantics of assembly interfaces. This chapter goes even further to widen the scope of shape transformations at the assembly level and evolve toward a methodology of assembly pre-processing. The aim is to enforce the ability of the proposed approach to challenge the current practices to generate large assembly simulation models. The analysis of dependencies among component shape transformations applied to assemblies will help us to formalize this methodology. Thanks to the geometric interfaces of components and functional information expressed as functional designations of components obtained with the method of Shahwan et al. [SLF∗13] summarized in Section 3.3, new enriched DMU are now available to engineers. Thanks also to the component segmentation into solid primitives and their interfaces that can be used to idealize sub-domains as described in Chapter 5, the models input to FEA pre-processing contains much more information available to automate the geometric transformations required to meet the simulation objectives. The method described in Section 6.1 of this chapter uses this enriched DMU as input to structure the interactions between shape transformations, leading to a methodology which structures the assembly preparation process. To prove the validity of the proposed methodology, Sections 6.3 and 6.4 illustrate it with two test cases of an industrial assembly structure (see Figure 1.6) to create a simplified volume model and an idealized surface model. To this end, new developments are presented that are based on operators that perform shape transformations using functional information to efficiently automate the pre-processing. Section 6.3 develops the concept of template-based transformation operators to efficiently transform groups of components. 
This operator is illustratively applied to an industrial aeronautical use-case with transformations of bolted junctions. Section 6.4 deploys the methodology using the idealization algorithms of Chapter 5 to generate a fully idealized assembly model. Finally, the software platform developed in this thesis is presented at the end of Section 6.4. 6.2 A general methodology to assembly adaptions for FEA Chapter 1 pointed out the impact of interactions between components on assembly transformation. The idealization of components is not the only time consuming task during assembly preparation. When setting up large structural assembly simulations, processing contacts between components as well as transforming entire groups of components are also tedious tasks for the engineer. The conclusion of Section 1.5.4 showed that the shape transformations taking place during an assembly simulation preparation process interact with simulation objectives, hypotheses and functions attached to components and to their interfaces. Therefore, to reduce the amount of time spent 160Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models on assembly pre-processing, the purpose is now to analyze and structure the interactions between shape transformations. This leads to a methodology that structures the assembly preparation process. 6.2.1 From simulation objectives to shape transformations How do shape transformations emerge from simulation objectives and how do they interact between themselves? This is to be analyzed in the following section. However, the intention is not to detail interactions but to focus on issues that help to structure the shape transformations. Transformation criteria related to time that may influence simulation objectives are not relevant, i.e., manual operations that have been performed to save time are irrelevant. Indeed, the purpose is to structure shape transformations to save time and improve the efficiency of preparation processes. 6.2.1.1 Observation areas From the simulation objectives, the structural engineer derives hypotheses that address components and/or interfaces among them, hence the concept of observation area. Even if this engineer has to produce an efficient simplified model of the assembly to meet performance requirements, anyhow he/she must be able to claim that his/her result is correct and accurate enough in critical observations areas that are consistent with the simulation objectives. Therefore, the mechanical model set up in these areas must remain as close as possible to the real behavior of the assembly. Thus, the geometric transformations performed in these areas must be addressed in a first place. As an example, in Figure 6.1, the simulation objective is to observe displacements in the identified region (circled area) due to the effects of local loading configurations, the section of the domain being complex. A possible engineers hypothesis can be to model precisely the 3D deformation in the observation area with a volume model and a fine mesh and set up a coarse mesh or even idealized sub-domains outside the area of interest. To explicit this hypothesis over the domain, the circled area should be delimited before meshing the whole object. During a preparation process, setting up observation areas and thus, subdividing an assembly into sub-domains, independently of the component boundaries and their interfaces, acts as a prominent task. 
Figure 6.1: Setting up an observation area consistent with simulation objectives.

6.2.1.2 Entire idealization of components

Idealizations inherently have a strong impact on shape transformations because of their dimensional reduction. Applied to a standalone component, idealization is meaningful to transform 3D domains down to 1D ones. In the context of assemblies, to meet simulation objectives and performance requirements and to reduce the number of unknowns, the engineer can idealize a component up to a point (0D), e.g., a concentrated mass, or even replace it by a pre-defined solution field, e.g., a rigid body behavior or a spring-damper field.

Figure 6.2: Entire idealization of two components.

When analytical models are available, some groups of components, like the bolts in Figure 6.3a, do not appear geometrically in the FE assembly. The planar flange connected by the bolts, forming the major interface, is used as the location of a section in the FE assembly model to determine the resulting internal forces and moments in that section. Then, the analytical model is independent of the FE one and is fed with these parameters to determine the pre-stress parameters of the bolts. Figure 6.3b illustrates the complete idealization of pulleys as boundary conditions. This time, an analytical model has been used prior to the FE assembly model. Such categories of idealizations can also be applied to a set of connected components (see Figure 6.2). In either case, such transformations have a strong impact on the interfaces between the idealized components and their neighboring ones. Consequently, interfaces between idealized components can no longer be subjected to other hypotheses, e.g., contact and/or friction. Again, this observation highlights the prominence of idealization transformations over interface ones.

Figure 6.3: (a) Transformation of groups of components as analytical models and (b) idealization of components as BCs (courtesy of ANTECIM).

6.2.1.3 Processing interfaces

Interfaces between components are the location of specific hypotheses (see Table 1.2) since they characterize junctions between components. Naturally, they interact with hypotheses and shape transformations applied to the components they connect.

Figure 6.4: Influence of interfaces over shape transformations of components.

Let us consider the example of Figure 6.4. In the first place, a simulation objective can be stated as: modeling the deformation of the assembly with relative movements of plates A, B, C under friction. Under this objective, hypotheses are derived that require modeling interfaces (A, C) and (B, C) with contact and friction. Then, even if A, B and C, as standalone components, are candidates for idealization transformations, they cannot be idealized further because the interfaces would need to be removed, which is incompatible with the hypotheses.
In a second place, another simulation objective can be stated as: modeling the deformation of the assembly where the junctions between plates A, B, C are perfect, i.e., they behave like a continuous medium. There, plates A, B, C can still be idealized as standalone components but the hypothesis on interfaces enables merging the three domains (Figure 6.4b) and idealizing further the components to obtain an even simpler model with variable thickness (see Figure 6.4c). Thus, there are priorities between shape transformations deriving from the hypotheses applied to interfaces. Indeed, this indicates that hypotheses and shape transformations addressing the interfaces should take place before those addressing components as standalone objects. Effectively, interfaces are part of component boundaries; hence 163Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models their transformations modify these boundaries. It is more efficient to evolve the shape of interfaces alone first and to process component shapes, as isolated domains, afterwards. As explained in Section 5.4, once the role of interfaces has been defined in the assembly according to the user’s simulation objectives and the corresponding transformations have been performed, each individual component can be transformed on its own to take into account these interfaces as external boundary conditions during its idealization/simplification process. 6.2.2 Structuring dependencies between shape transformations as contribution to a methodology of assembly preparation Section 6.2.1 has analyzed the relationships between simulation objectives, hypotheses, and shape transformations of assemblies. One outcome of this section structures the dependencies between hypotheses and shape transformations that address an assembly at different levels. The purpose is now to exploit these dependencies to organize the various steps of an assembly simulation preparation process so that it appears as linear as possible to be efficiently automatized. Dependencies of geometric transformations of components and interfaces upon simulation hypotheses Section 6.2.1.1 has shown the dependency of observation areas upon the simulation objectives. Defining observation areas acts as a partitioning operation of an assembly, independently of its components boundaries. Section 6.2.1.2 introduced the concept of entire idealization of components and pre-defined solutions fields. Indeed, the shape transformations derived from Section 6.2.1.2 cover also sub-domains over the assembly that can be designated as ‘areas of weak interest’. There, the assembly interfaces contained in these areas are superseded by the transformations of Section 6.2.1.2. From a complementary point of view, areas of interest, once defined, contain sub-domains, i.e., components or parts of components, that can still be subjected to idealizations, especially transformations of volumes sub-domains into shells/membranes and/or plates. Consequently, areas of weak interest are regarded as primary sub-domains to be de- fined. Then, entire idealization of components and pre-defined solutions fields will take place inside these areas, in a first place (identified as task 1 in Figure 6.5). These areas are necessarily disjoint from the areas of interest, therefore their processing cannot interfere with that of areas of interest. Sections 1.5.4 and 6.2.1.3 have shown that hypotheses about assembly interfaces influence the transformations of component boundaries. 
Hence, these hypotheses must be located outside of areas of weak interest to preserve the consistency of the overall simulation model. Subsequently, these hypotheses about interfaces are known once the 164Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models areas of weak interest have been specified. Consequently, they come as a task 2, after the definition of the areas of weak interest and the corresponding shape transformations of assembly interfaces should be applied at that stage. As highlighted at Sections 1.5.3, 1.5.4, and 6.2.1.3, idealizations are shape transformations having an important impact on component shapes. As mentioned at Section 2.2, the order of detail removal operations and idealizations has not been studied precisely yet. However, once idealizations have been assigned to sub-domains corresponding to primitives Pi of components, these transformations produce new interfaces between these sub-domains (see Figure 5.1) in addition to the assembly interfaces originated from the interactions between components. Independently of skin and topological details, idealizations can be regarded as task 3 in the preparation process flow. Effectively, these new interfaces are the consequences of idealizations of sub-domains that result from idealization processes. Therefore, these new interfaces cannot be processed during the second task. These new interfaces should be processed in a first place after the idealizations performed during the third task. The corresponding shape transformations attached to these new interfaces form task 4. Now, as pointed out at Section 6.2.1.3, idealizations can interact between themselves because the idealized sub-domains can be extended/merged in accordance to their geometric configurations to produce a connected idealized model wherever it is required by the simulation objectives. This new set of shape transformations can be regarded as task 5 that could indeed appear as part of an iterative process spanning tasks three and four. This has not yet been deeply addressed to characterize further these stages and conclude about a really iterative process or not. Even though task two addresses hypotheses attached to assembly interfaces and their corresponding shape transformations, it cannot be swapped with task three to contribute to iterative processes discussed before. Indeed, task 2 is connected to assembly interfaces between components and their processing could be influenced by component idealizations, e.g., in a shaft/bearing junction, idealizing the shaft influences its contact area with the bearings that guide its rotational movement. Hypotheses and shape transformations previously mentioned enable the definition of a mechanical model over each sub-domain resulting from the tasks described above but this model must be available among the entities of CAE software. This is mandatory to take advantage of this software where the FE mesh will be generated. Consequently, if an engineer defines interface transformations consistent with the simulation hypotheses, there may be further restrictions to ensure that the shapes and mechanical models produced are effectively compatible with the targeted CAE software capabilities. For sake of conciseness, this aspect is not addressed here. 
Toward a methodology of assembly model preparation This section has identified dependencies among shape transformations connected to 165Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models Task1 – Definition of areas of weak interest Entire idealization of components, pre-defined solution fields Task2 – Specification and transformations of components interfaces Task3 – Idealization of sub-domains outside the areas of weak interest Task4 – Specification and transformations of interfaces resulting from idealization Task5 – Interaction and transformations of idealized sub-domains Task6 – Skin or topological transformations of sub-domains Figure 6.5: Synthesis of the structure of an assembly simulation preparation process. simulation objectives and hypotheses. Shape details on components can be identified using the morphological analysis, as illustrated in Section 5.5.2. This analysis has shown that the primitives Pi obtained from the construction graph GD could be further decomposed into sub-domains after analyzing the result of a first MAT. This analysis has also shown its ability to categorize sub-domains relatively to each other. However, detail removal, which originates from different components or even represents an entire component, needs to be identified through a full morphological analysis of the assembly. This has not been investigated further and is part of future research. Currently, detail removals can take place after task two but they can be prior or posterior to idealizations. The definition of areas of interest has connections with the mesh generation process to monitor the level of discretization of sub-domains. This definition acts as a partitioning process that can take place at any time during the process flow of Figure 6.5. 166Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models 6.2.3 Conclusion and methodology implementation As a conclusion, the difference between a simulation model of a standalone component and that of an assembly relates to: • The interactions between components. The engineer formulates a hypothesis for each interface between components. These hypotheses derive from assembly simulation objectives; • The ordering of shape transformations. The entire idealization of components and the specification of pre-defined solution fields followed by shape transformations of component interfaces are prioritized; • The interactions between idealizations and assembly interface transformations. To be able to model large assemblies, not only components but groups of components have to be idealized, which can significantly increase the amount of interactions between idealizations and transformations of assembly interfaces. The simulation objectives are expressed through hypotheses that trigger shape transformations. Studying the interactions between simulation objectives, hypotheses, and shape transformations has revealed dependencies between categories of shape transformations. These dependencies have been organized to structure the assembly simulation model preparation process in terms of methodology and scope of shape transformation operators. The proposed methodology aims at successfully selecting and applying the geometric transformation operators corresponding to the simulation objectives of the engineer. Starting from an enriched structure of DMU as proposed in Section 3.3, the purpose of the next sections is to illustrate how this methodology can be applied to industrial use-cases. 
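As a minimal illustration of the first point, the hypothesis an engineer attaches to an assembly interface can be recorded alongside the interface itself; the data structure below is purely illustrative and its field values are invented:

```python
from dataclasses import dataclass

# Illustrative sketch only: a hypothesis attached to an assembly interface,
# as formulated by the engineer from the simulation objectives.

@dataclass
class InterfaceHypothesis:
    component_a: str       # e.g. "plate_A"
    component_b: str       # e.g. "plate_B"
    interface_type: str    # e.g. "planar contact", "cylindrical loose fit"
    behavior: str          # e.g. "perfect (continuous medium)", "contact with friction"
    transformation: str    # e.g. "merge domains", "insert friction sub-domain"

# Example: a perfect junction between two plates, as in Figure 6.4b.
hyp = InterfaceHypothesis("plate_A", "plate_B", "planar contact",
                          "perfect (continuous medium)", "merge domains")
print(hyp)
```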
Two implementations are proposed and both are based on the exploitation of functional features of the assembly using the interfaces between components (see Figures 2.14 and 3.2). As a first methodology implementation, Section 6.3 develops the concept of templatebased operators. This concept uses functional information and the geometry of assembly interfaces to identify configurations such as bolted junctions and to apply specific simulation hypotheses to transform their assembly interfaces. This transformation creates a simplified volume model with a sub-domain decomposition around bolted junctions, as required by the simulation objectives (see Figure 6.6). The second methodology implementation, presented in Section 6.4, leads to a full idealization of an assembly use-case. This implementation confirms that the idealization process of Chapter 5 can be generalized to assembly structures. 167Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models Clearance Fitted contact Hypothesis: bolted junctions represented by a simplified bolt model Geometric transformations: cylindrical volume interface representing a screw/hole clearance transformed into a fitted contact. (a) Figure 6.6: Use-Case 1: simplified solid model with sub-domains decomposition around bolted junctions. Area enlarged (a): Illustration of task 2 that transforms bolted junction interfaces into fitted contacts. 6.3 Template-based geometric transformations resulting from function identifications As illustrated in Section 1.5.4, repetitive configurations, e.g., junctions, and their processing are critical when preparing assembly structures, justifying the need to automate the preparation of large assembly models. To improve the efficiency of DMU transformations for FEA, Section 3.5.2 has proposed to set up relationships between simulation objectives and geometric transformations through the symbolic representation of component functions and component interfaces. The method is based on an enriched DMU as input (see Section 3.3) which contains explicit geometric interfaces between components (contacts and interferences) as well as their functional designations. This enriched DMU has been generated based on the research work of Shahwan et al. [SLF∗12, SLF∗13]. The geometric interfaces feed instances of conventional interfaces (CI) (see Section 1.3.2) classes structured into a taxonomy, TCI , that binds geometric and symbolic data, e.g. planar contact, spherical partial contact, cylindrical interference, . . . Simultaneously, CI and assembly components are organized into a CI graph: CIG(C, I) where the components C are nodes and CI are arcs. Starting from this enriched model, Section 6.3.2 extends the functional structure to reach a level of product functions. Therefore, simulation objectives can be used to specify new geometric operators using these functions to robustly identify the components and assembly interfaces to transform [BSL∗14]. If close to Knowledge Based Engineering (KBE), this scheme is nonetheless more generic and more robust than KBE approaches due to the fact that functional designations and functions are generic concepts. KBE aims at structuring engineering knowledge and at processing it with symbolic representations [CP99, Roc12] using language-based approaches. Here, the focus is on a robust connection between geometric models and symbolic representations 168Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models featuring functions. 
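As a rough sketch, such a CI graph can be held in a standard graph library; the example below uses networkx, with invented component names and interface classes, and only mirrors the structure described above (components as nodes, conventional interfaces as typed arcs):

```python
import networkx as nx

# Sketch of a conventional interface graph CIG(C, I): components are nodes,
# conventional interfaces (CI) are edges carrying their class in the taxonomy TCI.
# Component names and interface classes below are illustrative only.

cig = nx.Graph()
cig.add_node("screw_12", functional_designation="cap screw")
cig.add_node("nut_12", functional_designation="nut")
cig.add_node("plate_A", functional_designation="tightened plate")
cig.add_node("plate_B", functional_designation="tightened plate")

cig.add_edge("screw_12", "nut_12", ci_class="threaded link")
cig.add_edge("nut_12", "plate_B", ci_class="planar support")
cig.add_edge("plate_A", "plate_B", ci_class="planar contact")
cig.add_edge("screw_12", "plate_A", ci_class="cylindrical loose fit")
cig.add_edge("screw_12", "plate_B", ci_class="cylindrical loose fit")

# Example query: all interfaces of a given class around one component.
loose_fits = [(u, v) for u, v, d in cig.edges("screw_12", data=True)
              if d["ci_class"] == "cylindrical loose fit"]
print(loose_fits)
```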
To prove the validity of this approach and of the methodology proposed in Section 6.2.2, this section presents a template-based operator dedicated to the automation of shape transformations of bolted junctions (see Figure 6.6). The template operator is described in Section 6.3.3. Using functional information and geometric interfaces, this operator applies a user-defined template to simplify bolts and sets control sub-domains around them in their associated tightened components to enable the description of the friction phenomenon between these components. This template is used to precisely monitor the mesh generation process while preserving the consistency of contacts and adapting the assembly model to simulation objectives. Finally, Section 6.3.4 illustrates the result of the different tasks of the proposed methodology applied to the transformation of the bolted junctions of the root joint model presented in Figure 1.6. 6.3.1 Overview of the template-based process The overall principle of the template-based approach is introduced in Figure 6.7. It uses the available functional information and geometric interfaces, see (1) in Figure 6.7, as well as a library of pre-defined parametric templates, see (2). From this library, the templates are selected by the user according to his, resp. her, simulation objectives. Once the templates have been selected, the operator automatically identifies the functions in the assembly the templates are related to, see (3). Then, as explained in Section 6.3.2, the operator identifies in the CAD assembly the components and interfaces to be transformed, see (4). In (5), the template definitions are fitted to the real geometry, i.e., the dimensions of the components and interfaces involved in the geometric transformations are updated in the pre-defined templates. Section 6.3.3.1 details the compatibility conditions required to insert the templates into the real geometry. Finally, the real geometry is transformed according to the compatibility conditions and the templates are inserted in the assembly model, see (6). Section 6.3.4 describes the application of the template operator to two aeronautical use-cases. It results in a new CAD assembly model adapted to the simulation objectives. Figure 6.7: Overview of the main phases of the template-based process: (1) definition of assembly interfaces and functional designations, (2) template library of pre-defined parametric templates, (3) functions available for transformation, (4) identification of the components and interfaces involved in the selected functions, (5) geometrical adjustment of the template to real shapes, (6) adaptation of real shapes with template insertion, producing the adapted assembly. 6.3.2 From component functional designation of an enriched DMU to product functions Though the bottom-up approach of Shahwan et al. [SLF∗12, SLF∗13] summarized in Section 3.3 provides assembly components with a structured model incorporating functional information that is independent of their dimensions, their functional designation does not appear as an appropriate entry point to derive shape transformation operators as required for FE analyses. Indeed, to set up FE assembly models, an engineer looks for bolted junctions that he, resp. she, wants to transform to express friction phenomena, a pre-stressed state in the screw, . . . Consequently, the functional level needed is not
170Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models Assembly Disassemblable Obstacle Bolted adjusted Screw + nut Blocked nut Adherence Dependent function Dependent function Bolted ... ... ... Nut Screw Plates Counter nut Figure 6.8: Subset of TF N , defining a functional structure of an assembly. the functional designation, which is bound to a single component, it is the product function itself that is needed to address the corresponding set of components and their functional interfaces. To this end, it is mandatory to refer to product functions. This is achieved with a taxonomy of functions, TF N , that can produce a functional structure of an assembly (see Figure 6.8). Blue items define the sub-path in TF N hierarchy that characterizes bolted junctions. Each instance of a class in TF N contains a set of components identified by their functional designation, i.e., it contains their structured geometric models and functional interfaces. As a result of the use of TF N , a component of a DMU can be automatically identified when it falls into the category of cap screws, nuts, locking nuts, that are required to define bolted junctions. This means that their B-Rep model incorporates their geometric interfaces with neighboring components. The graph of assembly interfaces set up as input to the process of functional designation assignment, identifies the components contained in a bolted junction. Each component is assigned a functional designation that intrinsically identifies cap screws, nuts, locking nuts, . . . , and connects it with an assembly instance in TF N . It is now the purpose of Section 6.3 to take advantage of this information to set up the template-based transformations. 6.3.3 Exploitation of Template-based approach for FE models transformations As a result of the functional enrichment process, the DMU is now geometrically structured, components are linked by their geometric interfaces, and groups of components can be accurately identified and located in the DMU using their function and geo- 171Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models metric structure, e.g., adjusted bolted junctions with (screw+nut) (see Figure 6.8). Now, the geometric transformations needed to adapt the DMU to FEA objectives are strengthened because screws, nuts, locking nuts can be robustly identified, groups of tightened components are also available through the load cycles attached to cap screws (see Section 3.3). Two possible accesses are proposed to define a function-based template T related to an assembly function: • A component C through a user-defined selection: from it and its functional designation, a data structure gives access to the functions it contributes to. After selecting C, the user selects the function of interest among the available functions attached to C in TF N and compatible with T. Other components are recovered through the selected function this component participates to; • The function itself in TF N that can lead to the set of components needed to define this function and all the instances of this function existing in the targeted assembly. These accesses can be subjected to constraints that can help identifying the proper set of instances. Constraints aim at filtering out instances when a template T is defined from a function to reduce a set of instances down to the users needs, e.g., assembly function with bolts ‘constrained with’ 2 tightening plates componenti and component j . 
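A minimal sketch of this instance identification and of the filtering constraints mentioned above, assuming a toy in-memory representation of TF N instances (the class names follow Figure 6.8, everything else is invented); extension constraints, discussed next, would operate symmetrically on the recovered instance set:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: an instance of a TFN class groups the components
# (identified by their functional designation) that realize one product function.

@dataclass
class FunctionInstance:
    tfn_class: str                                   # e.g. "Bolted adjusted / screw + nut"
    components: dict = field(default_factory=dict)   # functional designation -> component id(s)

instances = [
    FunctionInstance("Bolted adjusted / screw + nut",
                     {"cap screw": "screw_12", "nut": "nut_12",
                      "tightened plates": ["plate_A", "plate_B"]}),
    FunctionInstance("Bolted adjusted / screw + nut",
                     {"cap screw": "screw_13", "nut": "nut_13",
                      "tightened plates": ["plate_A", "plate_B", "plate_C"]}),
]

def select_instances(instances, tfn_class, constraint=None):
    """Return the instances of a TFN class, optionally filtered by a constraint."""
    found = [i for i in instances if i.tfn_class == tfn_class]
    return [i for i in found if constraint(i)] if constraint else found

# Constraint of the example above: bolts tightening exactly two given plates.
two_plates = select_instances(
    instances, "Bolted adjusted / screw + nut",
    constraint=lambda i: i.components["tightened plates"] == ["plate_A", "plate_B"])
print([i.components["cap screw"] for i in two_plates])   # -> ['screw_12']
```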
Constraints aim at extending a set of instances when a template is defined from a component, i.e., a single function instance recovered, and needs to be extended, e.g., assembly function with bolts ‘constrained with’ same tightened components and screw head functional interface of type ‘planar support’ or ‘conical fit’. 6.3.3.1 Function-based template and compatibility conditions of transformations The previous section has sketched how component functions can be used to identify sets of components in an assembly. Indeed, this identification is based on classes appearing in TF N . Here, the purpose is to define more precisely how the template can be related to TF N and what constraints are set on shape transformations to preserve the geometric consistency of the components and their assembly. Shape transformations are application-dependent and the present context is structural mechanics and FEA to define a range of possible transformations. The simplest relationship between a template T and TF N is to relate T to a leaf of TF N . In this case, T covers instances defining sets of components that contain a variable number of components. T is also dimension independent since it covers any size of component, i.e., it is a parameterized entity. Shape transformations on T are designated as ST and the template devoted to an application becomes ST (T). Now, reducing the 172Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models scope to disassemblable assembly functions and more specifically bolted junctions, one leaf of TF N can be used to define more precisely T and ST (T). Conforming to Figure 6.8, let us restrict first to the leaf ‘screw+nut’ of TF N . Then, T contains the following functional interfaces: one threaded link, two or more planar supports (one between nut and plate and at least one between two plates), either one planar support or one conical fit between the screw head and a plate, as many cylindrical loose fits as plates between the screw and plates because the class of junctions is of type adjusted. The shape transformations ST (T) of T set up to process bolted junctions can be summarized as (see Figure 6.9): • ST 1: merging screw and nut (see Section 6.3.3.1); • ST 2: localization of friction effects with a sub-domain around a screw (see Section 6.3.3.1); • ST 3: removal of the locking nut if it exists (see Section 6.3.3.2); • ST 4: screw head transformation for mesh generation purposes (see Section 6.3.3.3); • ST 5: cylindrical loose fit around the screw shaft to support the contact condition with tightened plates (see Section 6.3.4). Each of these transformations are detailed throughout the following sections. Now, the purpose is to define ST so that ST (T) exists and preserves the consistency of the components and the assembly. This defines compatibility conditions, CC, between T and ST that are conceptually close to attachment constraints of form features on an object [vdBvdMB03] (see Figure 6.9). CC applied to ST are introduced briefly here. Given the set of components contained in T, this set can be subdivided into two disjoint subsets as follows: • IC is the set of components such that each of its components has all its functional interfaces in T, e.g., the screw belongs to IC. Consequently, components belonging to IC are entirely in T (see the green rectangle in Figure 6.9); • PC is the set of components such that each of its components has some of its functional interfaces in T, e.g., a plate belongs to PC. 
Components belonging to PC are partially in T (see the red rectangle in Figure 6.9). IC can be used to define a 3D sub-domain of T, TI, defined as the union of all components belonging to IC. Now, if a transformation ST takes place in IC and geometrically lies inside TI, ST(T) is valid because it cannot create interferences with other components of the assembly, i.e., the CC are satisfied. Let us consider some of these transformations to illustrate some CC. Figure 6.9: Principle of the template-based shape transformations. The superscript of a component C, if it exists, identifies its functional designation. Starting from the functionally enriched DMU (CIG, functional designations of screw, nut, locking nut, and their functional interfaces), the template T gathers the sets IC and PC and automatically applies the geometric transformations ST1 (merge screw and nut), ST2 (subdivide components), ST3 (remove the locking nut), ST4 (transform the screw head), and ST5 (preserve the functional interfaces of the screw), producing the updated components. Notation: Ci: component i; CI: conventional interface; CIG: conventional interface graph; CC: compatibility conditions between T and ST; FD: functional designation of a component; FI: functional interface; IC: set of components such that each of its components has all its FI in T; PC: set of components such that each of its components has some of its FI in T; ST: shape transformation incorporated in a template, hence ST(T); T: function-based template performing shape transformations. Figure 6.10: Compatibility conditions (CC) of shape transformations ST applied to T. (a) through (e) are different configurations of CC addressed in Section 6.3.3: (a) the initial bolted junction (threaded link, pre-load pressure, friction; IC: counter nut, nut, screw; PC: plates); (b) ST3 and (c) ST1, whose CC are satisfied; (d) ST2 (domain decomposition) and (e) ST4, whose CC are to be verified. As an objective of FEA, the purpose of the assembly model is to analyze the stress distribution between plates and their interactions with bolts. To this end, the stress distribution around the threaded link between a screw and a nut is not relevant. Therefore, one shape transformation, ST1, is the removal of the threaded link to merge the screw and the nut (see Figure 6.10c). ST1 is always compatible since the screw and the nut belong to IC, hence the CC are always valid. Now, let us consider another transformation, ST2, that specifies the localization of friction effects between plates around the screw shaft and the representation of the stress distribution near the screw shaft. This is modeled with a circular area centered on the screw axis and a cylindrical sub-domain around the screw shaft (see Figure 6.10d). Indeed, ST2 is a domain decomposition [CBG08, Cha12] taking place in the plates belonging to T. Because the plates belong to PC, the CC are not trivial. However, ST2 takes place inside the plates, so the resulting sub-domains cannot interfere with other components; rather, they can interfere with the boundary of the plates or with each other when several screws are close to each other on the same plate (see Figure 6.11). In this case, the CC can be simply expressed, as a first step, as a non-interference constraint. Other shape transformations are listed when describing one example template in Section 6.3.4.
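A minimal sketch of the IC/PC partition that supports these compatibility conditions, using invented component and interface names:

```python
# Illustrative sketch of the IC/PC partition used to evaluate the CC.
# `fi_of`: all functional interfaces of a component in the assembly;
# `fi_in_template`: those captured by the template T (invented data below).

def partition_ic_pc(fi_of, fi_in_template):
    """Components with all their FI in T go to IC, the others to PC."""
    ic, pc = set(), set()
    for comp, captured in fi_in_template.items():
        (ic if captured == fi_of[comp] else pc).add(comp)
    return ic, pc

fi_of = {
    "screw_12": {"threaded link", "head planar support", "loose fit A", "loose fit B"},
    "nut_12":   {"threaded link", "nut planar support"},
    "plate_A":  {"loose fit A", "plate contact", "external support"},
}
fi_in_template = {
    "screw_12": {"threaded link", "head planar support", "loose fit A", "loose fit B"},
    "nut_12":   {"threaded link", "nut planar support"},
    "plate_A":  {"loose fit A", "plate contact"},   # the external support lies outside T
}

ic, pc = partition_ic_pc(fi_of, fi_in_template)
print(ic)   # {'screw_12', 'nut_12'}: transformations lying inside TI are always CC-valid
print(pc)   # {'plate_A'}: transformations here require an interference check
```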
6.3.3.2 Shape transformations and function dependency The previous section has connected T to TF N in the simplest way possible, i.e., using a leaf that characterizes a single function. The purpose of this section is to analyze into which extent T can connect to classes of TF N that perform several functions in the assembly. In a first place, let us review shortly some concepts of functional analysis [Ull09]. There, it is often referred to several categories of functions that are related to a design process, i.e., external, internal, auxiliary, . . . However, this does not convey consistency conditions among these functions, especially from a geometric point of view. Here, the current content of TF N refers to internal functions, i.e., functions strictly performed by components of the assembly. The ‘screw+nut’ function, as part of bolted junctions, is one of them. Bolted junctions can contain other functions. Let us consider the locking function, i.e., the bolt is locked to avoid any loss of tension in the screw when the components are subjected to vibrations. The locking process can take place either on the screw or on the nut. For the purpose of the analysis, we consider here a locking process on the nut, using a locking nut (see Figure 6.12a). In functional analysis, this function is designated as auxiliary function but this concept does not characterize geometric properties of these functions. From a geometric point of view, it can be observed that functional interfaces of the screw, nut and locking nut are located in 3D such that the functional interfaces (planar support) between the nut and locking nut cannot exist if the nut does not tighten the plates. Consequently, the locking function cannot exist if the tightening 175Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models Interference No interference User meshing tolerance Ø subdomains = user ratio ´ Ø screw Figure 6.11: Checking the compatibility of ST (T) with respect to the surrounding geometry of T. function does not exist. Rather than using the designation of auxiliary function, which is geometrically imprecise, it is referred to dependent function. The concept of dependent functions is inserted in TF N at different levels of TF N to attach the corresponding functions when they exist (see Figure 6.8). Based on the concept of dependent function, it is possible to extend the connection rule between T and TF N . Rather than connections at the leaf level, higher level classes can be connected to T if the dependent functions are taken into account in the CC of shape transformations ST so that ST (T) exists and preserves the consistency of the assembly. As an illustration, let us consider T connected to ‘Bolted adjusted’ (see Figure 6.8). Now, ST can cover the class of bolted junctions with locking nut. Let ST 3, be the transformation that removes the locking nut of a bolted junction, which meets also the FEA objectives mentioned earlier. Because ST 3, applies to a dependent function of ‘screw+nut’, the CC are always satisfied and the resulting model has a consistent layout of functional interfaces, i.e., the removal of the locking nut cannot create new interfaces in the assembly (see Figure 6.9 and 6.10b). Consequently, T can be effectively connected to ‘Bolted adjusted’, which is a generalization of T. 6.3.3.3 Template generation T is generated on the basis of the components involved in its associated function in TF N . T incorporates the objectives of the FEA to specify ST . 
Here, ST covers all the transformations described previously, i.e., ST 1, ST 2, ST 3. Figure 6.10 and 6.11 illustrates the key elements of these shape transformations. Other shape transformations, ST 4, can be defined to cover screw head transformations and extend the range of screws to flat head ones. However, this may involve 176Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models (a) (b) Figure 6.12: (a) Multi-scale simulation with domain decomposition around bolted junctions, (b) Load transfers at critical holes (courtesy of ROMMA project [ROM14]). geometric transformations where the volume of a screw head gets larger. In this case, ST 4 takes place in PC and the compatibility conditions are not intrinsic to T (see Figure 6.9). Consequently, it is mandatory to perform an interface/interference checking with the other components of the assembly to make sure that the transformation is valid (see Figure 6.10). Then, the set of shape transformations structures the dialog with the user to allow him, resp. her, to select some of these transformations. However, the user settings are applied to instances whenever possible, i.e., when the instance belongs to a class where the shape transformations are applicable. 6.3.4 Example of template-based operator of bolted junctions transformation In an aeronautical company, simulation engineers perform specific FEAs on assembly sub-structures such as the aircraft junction between wings and fuselage. Based on pre-existing physical testing performed by ROMMA project [ROM14] partners, this structure can be subjected to tensile and compressive forces to analyze: • The distribution of the load transfer among the bolted junctions; • The admissible extreme loads throughout this structure. From the physical testing and preliminary numerical models, the following simulation objectives have been set up that initiate the requirements for the proposed template-based transformations. To adapt the FE model to these simulation objectives while representing the physical behavior of the structure, an efficient domain decomposition approach [CBG08, Cha12] uses a coarse mesh far enough from the bolted junctions and a specific sub-domain around each bolted junction with friction and preload phenomena (see Figure 6.12a, b). The objective is not to generate a detailed stress distribution everywhere in this assembly but to observe the load distribution areas among bolts using the mechanical models set in the sub-domains. 177Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models The objective of this section is to validate the methodology of Section 6.2 through the template-based approach. The proposed demonstrator transforms automatically the bolts into simplified sub-domains ready for meshing with friction areas definition while preserving the consistency of the assembly. Consequently, there is no area of weak interest. All the above transformations are aggregated into a parameterized template whose input is the functional designation of components to locate the cap screws in the assembly. Then, the template adapts to the screw dimensions, the number of plates tightened, . . . , to apply the operators covering tasks 2 through 6. The template features are aligned with the needs for setting up a simulation model able to exhibit some of the physical phenomena observed during testing and expressed in the above simulations results. 
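The parameterization of such a template can be sketched as follows; the parameter names are assumptions chosen to echo the description above and the user settings introduced in Section 6.3.4:

```python
from dataclasses import dataclass

# Illustrative sketch of the parameterization of the bolted junction template.
# Dimensional parameters are read from the real geometry of each instance;
# the last two parameters are user-defined FEA settings (see Section 6.3.4).

@dataclass
class BoltedJunctionTemplate:
    screw_diameter: float          # nominal diameter, from the functional interfaces (mm)
    screw_head_type: str           # "cylindrical" or "flat"
    plate_thicknesses: list        # one entry per tightened plate (mm)
    has_locking_nut: bool          # dependent function 'locking with locking nut'
    subdomain_ratio: float = 2.0   # illustrative default; Ø sub-domain = ratio x Ø screw
    meshing_tolerance: float = 1.0 # illustrative default, used for the CC verification (mm)

    @property
    def subdomain_diameter(self) -> float:
        # sizing rule displayed in Figure 6.11
        return self.subdomain_ratio * self.screw_diameter

t = BoltedJunctionTemplate(12.0, "cylindrical", [8.0, 10.0], has_locking_nut=True)
print(t.subdomain_diameter)   # 24.0
```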
Operator description Having enriched the assembly with functional information, the template interface lets the engineer select a node of TF N that is compatible with T. In this example, the function to select is: ‘assembly with Bolted junction’ (see Figure 6.14). Now, several ST are either pre-set or user-accessible. Figures 6.6a and 6.13 illustrate task 2 of the methodology. A hypothesis focuses on the interfaces between screw shafts and plate holes: the clearances there are regarded as small enough in the DMU to be reduced to a fitted configuration where shafts and holes are set to the same nominal diameter to produce a conforming mesh with a contact condition at these interfaces. To precisely monitor the stress distribution around bolts and the friction between plates, ST2 is user-selected. It is a simplified model of Rotscher’s cone [Bic95] that enables generating a simple mesh pattern around bolts. In this task also, hypothesizing that locking nuts, nuts, and screws can be reduced to a single medium leads to the removal of the assembly interfaces between them. ST3 is user-accessible and set here to remove the dependent function ‘locking with locking nut’. Then, there is no idealization taking place, hence no action in tasks 3 and 5. Task 4 connects to a hypothesis addressing the interfaces resulting from task 2, i.e., the interfaces between plates need not model contact and friction over their whole area. Friction effects can be reduced to a circular area around each screw, which produces a subdivision of these interfaces. ST5 is pre-set in T to preserve the cylindrical loose fit between screw and plates to set up contact friction BCs without inter-penetration over these functional interfaces. ST1 and ST4 are also pre-set. Finally, task 6 concentrates on skin and topological transformations. These are achieved with locking nut, nut, and screw shape transformations. The latter is performed on the functional interface (planar support) between the screw head/nut and the plates to obtain a meshing process independent of nut and screw head shapes. Now, T can cover any bolted junction to merge screw, nut, and locking nut into a single domain and reduce the screw and nut shapes to a simple shape of revolution while preserving the consistency of its interfaces. Figure 6.13: Template-based transformation ST(T) of a bolted junction into a simple mesh model with friction and contact area definitions around screw and nut: (a) the bolted junction in the DMU, (b) the bolted junction after simplification to define the simulation model, with the sub-domains and Rotscher’s cone highlighted, (c) the desired FE mesh (structured hexahedral mesh) of the bolted junction. Based on T, ST(T) is fairly generic and parameterized to intelligently select and transform bolts, i.e., it is independent of the number and thicknesses of plates, of the screw diameter, length, and head type (cylindrical (see Figure 6.13a) versus flat ones), in addition to the location of each bolt. Here, ST(T) contains ST2, a generation of sub-domains taking into account the physical effects of Rotscher’s cone. This geometric transformation could interact with plate boundaries to change the shape of these sub-domains and influence the mesh generation process. Presently, templates are standalone entities that do not take these effects into account; this is left for future developments.
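A minimal sketch of the corresponding compatibility check, following the sizing rule displayed in Figure 6.11 (sub-domain diameter = user ratio × screw diameter) and testing sub-domain/sub-domain and sub-domain/plate-boundary interferences against the user meshing tolerance; the helper names are illustrative, not the prototype’s code:

```python
from math import hypot

# Illustrative sketch of the CC verification of ST2 sub-domains (Figure 6.11).
# A sub-domain is the circular trace of Rotscher's cone around one screw axis.

def subdomain_diameter(screw_diameter, user_ratio):
    """Sizing rule of Figure 6.11: diameter of the sub-domain around a screw."""
    return user_ratio * screw_diameter

def check_subdomains(centers, diameters, plate_contour_distance, meshing_tolerance):
    """Return the list of detected CC violations.
    `centers`/`diameters`: one entry per screw of the plate (2D, in the plate plane);
    `plate_contour_distance(point)`: distance from a point to the plate boundary
    (a placeholder for a real geometric query)."""
    violations = []
    for i, (ca, da) in enumerate(zip(centers, diameters)):
        # sub-domain against the plate boundary
        if plate_contour_distance(ca) < da / 2.0 + meshing_tolerance:
            violations.append(f"sub-domain {i} too close to the plate boundary")
        # sub-domain against the other sub-domains
        for j, (cb, db) in enumerate(zip(centers, diameters)):
            if j <= i:
                continue
            if hypot(cb[0] - ca[0], cb[1] - ca[1]) < (da + db) / 2.0 + meshing_tolerance:
                violations.append(f"sub-domains {i} and {j} interfere")
    return violations

# Example with two screws of diameter 12 mm, ratio 2, on a wide plate.
d = subdomain_diameter(12.0, 2.0)
print(check_subdomains([(0.0, 0.0), (40.0, 0.0)], [d, d],
                       plate_contour_distance=lambda p: 100.0,
                       meshing_tolerance=1.0))   # -> []
```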
At present, the engineer can adjust the sub-domain to avoid these interactions (see Figure 6.14a). Implementation and results The developed prototype is based on OpenCascade [CAS14] and Python scripting language. The DMU is imported as STEP assembly models, the geometric interfaces between components are represented as independent trimmed CAD faces with identifiers of the initial face pairs of the functional interfaces. The assembly functional description is imported as a text file from the specific application performing the functional enrichment described in [SLF∗13] and linked to the assembly model by component identifiers. Figure 6.14a shows the user interface of the prototype. When selecting the ‘assembly with Bolted Junction’ function, the user has a direct access to the list of bolted junctions in the assembly. To allow the user filtering his selection, DMU parameters are extracted from the functional designation of components, e.g., the screw and nut types, the number of tightened components, or from geometry processing based on functional interfaces, e.g., screw diameter. Using these parameters, the user is able 179Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models to select bolted junctions within a diameter range, e.g., between 10 and 16 mm (see Figure 6.14b) or bolted junctions with screw and locking nut (see Figure 6.14c), etc. The user can monitor the Rotschers cone dimension with a FEA parameter called ‘sub domain ratio’ that represents the ratio between the screw nominal diameter and the sub-domain diameter (see Figure 6.14d and e). Then, the user-defined ‘meshing tolerance’ is used during the verification phase to check the compatibility conditions, CC, between instances and their surrounding geometry (see Figure 6.9 and 6.11). Figure 6.15 shows two results of the template-based transformations on aircraft structures: Aircraft structure 1: A junction between the wing and the fuselage. The assembly contains 45 bolted junctions with 3 different diameters and 2 different screw heads; Aircraft structure 2: An engine pylon. The assembly contains over 250 bolted junctions with identical screws and nuts. The final CAD assembly (see Figure 6.14b) with simplified bolted junctions has been exported to a CAE software, i.e., Abaqus [FEA14]. STEP files [ISO94, ISO03] transfer the geometric model and associated xml files describes the interfaces between components to trigger meshing strategies with friction area definitions. Appendix D illustrates the STEP data structure used to transfer the Aircraft structure 1. Comparing with the process pipeline used with existing industrial software (see Section 1.4.3), the improvements are as follows. The model preparation from CAD software to an Abaqus simulation model takes 5 days of interactive work for ‘Aircraft structure 1’ mentioned above (see Figure 1.13b). Using the pipeline performing the functional enrichment of the DMU and the proposed template-based shape transformations to directly produce the meshable model in Abaqus and perform the mesh in Abaqus, the overall time is reduced to one hour. The adequacy of this model conforms to the preliminary numerical models set up in ROMMA project [ROM14] and extending this conformity to testing results is ongoing since the template enables easy adjustments of the mesh model. 
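The companion interface description can be sketched with the Python standard library; the XML tags below are invented for illustration and only reflect the kind of information mentioned above (component pairs, interface type, friction areas), not the actual schema of the prototype:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch only: write an XML companion file describing the assembly
# interfaces exported with the STEP model (the tag names are invented).

interfaces = [
    {"comp_a": "plate_A", "comp_b": "plate_B", "type": "planar contact",
     "friction_subdomain": "yes"},
    {"comp_a": "bolt_12", "comp_b": "plate_A", "type": "cylindrical fitted contact",
     "friction_subdomain": "no"},
]

root = ET.Element("assembly_interfaces", model="aircraft_structure_1.step")
for i, itf in enumerate(interfaces):
    e = ET.SubElement(root, "interface", id=str(i))
    ET.SubElement(e, "components", a=itf["comp_a"], b=itf["comp_b"])
    ET.SubElement(e, "type").text = itf["type"]
    ET.SubElement(e, "friction_subdomain").text = itf["friction_subdomain"]

ET.ElementTree(root).write("aircraft_structure_1_interfaces.xml",
                           encoding="utf-8", xml_declaration=True)
```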
Regarding the ‘Aircraft structure 2’, there is no reference evaluation of its model preparation time from CAD to mesh generation because it is considered as too complex to fit into the current industrial PDP. However, it is possible to estimate the time reduction since the interactive time can be linearly scaled according with the number of bolted junction. This ends up with 25 days of interactive work compared to 1.5 hour with the proposed approach where the time is mostly devoted to the mesh generation phase rather than the template-based transformations. Though the automation is now very high, the template-based approach still leaves the engineer with meaningful parameters enabling him/her to adapt the shape transformations to subsets of bolted junctions when it is part of FE requirements. 180Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models (b) (d) (c) (e) (a) Figure 6.14: (a) User interface of a template to transform ‘assembly Bolted Junctions’. Results obtained when filtering bolts based on diameters (b) or screw type (c). Results of the template-based transformations with (d) or without (e) sub-domains around bolts. 181Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models (a) (b) (c) Aircraft structure 1 Aircraft structure 2 Figure 6.15: Results of template-based transformations on CAD assembly models: (a) CAD models with functional designations and geometric interfaces, (b) models (a) after applying ST (T) on bolts, (c) mesh assembly models obtained from (a) with friction area definition. 182Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models DMU Analysis for idealization Geometric Operations CAD Interfaces Simplify fastener SIMULATION Volume Segmentation Assembly Idealization Assembly analysis Components Idealization Interfaces Graph Idealized Assembly · Geometric Interfaces · Functinal Information · User Simulation Objectives · Idealized CAD assembly ready for meshing · Boundary conditions (contact) · Physical information (thickness, offset) Figure 6.16: Illustration of the idealization process of a CAD assembly model. All components are fully idealized and bolted junctions are represented with FE fasteners. Solid plates and stiffeners are idealized as surfaces. 6.4 Full and robust idealization of an enriched assembly The methodology of Section 6.2.2 has also been applied to create an idealized plate model of the ‘Root joint’ use-case presented in Figure 1.6. The simulation objectives are set on the global analysis of the stress field in the structure and the analysis of the maximal loads transferred through the bolted junctions. Consequently (see Section 1.4.3), the generated FEM contains idealized components with shell FE and each junction can be simplified with a fastener model (see Figure 6.17). Figure 6.16 illustrates the different data and processes used to transform the initial DMU model into the final FE mesh model. Once the CAD data associated with functional interfaces have been imported, all bolted connections are transformed into simplified fastener models (see Section 6.4.1) in a first step. Then, a second step segments and idealizes all components in accordance with the method described in Chapter 5 (see Section 6.4.2). 
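A sketch of this two-step flow, with the operators of Sections 6.4.1 and 6.4.2 passed as callables so that the example remains self-contained; data layouts and names are assumptions:

```python
# Illustrative sketch of the assembly idealization flow of Figure 6.16.
# The two operators are passed as callables; in the prototype they correspond
# to the fastener simplification (Section 6.4.1) and the component segmentation
# and idealization (Section 6.4.2).

def idealize_assembly(cad_assembly, interfaces, functional_info,
                      simplify_fastener, idealize_component):
    # Step 1: transform every bolted junction into a simplified fastener model.
    junctions = [f for f in functional_info
                 if f["function"] == "adjusted bolted junction with screw+nut"]
    fasteners = [simplify_fastener(cad_assembly, j, interfaces) for j in junctions]

    # Step 2: segment and idealize each remaining component (Chapter 5 operators),
    # using its assembly interfaces as external boundary conditions.
    idealized = [idealize_component(c, interfaces) for c in cad_assembly["components"]]

    # The result gathers the idealized surfaces, the fastener models and the
    # physical attributes (thickness, offset) needed for meshing in a CAE tool.
    return {"idealized_components": idealized, "fasteners": fasteners}
```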
Figure 6.17: Illustration of Task 2: transformation of bolted junction interfaces into mesh nodes. Hypothesis: bolted junctions represented by fastener models. Geometric transformations: cylindrical interfaces transformed into single mesh nodes (a). Figure 6.18: Results of the template-based transformation of bolted junctions. The blue segment defines the screw axis. Red segments are projections of interfaces between the plates and the screw. Yellow points are the idealizations of these interfaces (the fastener connection points) to connect the screw to the plates. 6.4.1 Extension of the template approach to idealized fastener generation Given the above simulation objectives, the first step of transformations is related to the transfer of plate loads throughout the assembly, and FE beam elements are sufficient to model the behavior of the bolts. This hypothesis implies, in task 2 of the methodology, a transformation of the cylindrical interfaces between bolts and plate holes into single mesh nodes (see Figure 6.17a) linked to beam elements that represent the idealized fasteners. A specific template-based operator has been developed to automate the transformation of bolted junctions into idealized fasteners. As in Section 6.3, the bolted junctions are identified among all 3D components through the function they are involved in: ‘adjusted bolted junctions with screw+nut’. Then, the template applies a set of shape transformations ST to generate the beam elements with their associated connection points. These shape transformations are described as follows: • ST1: merging screw and nut (identical to Section 6.3.3.1); • ST2: removal of the locking nut if it exists (identical to Section 6.3.3.2); • ST3: screw transformation into beam elements (see Figure 6.18). FE beam elements are represented by line segments and the axis of the screw is used as the location of these line segments; • ST4: transfer of the interfaces between plates and screw as points on the line segments (see the sketch below). The blue part in Figure 6.18 represents the whole screw, the red parts are the projections of the interfaces between the plates and the screw, while the yellow points represent the idealization of these interfaces and define the connections between each plate and the screw; • ST5: reduction of the junction holes in the plates to single points. The fastener model used to represent bolted junctions does not represent holes in the final idealized model (see Figure 6.19). Different idealizations, i.e., different sets of ST, can be generated to match different simulation objectives. For instance, one ST can focus on the screw stiffness in addition to the current simulation objectives. To this end, a new objective can be set up to compare the mechanical behavior when screws are modeled as beams and when they are perfectly rigid. Consequently, the screws must now be represented as rigid bodies. This means that the blue part (see Figure 6.18) representing the FE beam element is no longer needed. Then, the yellow points (see Figure 6.18) can be used directly to generate the mesh connection points in the final medial surfaces of the plate components. These points can be used to set kinematic constraints. Indeed, the list of previous geometric operations describes a new category of shape transformation, STi, that would be needed to meet this new simulation objective.
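A minimal sketch of ST3/ST4 as referred to in the list above: the screw is reduced to a segment along its axis and each plate/screw interface is reduced to the mid-point of its projection onto that axis; this is an illustration only, not the prototype’s implementation:

```python
# Illustrative sketch of ST3/ST4: the screw becomes a line segment (FE beam) along
# its axis, and each cylindrical interface with a plate becomes a connection point
# obtained by projecting the interface extent onto that axis.

def project_on_axis(point, origin, direction):
    """Signed abscissa of `point` projected on the axis (origin, unit direction)."""
    return sum((p - o) * d for p, o, d in zip(point, origin, direction))

def fastener_from_screw(axis_origin, axis_direction, screw_length, plate_interfaces):
    """Return the beam segment and one connection point per plate interface.
    `plate_interfaces` maps a plate id to the two end points (3D) of the
    cylindrical interface along the hole."""
    beam = (axis_origin,
            tuple(o + screw_length * d for o, d in zip(axis_origin, axis_direction)))
    connections = {}
    for plate, (p_start, p_end) in plate_interfaces.items():
        t_mid = 0.5 * (project_on_axis(p_start, axis_origin, axis_direction) +
                       project_on_axis(p_end, axis_origin, axis_direction))
        connections[plate] = tuple(o + t_mid * d
                                   for o, d in zip(axis_origin, axis_direction))
    return beam, connections

# Example: a 40 mm screw along z, tightening two plates of 8 and 10 mm.
beam, pts = fastener_from_screw((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 40.0,
                                {"plate_A": ((6.0, 0.0, 0.0), (6.0, 0.0, 8.0)),
                                 "plate_B": ((6.0, 0.0, 8.0), (6.0, 0.0, 18.0))})
print(pts)   # {'plate_A': (0.0, 0.0, 4.0), 'plate_B': (0.0, 0.0, 13.0)}
```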
Be- 185Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models cause these transformations are close the templates described, this shows how the principle of the template-based transformations can be extended to be adapted to new simulation objectives using additional elementary transformations. Here, ST 3 would be replaced by ST i that would produce the key points needed to express the kinematic constraints. Now that the bolted junctions have been simplified into FE fasteners, the next section illustrates the idealization of the whole assembly. 6.4.2 Presentation of a prototype dedicated to the generation of idealized assemblies In order to generalize the idealization approach presented in Chapter 5, a prototype has been developed to process not only components but also whole assemblies. Likewise the template demonstrator, the prototype is based on OpenCascade [CAS14] and Python scripting language. The CAD assembly as well as the geometric interfaces are imported as STEP models. Figure 6.19 illustrates the user interface of the prototype. Here, the 3D viewer shows the result of task 2 where the bolted junctions of the CAD assembly have been transformed into simple fasteners using the template-based approach. The interface graph in the graph tag shows all the neighboring relationships between assembly components (including the fasteners). Figure 6.20 illustrates task 3 where the assembly components are segmented into sub-domains according to shape primitives organized into a construction graph. The set of solid primitives in red is extracted using the algorithm described in Section 4.5.2. The primitives are then removed from the initial solid to obtain a simpler component shape. Once the construction graph is generated, the user selects a construction process which creates a component segmentation into volume sub-domains. Then, each sub-domain is idealized wherever the primitive extent versus its thickness satisfies the idealization criterion. The interfaces resulting from this idealization can be associated with new transformations of assembly interfaces, e.g., a group of parallel idealized surfaces linked to the same assembly interface can be aligned and connected. The analysis of interactions between independently idealized sub-domains can guide geometric transformations such as sub-domain offsets and connections. These transformations are part of task 4. Figure 6.21 illustrates an intermediate result of the idealization process of a component (task 5). The graph of primitives’ interfaces has been analyzed in task 4 to identify and align groups of parallel medial surfaces. For example, the medial surface highlighted in brown is offset by 2.9 mm from its original position. The medial surfaces are then connected with the operator described in Section 5.5.3. The result is a fully idealized representation of the component. Finally, other idealized components are incorporated in the idealized assembly 186Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models List of interfaces List of components 3D Viewer Output/ Input console Graph of interfaces Figure 6.19: User interface of the prototype for assembly idealization. The 3D viewer shows the assembly after the transformation of bolted junctions into FE fasteners. 
Configurations of component’s segmentation 3D Visualization of a set of primitives to be removed from solid Configuration of the primitives extraction algorithm Figure 6.20: Illustration of a component segmentation which extract extruded volumes to be idealized in task 3. The primitives to be removed from the initial solid are highlighted in red. 187Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models List of interfaces between primitives List of component’s primitives 3D Vizualization of component idealization Primitive attributes Graph of interfaces between primitives Figure 6.21: Illustration of task 4: Identification and transformation of groups of idealized surfaces connected to the same assembly interfaces. model using complementary sub-domain transformations applied to each of them as illustrated in Figure 6.22. List of idealized components and fasteners 3D Vizualization of assembly idealization Figure 6.22: Final result of the idealized assembly model ready to be meshed in CAE software. Again, functional information about components and successive decomposition into 188Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models sub-domains as well as idealization processes reduce the preparation process to minutes: up to approximately ten minutes to process all the components including all the user interactions required to load each component, select the appropriate template or process the components subjected to the segmentation process and the morphological analysis. This is a significant time reduction compared to the days required when performing interactively the same transformations using tedious interactions with low level operators existing in current CAE software. Yet, the constraints related to mesh generation through mesh size constraints, have not been taken into account in the current analysis of preparation processes. These constraints have to be addressed in future research work. It has also to be noted that the process flow of Figure 6.5 turns into a sequential flow in all the simulation frameworks illustrated. 6.5 Conclusion In this chapter, dependencies between categories of shape transformations have been organized to structure the assembly simulation model preparation process in terms of methodology and scope of shape transformation operators. The proposed methodology empowers the use of DMUs enriched with geometric interfaces and functional information to automate CAD assembly pre-processing and to generate a ‘FE-friendly’ equivalence of this assembly. The template-based method has shown that shape transformations highly benefit from functional information. Using functional information strengthens the transformation of complex models like assemblies where many components interact with each other. The template can be instantiated over the whole assembly to quickly transform repetitive configurations such as bolted junctions that are highly time-consuming when processed purely interactively. The idealization method introduces a robust geometric operator of assembly idealization. This operator takes advantage of the assembly decomposition into sub-domains and their associated geometric interfaces as produced in Chapter 5. This structure has been used successfully to idealize sub-domains and address some general mesh generation constraints to ensure obtaining high quality meshes. Finally, a demonstrator has been implemented to prove the validity of the proposed methodology. 
This prototype has been applied to an industrial use-case proposed in ROMMA project [ROM14] to create a simplified solid model and an idealized, surfacebased model, using the operators currently developed. This use-case has demonstrated the benefits of the proposed methodology to: 1. Efficiently process real 3D assemblies extracted from large DMU; 189Chapter 6: Toward a methodology to adapt an enriched DMU to FE assembly models 2. Enable the implementation of a robust approach to monitor automated shape transformations. Thanks to this methodology, the preparation time can be drastically shortened compared to purely interactive processes as commonly practiced by today’s engineers. 190Conclusion and perspectives Assemblies, as sets of components, bring a new complexity level for CAD-FEA data processing of mechanical structures. Here, new principles and associated operators have been developed to automate the adaptation of CAD assembly models. The objective targeted robust transformations to process the large amount of repetitive geometric configurations of complex assemblies and to reduce the time spent by engineers to prepare these assemblies. Summary of conclusions Now, each of the contributions stated in the previous chapters can be synthesized and summarized. In-depth analysis of FE pre-processing rules The first contribution stands in the analysis of the current pre-processing of CAD models derived from DMUs to produce FE models. Due to the lack of assembly-related information in a DMU, very tedious tasks are required to process the large amount of components as well as their connections. Preparing each component is already a tedious task, especially when idealizations are necessary, that increases significantly with the number of components and their interfaces. Additionally, these interfaces form new entities to be processed. It has been observed that repetitive configurations and their processing are also an issue of assembly preparation, justifying the need to automate the preparation of large assembly models. This first analysis has concluded that the adaption of an assembly to FEA requirements and geometric transformations derive from simulation objectives and component functions are needed as well to geometrically transform groups of components. Also, it has been shown that functional information can be an efficient enrichment of a DMU to identify and process repetitive configurations. Challenging assembly models preparation Studying the current CAD-FEA methods and tools related to data integration 191Conclusion and perspectives reveals that operators currently available focus on the transformation of standalone components. One main contribution of this thesis is the proposal of an approach to assembly model preparation. Rather than reducing the preparation process to a sequence of separately prepared parts, the entire assembly has been considered when specifying shape transformations to reach simulation objectives, taking into account the kinematics and physics associated with assembly interfaces. The proposed approach to assembly pre-processing uses, as input model, a DMU enriched at an assembly level with interface geometry between components, additional functional properties of these components, and, at the component level, a structured volume segmentation using a graph structure. Geometrically enriched components A new approach has been proposed to decompose a B-Rep solid into volume subdomains. 
This approach robustly enriches a CAD component using a construction graph and provides a volume decomposition of each component shape. This construction graph is generated by iteratively identifying and removing a set of extrusion primitives from the current B-Rep shape. It has been shown that, compared to any initial construction process performed by a designer, the extracted graph is unique for a given object and is intrinsic to its shape because it overcomes modeling, surface decomposition, and topological constraints. In addition, it provides non trivial construction trees, i.e., variants of extrusion directions producing the same primitive are not represented and variants in primitives ordering are grouped into a single node. This generates a compact representation of a large set of shape construction processes. Moreover, the proposed approach, while enriching a standalone component shape, can be extended to assembly structures after they have been enriched with component interfaces. Each component construction graph can be nested into the component-interface assembly structure, thus forming a robust data structure for CAD-FEA transformation processes. Finally, a graph of generative processes of a B-Rep component is a promising basis to gain a better insight about a shape structure. The criteria used to generate this graph bring meaningful and simple primitives which can be subsequently used to support the idealization process of component shapes. Formalizing a shape idealization process through a morphological analysis It has been shown that generating a fully idealized model cannot be reduced to a pure application of a dimensional reduction operator such that this model is a mechanical equivalence of the initial component. The incorporation of idealization hypotheses requires the identification of candidate geometric areas associated with the connections between idealized sub-domains. The proposed idealization process benefits from the new enrichment of components with their shape structure. The segmentation of the component into meaningful primitives and interfaces between them has been used as a first step of a morphological analysis. This analysis evaluates each primitive with re- 192Conclusion and perspectives spect to its dominant idealized shape. Then, using a taxonomy of geometric interfaces between idealized sub-domains, this analysis is propagated over the whole component and results in the decomposition of its shape into ’idealizable’ areas of type ’plate/shell, beam and ’non-idealizable’ areas. Overall, the morphological analysis is independent from any resolution method and is able to characterize geometric details in relation to local and to ’idealizable’ regions of a component. Finally, an idealization operator has been developed which transforms the sub-domains into medial surfaces/lines and robustly connects them using the precise geometric definition of interfaces between primitives. Laying down the basis of a methodology to assembly preparation To address the current industrial needs about assembly pre-processing for structural simulation, the analysis of dependencies between geometric transformations, simulation objectives, and simplification hypotheses led to a first methodology increasing the level of automation of FE assembly model pre-processing. 
Using an enriched DMUs containing geometric interfaces between components and their primitives as well as functional information to end up with the generation of a ‘FE-friendly’ equivalence of this assembly, the methodology is in line with the industrial needs to develop a new generation of DMU: the Functional DMU. Finally, the development of a prototype platform has illustrated that the methodology fits well with the methods and tools proposed in this thesis. The template-based transformation, empowering the use of functional information, has illustrated how repetitive configurations, such as assembly junctions, can be automatically transformed. Then, the generation of the complete idealization of an aeronautical structure has demonstrated the ability of the proposed idealization approach to efficiently process CAD assemblies extracted from a large DMU. As a final conclusion, compared to purely geometric operators currently available in CAD-FEA integration, this thesis has proposed an approach based on a shape analysis of a enriched DMU model that significantly shortens the time commonly spent by today’s engineers and robustly performs repetitive idealization transformations of components and assemblies as well. Research perspectives From the proposed approach of DMU pre-processing for structural assembly simulation, future work can extend and build further on the methods and tools described in this thesis. The perspectives presented in this section refers to the generation of construction graph of B-Rep shapes and to the morphological analysis of DMU models. 193Conclusion and perspectives Construction graph Regarding the generation of construction graphs from B-Rep shapes, perspectives are listed as follows: • Extend the definition of primitive to include material removal as well as additional operations (revolution, sweep,. . . ). In a first step, to reduce the complexity of this research work, the choice has been made to concentrate the extraction of generative processes on extrusion primitives. Primitives are combined solely using a material addition operator. Clearly, future work will focus on incorporating material removal operations and revolutions to extend the range of objects that can be processed. Allowing new definitions of primitives may increase the amount of primitives. However, the construction graph can be even more compact. Indeed, groups of extrusion primitives can be replaced by a unique revolution, or a sweeping primitive in the construction graph. To reduce the complexity of the dimensional reduction of primitives, the presented idealization process favored primitives adding material instead of primitives removing material. Including primitives which removes material can be convenient for other applications, e.g., to simplify components’ shapes for 3D simulation or to identify cavities in components for computational fluid simulations. • Extend the attachment condition of primitives. Regarding the attachment of a primitive into an object, it has been shown all the benefits to avoid constraining the primitive identification process with their attachment conditions and to avoid looking at prioritizing primitives with geometric criteria such as: largest visible boundaries within the object. Identifying primitives without restriction on their ’visible’ boundaries is a way to release this constraint. However, to validate the major concepts of the proposed approach, two restrictions have been set on the primitive definition. 
The extrusion distance had to be represented by a lateral edge and one of the primitive's base faces had to be totally 'visible'. A future target is the generalization of the primitive definition to enlarge the set of valid primitives and, hence, produce a more generic algorithm;

• Reduce the interactions between primitives. Currently, the computation time is highly dependent on the number of extracted primitives, which are compared with each other. To reduce the complexity of the algorithm, future work may integrate the identification of repetitions and symmetries [LFLT14]. Global symmetries or repetitions, e.g., reflective symmetries valid at every point of an object, may directly reduce the extent of the shape being analyzed; more frequently, however, partial symmetries and repetitions are more effective at identifying specific relationships between primitives. Partial symmetries and repetitions initiated by the location of identical primitives convey a strong meaning from a shape point of view. They can be used after the extraction of primitives to generate groups of symmetric/repeated primitives, or even before that stage to help identify primitives, e.g., by selecting a set of base faces sharing the same plane and the same orientation. Finally, symmetries and repetitions are very relevant to structure an idealized model and to propagate this shape structure information across the mesh generation phase.

• Further applications of construction graphs. A construction graph structures the shape of a B-Rep object independently of any CAD modeler. Applied to hex-meshing, the intrinsic segmentation of a shape into extrusion primitives extracted from its construction graph can be highly beneficial, since it directly provides simple meshable volumes. Moreover, the complementary information about the connections between primitive interfaces can help to generate a complete component 3D mesh. Applied to 3D direct modeling CAD software, this intrinsic shape structure can be used to significantly extend this approach with larger shape modifications as well as parametrization capabilities. Because primitives are geometrically independent of each other, the parametrization of a primitive can be directly related to the object shape, i.e., the influence of the shape modification of a primitive can be identified through the interface of this primitive with the object.

Morphological analysis of assemblies

The morphological analysis method of Chapter 5 has been presented as a preliminary step for dimensional reduction operations. The perspectives related to this morphological analysis are as follows:

• Extend the taxonomy of reference morphologies. The determination of idealizable volume sub-domains in a component is based on a taxonomy of morphologies. Each morphology is associated with one medial edge of the MAT applied to the extrusion contour of a primitive. Clearly, this taxonomy is not complete: only morphologies associated with straight medial edges of constant radius have been studied. To enable the processing of a larger range of component shapes, this taxonomy can be extended, in a first step, to curved edges, with or without radius variation, and, in a second step, to other types of primitives (revolution, sweeping, ...);

• Extend the taxonomy of connections.
Regarding the propagation of the morphological analysis of primitives to the whole object, the current taxonomy of connections covers extrusion sub-domains to be idealized with planar medial surfaces only. The detailed study of these configurations has demonstrated the robustness of the proposed approach. Here too, the taxonomy can be enlarged to process beams, shells, or thick domains. In addition, the taxonomy of connections is currently restricted to pairs of sub-domains. In the case of groups of connected sub-domains, a new level of morphology may emerge, e.g., a set of piled-up thin extrusion primitives forming a beam. Analyzing and formalizing this new taxonomy of connections between sub-domains will enlarge the range of shape configurations that can be processed;

• Extend the approach to the morphological analysis of assemblies. Although construction graph structures (primitives/interfaces) are compatible with assembly structures (components/interfaces), the morphological analysis has been applied to standalone components. When the input model is an assembly structure, the assembly interfaces bring a new level of information. The influence of assembly interfaces on the established taxonomies has to be studied to extend the morphological analysis to assemblies. For example, in large assembly models, a group of components can be viewed as a single morphology. Propagating the morphological analysis of components to the whole assembly will give the user complete access to multi-resolution morphological levels of a DMU.

Finally, we can mention the report from a 2013 ASME Panel on Geometric Interoperability for Advanced Manufacturing [SS14]. The panelists involved had considerable experience in the use of component geometry throughout various design and manufacturing software. They stated that current CAD systems have hit a hard limit with the representation of 3D products, and they reached the same conclusions as those highlighted in this thesis about the need for better interoperability between geometric models, design systems, and current DMUs. The approaches proposed in this thesis, namely the construction graph and the morphological analysis of assemblies, offer new opportunities to adapt a model to the needs of the different applications involved in a product development process.

Bibliography

[AA96] Aichholzer O., Aurenhammer F.: Straight skeletons for general polygonal figures in the plane. In Computing and Combinatorics, Cai J.-Y., Wong C., (Eds.), vol. 1090. Springer Berlin Heidelberg, 1996, pp. 117–126. 57 [AAAG96] Aichholzer O., Aurenhammer F., Alberts D., Gärtner B.: A novel type of skeleton for polygons. In J.UCS The Journal of Universal Computer Science, Maurer H., Calude C., Salomaa A., (Eds.). Springer Berlin Heidelberg, 1996, pp. 752–761. 57 [ABA02] Andujar C., Brunet P., Ayala D.: Topology-reducing surface simplification using a discrete solid representation. ACM Trans. Graph.. Vol. 21, Num. 2 (avril 2002), 88–105. 39, 44, 47, 64 [ABD∗98] Armstrong C. G., Bridgett S. J., Donaghy R. J., McCune R. W., McKeag R. M., Robinson D. J.: Techniques for interactive and automatic idealisation of CAD models. In Proceedings of the Sixth International Conference on Numerical Grid Generation in Computational Field Simulations (Mississippi State, Mississippi, 39762 USA, 1998), NSF Engineering Research Center for Computational Field Simulation, ISGG, pp. 643–662. 35, 53 [ACK01] Amenta N., Choi S., Kolluri R. K.: The power crust, unions of balls, and the medial axis transform.
Computational Geometry. Vol. 19, Num. 2 (2001), 127–153. 54 [AFS06] Attene M., Falcidieno B., Spagnuolo M.: Hierarchical mesh segmentation based on fitting primitives. The Visual Computer. Vol. 22, Num. 3 (2006), 181–193. 59 [AKJ01] Anderson D., Kim Y. S., Joshi S.: A discourse on geometric feature recognition from cad models. J Comput Inform Sci Eng. Vol. 1, Num. 1 (2001), 440–746. 51 [AKM∗06] Attene M., Katz S., Mortara M., Patane G., Spagnuolo ´ M., Tal A.: Mesh segmentation-a comparative study. In Shape 197BIBLIOGRAPHY Modeling and Applications, 2006. SMI 2006. IEEE International Conference on (2006), IEEE, pp. 7–7. 59 [ALC08] ALCAS: Advanced low-cost aircraft structures project. http:// alcas.twisoftware.com/, 2005 – 2008. xv, 18 [AMP∗02] Armstrong C. G., Monaghan D. J., Price M. A., Ou H., Lamont J.: Engineering computational technology. Civil-Comp press, Edinburgh, UK, UK, 2002, ch. Integrating CAE Concepts with CAD Geometry, pp. 75–104. 35 [Arm94] Armstrong C. G.: Modelling requirements for finite-element analysis. Computer-aided design. Vol. 26, Num. 7 (1994), 573–578. xvi, 48, 49, 132 [ARM∗95] Armstrong C., Robinson D., McKeag R., Li T., Bridgett S., Donaghy R., McGleenan C.: Medials for meshing and more. In Proceedings of the 4th International Meshing Roundtable (1995). 48, 49, 53 [B∗67] Blum H., et al.: A transformation for extracting new descriptors of shape. Models for the perception of speech and visual form. Vol. 19, Num. 5 (1967), 362–380. 48 [Bad11] Badin J.: Ing´enierie hautement productive et collaborative `a base de connaissances m´etier: vers une m´ethodologie et un m´eta-mod`ele de gestion des connaissances en configurations. PhD thesis, BelfortMontb´eliard, 2011. 44 [Bat96] Bathe K.-J.: Finite element procedures, vol. 2. Englewood Cliffs, NJ: Prentice-Hall, 1996. 24 [BBB00] Belaziz M., Bouras A., Brun J.-M.: Morphological analysis for product design. Computer-Aided Design. Vol. 32, Num. 5 (2000), 377–388. 63 [BBT08] Bellenger E., Benhafid Y., Troussier N.: Framework for controlled cost and quality of assumptions in finite element analysis. Finite Elements in Analysis and Design. Vol. 45, Num. 1 (2008), 25– 36. 44 [BC04] Buchele S. F., Crawford R. H.: Three-dimensional halfspace constructive solid geometry tree construction from implicit boundary representations. CAD. Vol. 36 (2004), 1063–1073. 62, 63, 107 198BIBLIOGRAPHY [BCGM11] Badin J., Chamoret D., Gomes S., Monticolo D.: Knowledge configuration management for product design and numerical simulation. In Proceedings of the 18th International Conference on Engineering Design (ICED11), Vol. 6 (2011), pp. 161–172. 44 [BCL12] Ba W., Cao L., Liu J.: Research on 3d medial axis transform via the saddle point programming method. Computer-Aided Design. Vol. 44, Num. 12 (2012), 1161 – 1172. 54 [BEGV08] Barequet G., Eppstein D., Goodrich M. T., Vaxman A.: Straight skeletons of three-dimensional polyhedra. In Algorithms-ESA 2008. Springer, 2008, pp. 148–160. 57 [Bic95] Bickford J.: An Introduction to the Design and Behavior of Bolted Joints, Revised and Expanded, vol. 97. CRC press, 1995. 178 [BKR∗12] Barbau R., Krima S., Rachuri S., Narayanan A., Fiorentini X., Foufou S., Sriram R. D.: Ontostep: Enriching product model data using ontologies. Computer-Aided Design. Vol. 44, Num. 6 (2012), 575–590. 27 [BLHF12] Boussuge F., Leon J.-C., Hahmann S., Fine L. ´ : An analysis of DMU transformation requirements for structural assembly simulations. 
In The Eighth International Conference on Engineering Computational Technology (Dubronik, Croatie, 2012), B.H.V. Topping, Civil Comp Press. 65, 82 [BLHF14a] Boussuge F., Lon J.-C., Hahmann S., Fine L.: Extraction of generative processes from b-rep shapes and application to idealization transformations. Computer-Aided Design. Vol. 46, Num. 0 (2014), 79 – 89. 2013 {SIAM} Conference on Geometric and Physical Modeling. 86 [BLHF14b] Boussuge F., Lon J.-C., Hahmann S., Fine L.: Idealized models for fea derived from generative modeling processes based on extrusion primitives. In Proceedings of the 22nd International Meshing Roundtable (2014), Sarrate J., Staten M., (Eds.), Springer International Publishing, pp. 127–145. 86 [BLNF07] Bellec J., Ladeveze P., N ` eron D., Florentin E. ´ : Robust computation for stochastic problems with contacts. In International Conference on Adaptive Modeling and Simulation (2007). 82 [BR78] Babuvˇska I., Rheinboldt W. C.: Error estimates for adaptive finite element computations. SIAM Journal on Numerical Analysis. Vol. 15, Num. 4 (1978), 736–754. 46 199BIBLIOGRAPHY [BS96] Butlin G., Stops C.: Cad data repair. In Proceedings of the 5th International Meshing Roundtable (1996), pp. 7–12. 47 [BSL∗14] Boussuge F., Shahwan A., Lon J.-C., Hahmann S., Foucault G., Fine L.: Template-based geometric transformations of a functionally enriched dmu into fe assembly models. Computer-Aided Design and Applications. Vol. 11, Num. 4 (2014), 436–449. 168 [BWS03] Beall M. W., Walsh J., Shephard M. S.: Accessing cad geometry for mesh generation. In IMR (2003), pp. 33–42. 47 [CAS14] CASCADE O.: Open cascade technology, version 6.7.0 [computer software]. http://www.opencascade.org/, 1990 – 2014. 12, 112, 179, 186 [CAT14] CATIA D.: Dassault syst`emes catia, version v6r2013x [computer software]. http://www.3ds.com/products-services/catia/, 2008 – 2014. 89, I, IX [CBG08] Champaney L., Boucard P.-A., Guinard S.: Adaptive multianalysis strategy for contact problems with friction. Computational Mechanics. Vol. 42, Num. 2 (2008), 305–315. 30, 175, 177 [CBL11] Cao L., Ba W., Liu J.: Computation of the medial axis of planar domains based on saddle point programming. Computer-Aided Design. Vol. 43, Num. 8 (2011), 979 – 988. 54 [Cha12] Champaney L.: A domain decomposition method for studying the effects of missing fasteners on the behavior of structural assemblies with contact and friction. Computer Methods in Applied Mechanics and Engineering. Vol. 205 (2012), 121–129. 30, 175, 177 [CHE08] Clark B. W., Hanks B. W., Ernst C. D.: Conformal assembly meshing with tolerant imprinting. In Proceedings of the 17th International Meshing Roundtable. Springer, 2008, pp. 267–280. 65 [CP99] Chapman C., Pinfold M.: Design engineeringa need to rethink the solution using knowledge based engineering. Knowledge-Based Systems. Vol. 12, Num. 56 (1999), 257 – 267. 168 [CSKL04] Chong C., Senthil Kumar A., Lee K.: Automatic solid decomposition and reduction for non-manifold geometric model generation. Computer-Aided Design. Vol. 36, Num. 13 (2004), 1357–1369. 39, 44, 61, 95, 119 200BIBLIOGRAPHY [CV06] Chouadria R., Veron P.: Identifying and re-meshing contact interfaces in a polyhedral assembly for digital mock-up. Engineering with Computers. Vol. 22, Num. 1 (2006), 47–58. 65 [DAP00] Donaghy R. J., Armstrong C. G., Price M. A.: Dimensional reduction of surface models for analysis. Engineering with Computers. Vol. 16, Num. 1 (2000), 24–35. 
39, 45, 53 [DLG∗07] Drieux G., Leon J.-C., Guillaume F., Chevassus N., Fine ´ L., Poulat A.: Interfacing product views through a mixed shape representation. part 2: Model processing description. International Journal on Interactive Design and Manufacturing (IJIDeM). Vol. 1, Num. 2 (2007), 67–83. 43 [DMB∗96] Donaghy R., McCune W., Bridgett S., Armstrong D., Robinson D., McKeag R.: Dimensional reduction of analysis models. In Proceedings of the 5th International Meshing Roundtable (1996). 53 [Dri06] Drieux G.: De la maquette numrique produit vers ses applications aval : propositions de modles et procds associs. PhD thesis, Institut National Polytechnique, Grenoble, FRANCE, 2006. 6, 19 [DSG97] Dey S., Shephard M. S., Georges M. K.: Elimination of the adverse effects of small model features by the local modification of automatically generated meshes. Engineering with Computers. Vol. 13, Num. 3 (1997), 134–152. 47 [Eck00] Eckard C.: Advantages and disavantadges of fem analysis in an early state of the design process. In Proc. of the 2nd Worldwide Automotive Conference, MSC Software Corp, Dearborn, Michigan, USA (2000). 44 [EF11] E. Florentin S.Guinard P. P.: A simple estimator for stress errors dedicated to large elastic finite element simulations: Locally reinforced stress construction. Engineering Computations: Int J for Computer-Aided Engineering (2011). 46 [FCF∗08] Foucault G., Cuilliere J.-C., Franc¸ois V., L ` eon J.-C., ´ Maranzana R.: Adaptation of cad model topology for finite element analysis. Computer-Aided Design. Vol. 40, Num. 2 (2008), 176–196. xvi, 34, 50, 97 [FEA14] FEA A. U.: Dassault syst`emes abaqus unified fea, version 6.13 [computer software]. http://www.3ds.com/products-services/ simulia/portfolio/abaqus/overview/, 2005 – 2014. 180, XIX 201BIBLIOGRAPHY [Fin01] Fine L.: Processus et mthodes d’adaptation et d’idalisation de modles ddis l’analyse de structures mcaniques. PhD thesis, Institut National Polytechnique, Grenoble, FRANCE, 2001. 22, 34, 45, 47 [FL∗10] Foucault G., Leon J.-C., et al. ´ : Enriching assembly cad models with functional and mechanical informations to ease cae. In Proceedings of the ASME 2010 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2010 August 15-18, 2010, Montr´eal, Canada (2010). 18, 20 [FLM03] Foskey M., Lin M. C., Manocha D.: Efficient computation of a simplified medial axis. In Proceedings of the eighth ACM symposium on Solid modeling and applications (2003), ACM, pp. 96–107. 54 [FMLG09] Ferrandes R., Marin P. M., Leon J. C., Giannini F. ´ : A posteriori evaluation of simplification details for finite element model preparation. Comput. Struct.. Vol. 87, Num. 1-2 (janvier 2009), 73– 80. 44 [Fou07] Foucault G.: Adaptation de modles CAO paramtrs en vue d’une analyse de comportement mcanique par lments finis. PhD thesis, Ecole ´ de technologie sup´erieure, Montreal, CANADA, 2007. 13, 18, 46 [FRL00] Fine L., Remondini L., Leon J.-C.: Automated generation of fea models through idealization operators. International Journal for Numerical Methods in Engineering. Vol. 49, Num. 1-2 (2000), 83–108. 44 [GPu14] GPure: Gpure, version 3.4 [computer software]. http://www. gpure.net, 2011 – 2014. 35 [GZL∗10] Gao S., Zhao W., Lin H., Yang F., Chen X.: Feature suppression based cad mesh model simplification. Computer-Aided Design. Vol. 42, Num. 12 (2010), 1178–1188. 44 [HC03] Haimes R., Crawford C.: Unified geometry access for analysis and design. In IMR (2003), pp. 21–31. 
47 [HLG∗08] Hamri O., Leon J.-C., Giannini F., Falcidieno B., Poulat ´ A., Fine L.: Interfacing product views through a mixed shape representation. part 1: Data structures and operators. International Journal on Interactive Design and Manufacturing (IJIDeM). Vol. 2, Num. 2 (2008), 69–85. 43 202BIBLIOGRAPHY [HPR00] Han J., Pratt M., Regli W. C.: Manufacturing feature recognition from solid models: a status report. Robotics and Automation, IEEE Transactions on. Vol. 16, Num. 6 (2000), 782–796. 51 [HR84] Hoffman D. D., Richards W. A.: Parts of recognition. Cognition. Vol. 18, Num. 1 (1984), 65–96. 59 [HSKK01] Hilaga M., Shinagawa Y., Kohmura T., Kunii T. L.: Topology matching for fully automatic similarity estimation of 3d shapes. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques (2001), ACM, pp. 203–212. 59 [Hut86] Huth H.: Influence of fastener flexibility on the prediction of load transfer and fatigue life for multiple-row joints. Fatigue in mechanically fastened composite and metallic joints, ASTM STP. Vol. 927 (1986), 221–250. 29 [IIY∗01] Inoue K., Itoh T., Yamada A., Furuhata T., Shimada K.: Face clustering of a large-scale cad model for surface mesh generation. Computer-Aided Design. Vol. 33, Num. 3 (2001), 251–261. 49 [IML08] Iacob R., Mitrouchev P., Leon J.-C. ´ : Contact identification for assembly–disassembly simulation with a haptic device. The Visual Computer. Vol. 24, Num. 11 (2008), 973–979. 6 [ISO94] ISO:. ISO TC184-SC4: ISO-10303 Part 203 - Application Protocol: Configuration controlled 3D design of mechanical parts and assemblies, 1994. 33, 72, 89, 180, XIX [ISO03] ISO:. ISO TC184-SC4: ISO-10303 Part 214 - Application Protocol: Core data for automotive mechanical design processes, 2003. 33, 72, 89, 180, XIX [JB95] Jones M.R. M. P., Butlin G.: Geometry management support for auto-meshing. In Proceedings of the 4th International Meshing Roundtable (1995), pp. 153–164. 47 [JBH∗14] Jourdes F., Bonneau G.-P., Hahmann S., Leon J.-C., Faure ´ F.: Computation of components interfaces in highly complex assemblies. Computer-Aided Design. Vol. 46 (2014), 170–178. xvi, 65, 71, 72, 79, 116 [JC88] Joshi S., Chang T.-C.: Graph-based heuristics for recognition of machined features from a 3d solid model. Computer-Aided Design. Vol. 20, Num. 2 (1988), 58–66. 51 203BIBLIOGRAPHY [JD03] Joshi N., Dutta D.: Feature simplification techniques for freeform surface models. Journal of Computing and Information Science in Engineering. Vol. 3 (2003), 177. 51 [JG00] Jha K., Gurumoorthy B.: Multiple feature interpretation across domains. Computers in industry. Vol. 42, Num. 1 (2000), 13–32. 51, 94, 100 [KLH∗05] Kim S., Lee K., Hong T., Kim M., Jung M., Song Y.: An integrated approach to realize multi-resolution of b-rep model. In Proceedings of the 2005 ACM symposium on Solid and physical modeling (2005), ACM, pp. 153–162. 51 [Kos03] Koschan A.: Perception-based 3d triangle mesh segmentation using fast marching watersheds. In Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on (2003), vol. 2, IEEE, pp. II–27. 59 [KT03] Katz S., Tal A.: Hierarchical mesh decomposition using fuzzy clustering and cuts. ACM Trans. Graph.. Vol. 22, Num. 3 (juillet 2003), 954–961. 59 [KWMN04] Kim K.-Y., Wang Y., Muogboh O. S., Nnaji B. O.: Design formalism for collaborative assembly design. Computer-Aided Design. Vol. 36, Num. 9 (2004), 849–871. 27, 64 [LAPL05] Lee K. Y., Armstrong C. G., Price M. A., Lamont J. 
H.: A small feature suppression/unsuppression system for preparing b-rep models for analysis. In Proceedings of the 2005 ACM symposium on Solid and physical modeling (New York, NY, USA, 2005), SPM ’05, ACM, pp. 113–124. 35, 39, 44 [LDB05] Lavoue G., Dupont F., Baskurt A. ´ : A new cad mesh segmentation method, based on curvature tensor analysis. Computer-Aided Design. Vol. 37, Num. 10 (2005), 975–987. 59 [Ley01] Leyton M.: A Generative Theory of Shape (Lecture Notes in Computer Science, LNCS 2145). Springer-Verlag, 2001. 93 [LF05] Leon J.-C., Fine L. ´ : A new approach to the preparation of models for fe analyses. International journal of computer applications in technology. Vol. 23, Num. 2 (2005), 166–184. 35, 39, 44, 45, 81 [LFLT14] Li K., Foucault G., Leon J.-C., Trlin M.: Fast global and partial reflective symmetry analyses using boundary surfaces of mechanical components. Computer-Aided Design, Num. 0 (2014), –. 194 204BIBLIOGRAPHY [LG97] Liu S.-S., Gadh R.: Automatic hexahedral mesh generation by recursive convex and swept volume decomposition. In Proceedings 6th International Meshing Roundtable, Sandia National Laboratories (1997), Citeseer, pp. 217–231. 60 [LG05] Lockett H. L., Guenov M. D.: Graph-based feature recognition for injection moulding based on a mid-surface approach. ComputerAided Design. Vol. 37, Num. 2 (2005), 251–262. 51 [LGT01] Lu Y., Gadh R., Tautges T. J.: Feature based hex meshing methodology: feature recognition and volume decomposition. Computer-Aided Design. Vol. 33, Num. 3 (2001), 221–232. 60, 100 [LL83] Ladeveze P., Leguillon D.: Error estimate procedure in the finite element method and applications. SIAM Journal on Numerical Analysis. Vol. 20, Num. 3 (1983), 485–509. 46 [LLKK04] Lee J. Y., Lee J.-H., Kim H., Kim H.-S.: A cellular topologybased approach to generating progressive solid models from featurecentric models. Computer-Aided Design. Vol. 36, Num. 3 (2004), 217–229. 51 [LLM06] Li M., Langbein F. C., Martin R. R.: Constructing regularity feature trees for solid models. In Geometric Modeling and ProcessingGMP 2006. Springer, 2006, pp. 267–286. 63 [LLM10] Li M., Langbein F. C., Martin R. R.: Detecting design intent in approximate cad models using symmetry. Computer-Aided Design. Vol. 42, Num. 3 (2010), 183–201. 63 [LMTS∗05] Lim T., Medellin H., Torres-Sanchez C., Corney J. R., Ritchie J. M., Davies J. B. C.: Edge-based identification of dp-features on free-form solids. Pattern Analysis and Machine Intelligence, IEEE Transactions on. Vol. 27, Num. 6 (2005), 851–860. 93 [LNP07a] Lee H., Nam Y.-Y., Park S.-W.: Graph-based midsurface extraction for finite element analysis. In Computer Supported Cooperative Work in Design, 2007. CSCWD 2007. 11th International Conference on (2007), pp. 1055–1058. 56 [LNP07b] Lee H., Nam Y.-Y., Park S.-W.: Graph-based midsurface extraction for finite element analysis. In Computer Supported Cooperative Work in Design, 2007. CSCWD 2007. 11th International Conference on (2007), pp. 1055–1058. 119 205BIBLIOGRAPHY [LOC16] LOCOMACHS: Low cost manufacturing and assembly of composite and hybrid structures project. http://www.locomachs.eu/, 2012 – 2016. xv, 18 [LPA∗03] Lee K., Price M., Armstrong C., Larson M., Samuelsson K.: Cad-to-cae integration through automated model simplification and adaptive modelling. In International Conference on Adaptive Modeling and Simulation (2003). 47, 49 [LPMV10] Lou R., Pernot J.-P., Mikchevitch A., Veron P. 
´ : Merging enriched finite element triangle meshes for fast prototyping of alternate solutions in the context of industrial maintenance. Computer-Aided Design. Vol. 42, Num. 8 (2010), 670–681. 65 [LSJS13] Lee-St John A., Sidman J.: Combinatorics and the rigidity of cad systems. Computer-Aided Design. Vol. 45, Num. 2 (2013), 473–482. 19 [LST∗12] Li K., Shahwan A., Trlin M., Foucault G., Leon J.-C. ´ : Automated contextual annotation of b-rep cad mechanical components deriving technology and symmetry information to support partial retrieval. In Proceedings of the 5th Eurographics Conference on 3D Object Retrieval (Aire-la-Ville, Switzerland, Switzerland, 2012), EG 3DOR’12, Eurographics Association, pp. 67–70. 20 [LZ04] Liu R., Zhang H.: Segmentation of 3d meshes through spectral clustering. In Computer Graphics and Applications, 2004. PG 2004. Proceedings. 12th Pacific Conference on (2004), IEEE, pp. 298–305. 59 [Man88] Mantyla M.: An Introduction to Solid Modeling. W. H. Freeman & Co., New York, NY, USA, 1988. 93 [MAR12] Makem J., Armstrong C., Robinson T.: Automatic decomposition and efficient semi-structured meshing of complex solids. In Proceedings of the 20th International Meshing Roundtable, Quadros W., (Ed.). Springer Berlin Heidelberg, 2012, pp. 199–215. xvi, 60, 119, 126 [MCC98] Mobley A. V., Carroll M. P., Canann S. A.: An object oriented approach to geometry defeaturing for finite element meshing. In Proceedings of the 7th International Meshing Roundtable, Sandia National Labs (1998), pp. 547–563. 35 [MCD11] Musuvathy S., Cohen E., Damon J.: Computing medial axes of generic 3d regions bounded by b-spline surfaces. Computer-Aided 206BIBLIOGRAPHY Design. Vol. 43, Num. 11 (2011), 1485 – 1495. ¡ce:title¿Solid and Physical Modeling 2011¡/ce:title¿. 54 [MGP10] Miklos B., Giesen J., Pauly M.: Discrete scale axis representations for 3d geometry. ACM Transactions on Graphics (TOG). Vol. 29, Num. 4 (2010), 101. 54 [OH04] O. Hamri J-C. Leon F. G.: A new approach of interoperability between cad and simulation models. In TMCE (2004). 48 [PFNO98] Peak R. S., Fulton R. E., Nishigaki I., Okamoto N.: Integrating engineering design and analysis using a multi-representation approach. Engineering with Computers. Vol. 14, Num. 2 (1998), 93– 114. 44 [QO12] Quadros W. R., Owen S. J.: Defeaturing cad models using a geometry-based size field and facet-based reduction operators. Engineering with Computers. Vol. 28, Num. 3 (2012), 211–224. 47 [QVB∗10] Quadros W., Vyas V., Brewer M., Owen S., Shimada K.: A computational framework for automating generation of sizing function in assembly meshing via disconnected skeletons. Engineering with Computers. Vol. 26, Num. 3 (2010), 231–247. 65 [RAF11] Robinson T., Armstrong C., Fairey R.: Automated mixed dimensional modelling from 2d and 3d cad models. Finite Elements in Analysis and Design. Vol. 47, Num. 2 (2011), 151 – 165. xvi, 44, 53, 54, 81, 119 [RAM∗06] Robinson T. T., Armstrong C. G., McSparron G., Quenardel A., Ou H., McKeag R. M.: Automated mixed dimensional modelling for the finite element analysis of swept and revolved cad features. In Proceedings of the 2006 ACM symposium on Solid and physical modeling (2006), SPM ’06, pp. 117–128. xvi, 53, 60, 61, 87, 126, 146 [RBO02] Rib R., Bugeda G., Oate E.: Some algorithms to correct a geometry in order to create a finite element mesh. Computers and Structures. Vol. 80, Num. 1617 (2002), 1399 – 1408. 47 [RDCG12] Russ B., Dabbeeru M. M., Chorney A. S., Gupta S. 
K.: Automated assembly model simplification for finite element analysis. In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (2012), American Society of Mechanical Engineers, pp. 197–206. 64 207BIBLIOGRAPHY [Req77] Requicha A.: Mathematical models of rigid solid objects. Tech. rep., Technical Memorandum - 28, Rochester Univ., N.Y. Production Automation Project., 1977. 8 [Req80] Requicha A. G.: Representations for rigid solids: Theory, methods, and systems. ACM Computing Surveys (CSUR). Vol. 12, Num. 4 (1980), 437–464. 8 [Rez96] Rezayat M.: Midsurface abstraction from 3d solid models: general theory and applications. Computer-Aided Design. Vol. 28, Num. 11 (1996), 905 – 915. xvi, 35, 55, 56, 95, 100, 134 [RG03] Ramanathan M., Gurumoorthy B.: Constructing medial axis transform of planar domains with curved boundaries. Computer-Aided Design. Vol. 35, Num. 7 (2003), 619 – 632. 39, 54 [RG04] Ramanathan M., Gurumoorthy B.: Generating the mid-surface of a solid using 2d mat of its faces. Computer-Aided Design and Applications. Vol. 1, Num. 1-4 (2004), 665–674. 56 [RG10] Ramanathan M., Gurumoorthy B.: Interior medial axis transform computation of 3d objects bound by free-form surfaces. Computer-Aided Design. Vol. 42, Num. 12 (2010), 1217 – 1231. 54 [Roc12] Rocca G. L.: Knowledge based engineering: Between {AI} and cad. review of a language based technology to support engineering design. Advanced Engineering Informatics. Vol. 26, Num. 2 (2012), 159 – 179. Knowledge based engineering to support complex product design. 168 [ROM14] ROMMA: Robust mechanical models for assemblies. http://romma. lmt.ens-cachan.fr/, 2010 – 2014. xix, 177, 180, 189 [Sak95] Sakurai H.: Volume decomposition and feature recognition: Part 1polyhedral objects. Computer-Aided Design. Vol. 27, Num. 11 (1995), 833–843. 61 [SBBC00] Sheffer A., Bercovier M., Blacker T., Clements J.: Virtual topology operators for meshing. International Journal of Computational Geometry & Applications. Vol. 10, Num. 03 (2000), 309–331. 49 [SBO98] Shephard M. S., Beall M. W., O’Bara R. M.: Revisiting the elimination of the adverse effects of small model features in automatically generated meshes. In IMR (1998), pp. 119–131. 47 208BIBLIOGRAPHY [SD96] Sakurai H., Dave P.: Volume decomposition and feature recognition, part ii: curved objects. Computer-Aided Design. Vol. 28, Num. 6 (1996), 519–537. 61 [SGZ10] Sun R., Gao S., Zhao W.: An approach to b-rep model simplifi- cation based on region suppression. Computers & Graphics. Vol. 34, Num. 5 (2010), 556–564. 39 [Sha95] Shah J. J.: Parametric and feature-based CAD/CAM: concepts, techniques, and applications. John Wiley & Sons, 1995. 13, 50 [She01] Sheffer A.: Model simplification for meshing using face clustering. Computer-Aided Design. Vol. 33, Num. 13 (2001), 925–934. 34, 49 [Sil81] Silva C. E.: Alternative definitions of faces in boundary representatives of solid objects. Tech. rep., Tech. Memo. 36, Production Automation Project, Univ. of Rochester, Rochester, N.Y., 1981, 1981. 97 [SLF∗12] Shahwan A., Leon J.-C., Fine L., Foucault G., et al. ´ : Reasoning about functional properties of components based on geometrical descriptions. In Proceedings of the ninth International Symposium on Tools and Methods of Competitive Engineering, Karlsruhe, Germany) (2012), TMCE12. 
38, 71, 73, 79, 116, 168, 169 [SLF∗13] Shahwan A., Leon J.-C., Foucault G., Trlin M., Palombi ´ O.: Qualitative behavioral reasoning from components interfaces to components functions for dmu adaption to fe analyses. ComputerAided Design. Vol. 45, Num. 2 (2013), 383–394. 19, 20, 27, 38, 71, 72, 73, 79, 116, 142, 160, 168, 169, 179 [SRX07] Stroud I., Renner G., Xirouchakis P.: A divide and conquer algorithm for medial surface calculation of planar polyhedra. Computer-Aided Design. Vol. 39, Num. 9 (2007), 794–817. 44, 54 [SS14] Shapiro V., Srinivasan V.: Opinion: Report from a 2013 asme panel on geometric interoperability for advanced manufacturing. Comput. Aided Des.. Vol. 47 (fvrier 2014), A1–A2. 196 [SSCO08] Shapira L., Shamir A., Cohen-Or D.: Consistent mesh partitioning and skeletonisation using the shape diameter function. The Visual Computer. Vol. 24, Num. 4 (2008), 249–259. 59 [SSK∗05] Seo J., Song Y., Kim S., Lee K., Choi Y., Chae S.: Wraparound operation for multi-resolution cad model. Computer-Aided Design & Applications. Vol. 2, Num. 1-4 (2005), 67–76. 51 209BIBLIOGRAPHY [SSM∗10] Sheen D.-P., Son T.-g., Myung D.-K., Ryu C., Lee S. H., Lee K., Yeo T. J.: Transformation of a thin-walled solid model into a surface model via solid deflation. Comput. Aided Des.. Vol. 42, Num. 8 (aot 2010), 720–730. 39, 44, 57, 95, 100, 119, 126, 134 [SSR∗07] Sheen D.-P., Son T.-g., Ryu C., Lee S. H., Lee K.: Dimension reduction of solid models by mid-surface generation. International Journal of CAD/CAM. Vol. 7, Num. 1 (2007). 57 [Str10] Stroud I.: Boundary Representation Modelling Techniques, 1st ed. Springer Publishing Company, Incorporated, 2010. 10 [SV93] Shapiro V., Vossler D. L.: Separation for boundary to csg conversion. ACM Trans. Graph.. Vol. 12, Num. 1 (1993), 35–55. 62, 63, 107 [Sza96] Szabo B.: The problem of model selection in numerical simulation. Advances in computational methods for simulation (1996), 9–16. 22 [Tau01] Tautges T. J.: Automatic detail reduction for mesh generation applications. In Proceedings 10th International Meshing Roundtable (2001), pp. 407–418. 35, 51 [TBG09] Thakur A., Banerjee A. G., Gupta S. K.: A survey of {CAD} model simplification techniques for physics-based simulation applications. Computer-Aided Design. Vol. 41, Num. 2 (2009), 65 – 80. 39, 51 [TNRA14] Tierney C. M., Nolan D. C., Robinson T. T., Armstrong C. G.: Managing equivalent representations of design and analysis models. Computer-Aided Design and Applications. Vol. 11, Num. 2 (2014), 193–205. 12 [Tro99] Troussier N.: Contribution to the integration of mechanical calculation in engineering design : methodological proposition for the use and the reuse. PhD thesis, University Joseph Fourier, Grenoble, FRANCE, 1999. 29, 44, 76 [TWKW59] Timoshenko S., Woinowsky-Krieger S., Woinowsky S.: Theory of plates and shells, vol. 2. McGraw-hill New York, 1959. 25, 126 [Ull09] Ullman D. G.: The mechanical design process, vol. Fourth Edition. McGraw-Hill Science/Engineering/Math, 2009. 175 [vdBvdMB03] van den Berg E., van der Meiden H. A., Bronsvoort W. F.: Specification of freeform features. In Proceedings of the Eighth ACM 210BIBLIOGRAPHY Symposium on Solid Modeling and Applications (New York, NY, USA, 2003), SM ’03, ACM, pp. 56–64. 173 [VR93] Vandenbrande J. H., Requicha A. A.: Spatial reasoning for the automatic recognition of machinable features in solid models. Pattern Analysis and Machine Intelligence, IEEE Transactions on. Vol. 15, Num. 12 (1993), 1269–1285. 
51 [VSR02] Venkataraman S., Sohoni M., Rajadhyaksha R.: Removal of blends from boundary representation models. In Proceedings of the seventh ACM symposium on Solid modeling and applications (2002), ACM, pp. 83–94. 35, 52, 93 [Woo03] Woo Y.: Fast cell-based decomposition and applications to solid modeling. Computer-Aided Design. Vol. 35, Num. 11 (2003), 969–977. 51, 61, 94 [Woo14] Woo Y.: Abstraction of mid-surfaces from solid models of thin-walled parts: A divide-and-conquer approach. Computer-Aided Design. Vol. 47 (2014), 1–11. xvi, 44, 61, 62, 89, 94, 119 [WS02] Woo Y., Sakurai H.: Recognition of maximal features by volume decomposition. Computer-Aided Design. Vol. 34, Num. 3 (2002), 195–207. 51, 61, 94, 100 [ZM02] Zhu H., Menq C.: B-rep model simplification by automatic fillet/round suppressing for efficient automatic feature recognition. Computer-Aided Design. Vol. 34, Num. 2 (2002), 109–123. 35, 52, 93 [ZPK∗02] Zhang Y., Paik J., Koschan A., Abidi M. A., Gorsich D.: Simple and efficient algorithm for part decomposition of 3-d triangulated models based on curvature analysis. In Image Processing. 2002. Proceedings. 2002 International Conference on (2002), vol. 3, IEEE, pp. III–273. 59 [ZT00] Zienkiewicz O. C., Taylor R. L.: The Finite Element Method: Solid Mechanics, vol. 2. Butterworth-Heinemann, 2000. 24

Appendix A
Illustration of generation processes of CAD components

This appendix shows the construction process of two industrial use-cases. The components have been designed in the CATIA [CAT14] CAD software.

A.1 Construction process of an injected plastic part

The following figures present the complete shape generation process of the use-case shown in Figure 4.1. The component has been designed with the successive application of 37 modeling features. The features used are: 1. material addition or removal; 2. surface operations: fillets, chamfers. Two views, defined as top and bottom views, are associated with the construction tree to present all modeling steps of the object shape.

A.2 Construction process of an aeronautical metallic part

The following figures present the complete shape generation of a simple metallic component commonly found in aeronautical structures. The component has been designed using the successive application of boolean operations of type addition and removal, a common practice for aeronautical metallic design. This technique directly reflects the machining steps of the component design but makes the shape generation process quite complex. The simple example presented in Figure A.6 contains nine main operations, which could be reduced to three major operations (one extrusion and two hole drillings), as sketched below.
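As a purely illustrative sketch of this reduction, the snippet below encodes a designer-style sequence of nine boolean steps and an equivalent reduced generative process of three operations; the operation labels are hypothetical and do not reproduce the actual features of Figure A.6.

```python
# Hypothetical encoding of a boolean construction process as (operation, tool-solid) pairs.
# The labels are illustrative only; they do not reproduce the actual steps of Figure A.6.
designer_process = [
    ("add", "block_1"), ("remove", "cut_1"), ("remove", "cut_2"),
    ("remove", "cut_3"), ("remove", "cut_4"), ("remove", "hole_1a"),
    ("remove", "hole_1b"), ("remove", "hole_2a"), ("remove", "hole_2b"),
]

# An equivalent, shorter generative process producing the same final shape.
reduced_process = [
    ("add", "extrusion_1"),
    ("remove", "hole_drilling_1"),
    ("remove", "hole_drilling_2"),
]

assert len(designer_process) == 9 and len(reduced_process) == 3
```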
Figures A.1 to A.5: An example of a shape generation process of an injected plastic part (construction tree with top and bottom views of modeling steps 1 to 37), parts 1/5 to 5/5.

Figure A.6: An example of a shape generation process of a simple metallic component, mainly designed using boolean operations (construction tree, main solid, and solids removed or added with boolean operations, steps 1 to 9).

Appendix B
Features equivalence

This appendix illustrates the main modeling features used in CAD software to design components (illustrations courtesy of Dassault Systèmes [CAT14]).

Figure B.1: Examples of Sketch-Based Features (additive extrusion, non-perpendicular additive extrusion, removal extrusion, additive revolution, removal revolution).

Figure B.2: Examples of Sketch-Based Features (hole and drilling features, equivalent to removal extrusions or removal revolutions; additive and removal sweeps; stiffener, equivalent to an additive extrusion).

Figure B.3: Examples of Dress-Up Features (fillet, chamfer, draft angle, shell, and thickness, with their equivalences to additive or removal extrusions or their inclusion in extrusion contours).

Figure B.4: Examples of Boolean operations (difference, union, intersection).

Appendix C
Taxonomy of a primitive morphology

This appendix illustrates 18 morphological configurations associated with a MAT medial edge of a volume primitive Pi of type extrusion. The two tables differ according to whether the idealization direction of Pi corresponds to the extrusion direction (see Table C.1) or whether the idealization direction of Pi is included in the extrusion contour (see Table C.2). The reference ratio xr and the user ratio xu are used to specify, in each table, the morphology intervals differentiating beams, plates or shells, and 3D thick domains.
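The sketch below shows how such interval tests on the two ratios could drive the classification of one medial edge; the default threshold values, the decision rules, and the returned labels are simplified assumptions inspired by Tables C.1 and C.2, not the exact rules of the thesis.

```python
def classify_medial_edge(max_diameter: float, L1: float, L2: float,
                         xu: float = 5.0, xr: float = 10.0) -> str:
    """Classify the morphology attached to one MAT medial edge of an extrusion
    primitive from its characteristic dimensions (illustrative rules only)."""
    # x compares the maximal inscribed-circle diameter with L1, y compares L2
    # with that diameter, in the spirit of Tables C.1 and C.2.
    x = max_diameter / L1 if max_diameter > L1 else L1 / max_diameter
    y = L2 / max_diameter

    if x >= xr and y >= xr:
        return "PLATE"                        # two dominant in-plane dimensions
    if x >= xr or y >= xr:
        return "BEAM"                         # one dominant dimension
    if x >= xu or y >= xu:
        return "PLATE or BEAM under user hypothesis"
    return "3D THICK domain"

# Example: a thin extrusion with a wide contour is reported as a plate candidate.
print(classify_medial_edge(max_diameter=2.0, L1=0.1, L2=50.0))
```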
Table C.1: Morphology associated with a MAT medial edge of a primitive Pi (1/2). The morphology (band, beam, plate, thick plate, or transitional cases under user hypothesis) is read from the ratios x = max Ø / L1 (or L1 / max Ø when max Ø <= L1) and y = L2 / max Ø, compared with the user ratio xu and the reference ratio xr.

Table C.2: Morphology associated with a MAT medial edge of a primitive Pi (2/2). The same ratios x and y, compared with xu and xr, differentiate bands, beams (along L1 or L2), shells (plane L1/L2), and massive domains, possibly under user hypothesis.

Appendix D
Export to CAE software

This appendix illustrates the data structure used to transfer the adapted DMU to a CAE software, i.e., Abaqus [FEA14]. STEP files [ISO94, ISO03] are used to transfer the geometric model together with associated xml files.

Figure D.1: Illustration of the STEP export of a bolted junction with sub-domains around the screw: (a) product structure opened in the CATIA software, (b) associated xml file containing the association between components and interfaces.

Figure D.2: Illustration of the STEP export of a bolted junction. Each component containing volume sub-domains is exported as a STEP assembly.

Figure D.3: Illustration of the STEP export of a bolted junction. Each inner interface between sub-domains is part of the component assembly.

Figure D.4: Illustration of the STEP export of a bolted junction. Each outer interface between components is part of the root assembly.

Figure D.5: Illustration of the STEP export of the full Root Joint assembly (CAD assembly, joint patches, and interfaces).
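As a concrete illustration of the kind of association file mentioned in Figure D.1, the fragment below builds a minimal xml document with Python's standard library; the tag names, the attributes, and the BoltedJunction_patch layout are assumptions made for illustration and do not reproduce the exact schema used by the prototype.

```python
import xml.etree.ElementTree as ET

# Hypothetical association file linking STEP sub-assemblies to their interfaces.
# Tag and attribute names are illustrative, not the prototype's actual schema.
root = ET.Element("AdaptedDMU", name="BoltedJunction_patch")

plate = ET.SubElement(root, "Component", name="plate_1", step_file="plate_1.stp")
ET.SubElement(plate, "SubDomain", id="plate_1.sd1", morphology="plate")
ET.SubElement(plate, "SubDomain", id="plate_1.sd2", morphology="thick")

# Inner interface: shared surface between two sub-domains of the same component.
ET.SubElement(root, "InnerInterface", component="plate_1",
              sub_domains="plate_1.sd1;plate_1.sd2")

# Outer interface: functional contact between two components of the root assembly.
ET.SubElement(root, "OuterInterface", components="plate_1;screw_1",
              function="bolted junction")

ET.ElementTree(root).write("BoltedJunction_patch.xml",
                           xml_declaration=True, encoding="utf-8")
```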
XXIII Modélisation et d´etection statistiques pour la criminalistique num´erique des images Thanh Hai Thai To cite this version: Thanh Hai Thai. Mod´elisation et d´etection statistiques pour la criminalistique num´erique des images. Statistics. Universit´e de Technologie de Troyes, 2014. French. HAL Id: tel-01072541 https://tel.archives-ouvertes.fr/tel-01072541 Submitted on 8 Oct 2014 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destin´ee au d´epˆot et `a la diffusion de documents scientifiques de niveau recherche, publi´es ou non, ´emanant des ´etablissements d’enseignement et de recherche fran¸cais ou ´etrangers, des laboratoires publics ou priv´es.PHD THESIS to obtain the degree of DOCTOR of UNIVERSITY of TECHNOLOGY of TROYES Speciality: Systems Optimization and Dependability presented and defended by Thanh Hai THAI August 28th 2014 Statistical Modeling and Detection for Digital Image Forensics COMMITTEE: Mrs. Jessica FRIDRICH Professor Reviewer M. Patrick BAS Chargé de recherche CNRS Reviewer M. William PUECH Professeur des universités Examinator M. Igor NIKIFOROV Professeur des universités Examinator M. Rémi COGRANNE Maître de conférences Examinator M. Florent RETRAINT Enseignant-Chercheur SupervisorTo my Mom, my Dad, and my little brother, To Thu Thao, my fiancée, for their unlimited support, encouragement, and love.iii Acknowledgments This work has been carried out within the Laboratory of Systems Modeling and Dependability (LM2S) at the University of Technology of Troyes (UTT). It is funded in part by the strategic program COLUMBO. This work has been accomplished under the supervision of M. Florent RETRAINT. I would like to express my deepest gratitude to him for his highly professional guidance and incessant support. He has accompanied with me from my master’s internship and encouraged me to discover this field. I highly value the friendly yet professional environment created during my three-year doctoral. The high confidence he has given to me is possibly the greatest reward of my endeavors. I am greatly indebted to my PhD co-advisor, M. Rémi COGRANNE who has assisted me with high efficiency and availability. It was my honor and pleasure to work with him. His personal and professional help during my doctoral is invaluable. I would like to express my special thanks to Mrs. Jessica FRIDRICH and M. Patrick BAS for accepting to review my PhD thesis. I would also like to thank M. Igor NIKOFOROV and M. William PUECH for agreeing to examine this thesis. Valuable remarks provided by the respectful experts in this field like them would improve the thesis’s quality. I would like to thank the members of LM2S for the friendly and adorable environment which they have offered me since I joint the LM2S team.v Résumé Le XXIème siècle étant le siècle du passage au tout numérique, les médias digitaux jouent maintenant un rôle de plus en plus important dans la vie de tous les jours. De la même manière, les logiciels sophistiqués de retouche d’images se sont démocratisés et permettent aujourd’hui de diffuser facilement des images falsifiées. Ceci pose un problème sociétal puisqu’il s’agit de savoir si ce que l’on voit a été manipulé. Cette thèse s’inscrit dans le cadre de la criminalistique des images numériques. 
Deux problèmes importants sont abordés : l’identification de l’origine d’une image et la détection d’informations cachées dans une image. Ces travaux s’inscrivent dans le cadre de la théorie de la décision statistique et proposent la construction de dé- tecteurs permettant de respecter une contrainte sur la probabilité de fausse alarme. Afin d’atteindre une performance de détection élevée, il est proposé d’exploiter les propriétés des images naturelles en modélisant les principales étapes de la chaîne d’acquisition d’un appareil photographique. La méthodologie, tout au long de ce manuscrit, consiste à étudier le détecteur optimal donné par le test du rapport de vraisemblance dans le contexte idéal où tous les paramètres du modèle sont connus. Lorsque des paramètres du modèle sont inconnus, ces derniers sont estimés afin de construire le test du rapport de vraisemblance généralisé dont les performances statistiques sont analytiquement établies. De nombreuses expérimentations sur des images simulées et réelles permettent de souligner la pertinence de l’approche proposée. Abstract The twenty-first century witnesses the digital revolution that allows digital media to become ubiquitous. They play a more and more important role in our everyday life. Similarly, sophisticated image editing software has been more accessible, resulting in the fact that falsified images are appearing with a growing frequency and sophistication. The credibility and trustworthiness of digital images have been eroded. To restore the trust to digital images, the field of digital image forensics was born. This thesis is part of the field of digital image forensics. Two important problems are addressed: image origin identification and hidden data detection. These problems are cast into the framework of hypothesis testing theory. The approach proposes to design a statistical test that allows us to guarantee a prescribed false alarm probability. In order to achieve a high detection performance, it is proposed to exploit statistical properties of natural images by modeling the main steps of image processing pipeline of a digital camera. The methodology throughout this manuscript consists of studying an optimal test given by the Likelihood Ratio Test in the ideal context where all model parameters are known in advance. When the model parameters are unknown, a method is proposed for parameter estimation in order to design a Generalized Likelihood Ratio Test whose statistical performances are analytically established. Numerical experiments on simulated and real images highlight the relevance of the proposed approach.Table of Contents 1 General Introduction 1 1.1 General Context and Problem Description . . . . . . . . . . . . . . . 1 1.2 Outline of the Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.3 Publications and Authors’ Contribution . . . . . . . . . . . . . . . . 6 I Overview on Digital Image Forensics and Statistical Image Modeling 9 2 Overview on Digital Image Forensics 11 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.2 Image Processing Pipeline of a Digital Camera . . . . . . . . . . . . 12 2.2.1 RAW Image Formation . . . . . . . . . . . . . . . . . . . . . 13 2.2.2 Post-Acquisition Processing . . . . . . . . . . . . . . . . . . . 15 2.2.3 Image Compression . . . . . . . . . . . . . . . . . . . . . . . . 17 2.3 Passive Image Origin Identification . . . . . . . . . . . . . . . . . . . 19 2.3.1 Lens Aberration . . . . . . . . . . . . . . . . . . . . . . . . . 
23 2.3.2 Sensor Imperfections . . . . . . . . . . . . . . . . . . . . . . . 23 2.3.3 CFA Pattern and Interpolation . . . . . . . . . . . . . . . . . 25 2.3.4 Image Compression . . . . . . . . . . . . . . . . . . . . . . . . 26 2.4 Passive Image Forgery Detection . . . . . . . . . . . . . . . . . . . . 27 2.5 Steganography and Steganalysis in Digital Images . . . . . . . . . . . 29 2.5.1 LSB Replacement Paradigm and Jsteg Algorithm . . . . . . . 32 2.5.2 Steganalysis of LSB Replacement in Spatial Domain . . . . . 33 2.5.3 Steganalysis of Jsteg Algorithm . . . . . . . . . . . . . . . . . 38 2.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 3 Overview on Statistical Modeling of Natural Images 41 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 3.2 Spatial-Domain Image Model . . . . . . . . . . . . . . . . . . . . . . 41 3.2.1 Poisson-Gaussian and Heteroscedastic Noise Model . . . . . . 42 3.2.2 Non-Linear Signal-Dependent Noise Model . . . . . . . . . . . 44 3.3 DCT Coefficient Model . . . . . . . . . . . . . . . . . . . . . . . . . . 45 3.3.1 First-Order Statistics of DCT Coefficients . . . . . . . . . . . 45 3.3.2 Higher-Order Statistics of DCT Coefficients . . . . . . . . . . 46 3.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46viii Table of Contents II Statistical Modeling and Estimation for Natural Images from RAW Format to JPEG Format 47 4 Statistical Image Modeling and Estimation of Model Parameters 49 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 4.2 Statistical Modeling of RAW Images . . . . . . . . . . . . . . . . . . 50 4.2.1 Heteroscedastic Noise Model . . . . . . . . . . . . . . . . . . . 50 4.2.2 Estimation of Parameters (a, b) in the Heteroscedastic Noise Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 4.3 Statistical Modeling of TIFF Images . . . . . . . . . . . . . . . . . . 57 4.3.1 Generalized Noise Model . . . . . . . . . . . . . . . . . . . . . 57 4.3.2 Estimation of Parameters (˜a, ˜b) in the Generalized Noise Model 59 4.3.3 Application to Image Denoising . . . . . . . . . . . . . . . . . 61 4.3.4 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . 62 4.4 Statistical Modeling in DCT Domain . . . . . . . . . . . . . . . . . . 65 4.4.1 Statistical Model of Quantized DCT Coefficients . . . . . . . 65 4.4.2 Estimation of Parameters (α, β) from Unquantized DCT Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 4.4.3 Estimation of Parameters (α, β) from Quantized DCT Coeffi- cients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 4.4.4 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . 71 4.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 III Camera Model Identification in Hypothesis Testing Framework 75 5 Camera Model Identification Based on the Heteroscedastic Noise Model 77 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 5.2 Camera Fingerprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 5.3 Optimal Detector for Camera Model Identification Problem . . . . . 80 5.3.1 Hypothesis Testing Formulation . . . . . . . . . . . . . . . . . 80 5.3.2 LRT for Two Simple Hypotheses . . . . . . . . . . . . . . . . 80 5.4 GLRT with Unknown Image Parameters . . . . . . . . . . . . . . . . 83 5.5 GLRT with Unknown Image and Camera Parameters . . . . . . . . . 
86 5.6 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 89 5.6.1 Detection Performance on Simulated Database . . . . . . . . 89 5.6.2 Detection Performance on Two Nikon D70 and Nikon D200 Camera Models . . . . . . . . . . . . . . . . . . . . . . . . . . 90 5.6.3 Detection Performance on a Large Image Database . . . . . . 91 5.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 5.8 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95Table of Contents ix 5.8.1 Expectation and Variance of the GLR Λhet(z wapp k,i ) under Hypothesis Hj . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 5.8.2 Expectation and Variance of the GLR Λehet(z wapp k,i ) under Hypothesis Hj . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 6 Camera Model Identification Based on the Generalized Noise Model 99 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 6.2 Camera Fingerprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 6.3 Optimal Detector for Camera Model Identification Problem . . . . . 102 6.3.1 Hypothesis Testing Formulation . . . . . . . . . . . . . . . . . 102 6.3.2 LRT for Two Simple Hypotheses . . . . . . . . . . . . . . . . 103 6.4 Practical Context: GLRT . . . . . . . . . . . . . . . . . . . . . . . . 105 6.4.1 GLRT with Unknown Image Parameters . . . . . . . . . . . . 105 6.4.2 GLRT with Unknown Image and Camera Parameters . . . . . 106 6.5 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 108 6.5.1 Detection Performance on Simulated Database . . . . . . . . 108 6.5.2 Detection Performance on Two Nikon D70 and Nikon D200 Camera Models . . . . . . . . . . . . . . . . . . . . . . . . . . 109 6.5.3 Detection Performance on a Large Image Database . . . . . . 110 6.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 6.7 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 7 Camera Model Identification Based on DCT Coefficient Statistics115 7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 7.2 Camera Fingerprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 7.2.1 Design of Camera Fingerprint . . . . . . . . . . . . . . . . . . 116 7.2.2 Extraction of Camera Fingerprint . . . . . . . . . . . . . . . . 117 7.2.3 Property of Camera Fingerprint . . . . . . . . . . . . . . . . . 119 7.3 Optimal Detector for Camera Model Identification Problem . . . . . 120 7.3.1 Hypothesis Testing Formulation . . . . . . . . . . . . . . . . . 120 7.3.2 LRT for Two Simple Hypotheses . . . . . . . . . . . . . . . . 121 7.4 Practical Context: GLRT . . . . . . . . . . . . . . . . . . . . . . . . 123 7.4.1 GLRT with Unknown Parameters αk . . . . . . . . . . . . . . 123 7.4.2 GLRT with Unknown Parameters (αk, c˜k,1, ˜dk,1) . . . . . . . . 124 7.5 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 126 7.5.1 Detection Performance on Simulated Database . . . . . . . . 127 7.5.2 Detection Performance on Two Canon Ixus 70 and Nikon D200 Camera Models . . . . . . . . . . . . . . . . . . . . . . 128 7.5.3 Detection Performance on a Large Image Database . . . . . . 129 7.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 7.7 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 7.7.1 Relation between the Parameters (˜a, ˜b, γ) and (αu,v, βu,v) . . 
132 7.7.2 Laplace’s Approximation of DCT Coefficient Model . . . . . . 133x Table of Contents 7.7.3 Expectation and Variance of the LR Λdct(Ik,i) under Hypothesis Hj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134 7.7.4 Asymptotic Expectation and Variance of the GLR Λedct(Ik,i) under Hypothesis Hj . . . . . . . . . . . . . . . . . . . . . . . 135 IV Statistical Detection of Hidden Data in Natural Images 137 8 Statistical Detection of Data Embedded in Least Significant Bits of Clipped Images 139 8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 8.2 Cover-Image Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 8.2.1 Non-Clipped Image Model . . . . . . . . . . . . . . . . . . . . 141 8.2.2 Clipped Image Model . . . . . . . . . . . . . . . . . . . . . . 142 8.3 GLRT for Non-Clipped Images . . . . . . . . . . . . . . . . . . . . . 142 8.3.1 Impact of LSB Replacement: Stego-Image Model . . . . . . . 142 8.3.2 Hypothesis Testing Formulation . . . . . . . . . . . . . . . . . 143 8.3.3 ML Estimation of Image Parameters . . . . . . . . . . . . . . 144 8.3.4 Design of GLRT . . . . . . . . . . . . . . . . . . . . . . . . . 145 8.4 GLRT for Clipped Images . . . . . . . . . . . . . . . . . . . . . . . . 148 8.4.1 ML Estimation of Image Parameters . . . . . . . . . . . . . . 148 8.4.2 Design of GLRT . . . . . . . . . . . . . . . . . . . . . . . . . 148 8.5 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 151 8.5.1 Detection Performance on Simulated Database . . . . . . . . 151 8.5.2 Detection Performance on Real Image Database . . . . . . . . 153 8.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 8.7 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 8.7.1 Denoising Filter for Non-Clipped RAW Images Corrupted by Signal-Dependent Noise . . . . . . . . . . . . . . . . . . . . . 155 8.7.2 Statistical distribution of the GLR Λbncl(Z) under hypothesis H0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157 8.7.3 Statistical distribution of the GLR Λbncl(Z) under hypothesis H1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158 8.7.4 ML Estimation of Parameters in Truncated Gaussian Data . 159 8.7.5 Statistical distribution of the GLR Λbcl(Z) . . . . . . . . . . . 160 9 Steganalysis of Jsteg Algorithm Based on a Novel Statistical Model of Quantized DCT Coefficients 163 9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 9.2 Optimal Detector for Steganalysis of Jsteg Algorithm . . . . . . . . . 164 9.2.1 Hypothesis Testing Formulation . . . . . . . . . . . . . . . . . 164 9.2.2 LRT for Two Simple Hypotheses . . . . . . . . . . . . . . . . 165 9.3 Quantitative Steganalysis of Jsteg Algorithm . . . . . . . . . . . . . 168 9.3.1 ML Estimation of Embedding Rate . . . . . . . . . . . . . . . 168Table of Contents xi 9.3.2 Revisiting WS estimator . . . . . . . . . . . . . . . . . . . . . 169 9.4 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 169 9.4.1 Detection Performance of the proposed LRT . . . . . . . . . . 169 9.4.2 Accuracy of the Proposed Estimator . . . . . . . . . . . . . . 172 9.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 10 Conclusions and Perspectives 175 10.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 10.2 Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
178 10.2.1 Perspectives to Digital Forensics . . . . . . . . . . . . . . . . 178 10.2.2 Perspectives to Statistical Image Modeling . . . . . . . . . . . 180 10.2.3 Perspectives to Statistical Hypothesis Testing Theory Applied for Digital Forensics . . . . . . . . . . . . . . . . . . . . . . . 180 A Statistical Hypothesis Testing Theory 181 A.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181 A.2 Basic Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182 A.3 Test between Two Simple Hypotheses . . . . . . . . . . . . . . . . . . 183 A.4 Test between Two Composite Hypotheses . . . . . . . . . . . . . . . 187 A.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191 Bibliography 193List of Figures 1.1 Example of falsification. . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.2 Structure of the work presented in this thesis. . . . . . . . . . . . . . 4 2.1 Image processing pipeline of a digital camera. . . . . . . . . . . . . . 13 2.2 Sample color filter arrays. . . . . . . . . . . . . . . . . . . . . . . . . 14 2.3 JPEG compression chain. . . . . . . . . . . . . . . . . . . . . . . . . 17 2.4 Typical steganographic system. . . . . . . . . . . . . . . . . . . . . . 29 2.5 Operations of LSB replacement (top) and Jsteg (bottom). . . . . . . 31 2.6 Diagram of transition probabilities between trace subsets under LSB replacement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 4.1 Scatter-plot of pixels’ expectation and variance from a natural RAW image with ISO 200 captured by Nikon D70 and Nikon D200 cameras. The image is segmented into homogeneous segments. In each segment, the expectation and variance are calculated and the parameters (a, b) are estimated as proposed in Section 4.2.2. The dash line is drawn using the estimated parameters (a, b). Only the red channel is used in this experiment. . . . . . . . . . . . . . . . . . . . . . . . . 51 4.2 Scatter-plot of pixels’ mean and variance from JPEG images with ISO 200 issued from Nikon D70 and Nikon D200 cameras. The red channel is used in this experiment. The image is segmented into homogeneous segments to estimate local means and variances. The generalized noise model is used to fit to the data. . . . . . . . . . . . 58 4.3 Estimated parameters (˜a, ˜b) on JPEG images issued from different camera models in Dresden image database. . . . . . . . . . . . . . . 63 4.4 Comparison between the proposed method and Farid’s for estimation of gamma factor on JPEG images issued from Nikon D200 camera model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 4.5 Comparison between the Laplacian, GΓ and proposed model of DCT coefficients. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 4.6 Comparison between the quantized Laplacian, quantized GΓ and proposed model for quantized AC coefficient. . . . . . . . . . . . . . . . 68 4.7 Averaged χ 2 test statistics of GG, GΓ and proposed model for 63 quantized AC coefficients . . . . . . . . . . . . . . . . . . . . . . . . 71 5.1 Estimated camera parameters (a, b) on 20 RAW images of different camera model with ISO 200 and different camera settings. . . . . . . 79 5.2 Estimated camera parameters (a, b) of different devices per camera model with ISO 200 and different camera settings. . . . . . . . . . . 79xiv List of Figures 5.3 The detection performance of the GLRT δ ? het with 50 pixels selected randomly from simulated images. . . . . . . . . . . . . . . . . . . . . 
85 5.4 The detection performance of the GLRT δe? het with 50 pixels selected randomly on simulated images. . . . . . . . . . . . . . . . . . . . . . 88 5.5 The detection performance of the test δ ? het with 200 pixels selected randomly on simulated images for a0 = 0.0115 and different parameters a1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 5.6 The detection performance of the test δ ? het and δe? het on simulated images for different numbers of pixels. . . . . . . . . . . . . . . . . . 89 5.7 The detection performance of the GLRTs δ ? het and δe? het on the Dresden database for different numbers of pixels. . . . . . . . . . . . . . . . . 91 5.8 Comparison between the theoretical false alarm probability (FAP) and the empirical FAP, from real images of Dresden database, plotted as a function of decision threshold τ . . . . . . . . . . . . . . . . . . . 92 6.1 Empirical distribution of noise residuals z˜ res k,i in a segment compared with theoretical Gaussian distribution. . . . . . . . . . . . . . . . . . 101 6.2 Estimated parameters (˜a, ˜b) on JPEG images issued from Canon Ixus 70 camera with different camera settings. . . . . . . . . . . . . . . . 102 6.3 Estimated parameters (˜a, ˜b) on JPEG images issued from different devices of Canon Ixus 70 model. . . . . . . . . . . . . . . . . . . . . 103 6.4 Detection performance of the proposed tests for 50 and 100 pixels extracted randomly from simulated JPEG images with quality factor 100. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 6.5 Detection performance of the GLRT δe? gen for 100 pixels extracted randomly from simulated JPEG images with different quality factors. 109 6.6 Detection performance of the GLRT δ ? gen and δe? gen for 50 and 100 pixels extracted randomly from JPEG images of Nikon D70 and Nikon D200 cameras. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 6.7 Comparison between the theoretical false alarm probability (FAP) and the empirical FAP, plotted as a function of decision threshold τ . 111 7.1 Estimated parameters (α, β) at frequency (0, 1) and (8, 8) of uniform images generated using a˜ = 0.1, ˜b = 2, γ = 2.2. . . . . . . . . . . . . 117 7.2 Estimated parameters (α, β) at frequency (8, 8) of natural JPEG images issued from Canon Ixus 70 and Nikon D200 camera models. . . 119 7.3 Estimated parameters (˜c, ˜d) at frequency (8, 8) of natural JPEG images issued from different camera models in Dresden database. . . . . 120 7.4 Detection performance of proposed tests on simulated vectors with 1024 coefficients. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 7.5 Detection performance of proposed tests on simulated vectors with 4096 coefficients. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125List of Figures xv 7.6 Detection performance of proposed GLRTs for 1024 coefficients at frequency (8, 8) extracted randomly from simulated images with different quality factors. . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 7.7 Detection performance of proposed tests for different number of coefficients at frequency (8, 8) of natural JPEG images taken by Canon Ixus 70 and Nikon D200 camera models. . . . . . . . . . . . . . . . 127 7.8 Detection performance of the GLRT δe? dct for 4096 coefficients at different frequencies of natural JPEG images taken by Canon Ixus 70 and Nikon D200 camera models. . . . . . . . . . . . . . . . . . . . . 
128 7.9 Comparison between the theoretical false alarm probability (FAP) and the empirical FAP, plotted as a function of decision threshold τ , of the proposed tests at the frequency (8,8) of natural images. . . . . 129 8.1 Detection performance on non-clipped simulated images for embedding rate R = 0.05. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 8.2 Detection performance on clipped simulated images for embedding rate R = 0.05. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 8.3 Detection performance on real clipped images for embedding rate R = 0.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 8.4 Detection performance on real clipped images for embedding rate R = 0.4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152 8.5 Detection performance on real clipped images for embedding rate R = 0.6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152 8.6 Detection performance on 12-bit images taken by Canon 400D with ISO 100 from BOSS database for embedding rate R = 0.05. . . . . . 153 8.7 Detection performance on 5000 images from BOSS database for embedding rate R = 0.05. . . . . . . . . . . . . . . . . . . . . . . . . . . 153 8.8 Empirical false-alarm probability from real images of BOSS database plotted as a function of decision threshold, compared with theoretical FAP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 9.1 Detection performance of the test δ ? jst based on the proposed model with embedding rate R = 0.05 on the simulated images and real images.167 9.2 Detection performance of the test δ ? jst based on the quantized Laplacian, quantized GG, quantized GΓ, and proposed model on the BOSSBase with embedding rate R = 0.05. . . . . . . . . . . . . . . . . . . 170 9.3 Detection performance of the test δ ? jst based on the quantized Laplacian, quantized GG, quantized GΓ, and proposed model on the subset of 1000 images from the BOSSBase with embedding rate R = 0.05. . 171 9.4 Comparison between the proposed test δ ? jst, ZMH-Sym detector, ZP detector, WS detector and quantized Laplacian-based test. . . . . . . 172 9.5 Mean absolute error for all estimators. . . . . . . . . . . . . . . . . . 172 9.6 Mean absolute error for proposed ML estimator, standard WS estimator and improved WS estimator. . . . . . . . . . . . . . . . . . . . 173xvi List of Figures 9.7 Comparison between the proposed test δ ? jst, standard WS detector and improved WS detector. . . . . . . . . . . . . . . . . . . . . . . . 174List of Tables 4.1 Parameter estimation on synthetic images . . . . . . . . . . . . . . . 63 4.2 PSNR of the extended LLMMSE filter . . . . . . . . . . . . . . . . . 64 4.3 χ 2 test statistics of Laplacian, GG, GΓ, and proposed model for the first 9 quantized coefficients of 3 testing standard images. . . . . . . 71 5.1 Camera Model Used in Experiments (the symbol * indicates our own camera) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92 5.2 Detection performance of the proposed detector. . . . . . . . . . . . 93 5.3 Detection performance of PRNU-based detector for ISO 200. . . . . 93 6.1 Camera Model Used in Experiments . . . . . . . . . . . . . . . . . . 112 6.2 Performance of proposed detector . . . . . . . . . . . . . . . . . . . . 112 6.3 Performance of SVM-based detector (the symbol * represents values smaller than 2%) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 6.4 Performance of PRNU-based detector . . . . . . . . . 
. . . . . . . . 113 7.1 Camera Model Used in Experiments . . . . . . . . . . . . . . . . . . 129 7.2 Detection performance of proposed detector δe? dct (the symbol * represents values smaller than 2%) . . . . . . . . . . . . . . . . . . . . . 130 7.3 Detection performance of SVM-based detector . . . . . . . . . . . . 130 7.4 Detection performance of PRNU-based detector . . . . . . . . . . . 131 7.5 Detection performance of proposed detector δ ? dct . . . . . . . . . . . 131 7.6 Detection performance of proposed detector δe? dct on 4 camera models of BOSS database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 10.1 List of proposed statistical tests. . . . . . . . . . . . . . . . . . . . . 177List of Abbreviations Acronym What (it) Stands For AC Alternating Current. AUMP Asymptotically Uniformly Most Powerful. AWGN Additive White Gaussian Noise. cdf cumulative distribution function. CCD Charge-Coupled Device. CFA Color Filter Array. CLT Central Limit Theorem. CMOS Complementary Metal-Oxide Semiconductor. CRF Camera Response Function. DC Direct Current. DCT Discrete Cosine Transform. DSC Digital Still Camera. DSLR Digital Single Lens Reflex. EXIF Exchangeable Image File. GG Generalized Gaussian. GLR Generalized Likelihood Ratio. GLRT Generalized Likelihood Ratio Test. GOF goodness-of-fit. GΓ Generalized Gamma. IDCT Inverse Discrete Cosine Transform. i.i.d independent and identically distributed. JPEG Join Photographic Expert Group. KS Kolmogorov-Smirnov. LSB Least Significant Bit. LR Likelihood Ratio. LRT Likelihood Ratio Test. LS Least Squares. mAE Median Absolute Error. MAE Mean Absolute Error. MGF Moment-Generating Function ML Maximum Likelihood. MM Method of Moments. MP Most Powerful. MSE Mean Squared Error. NP Neyman-Pearson. PCE Peak to Correlation Energy. pdf probability density function. PRNU Photo-Response Non-Uniformity. RLE Run-Length Encoding.xx List of Tables ROC Receiver Operating Characteristic. R/T Rounding and Truncation. SPA Sample Pair Analysis. SPN Sensor Pattern Noise. SVM Support Vector Machine. TIFF Tagged Image File Format. UMP Uniformly Most Powerful. WLS Weighted Least Squares. WS Weighted Stego-image. ZMH Zero Message Hypothesis. ZP Zhang and Ping.Glossary of Notations Notation Definition α0 False alarm probability. β(δ) Power of the test δ. χ 2 K Chi square distribution with K degree of freedom. γ Gamma factor. δ Statistical test. η Noise. κ Quantization step in the spatial domain. µ Expectation. ν Bit-depth. σ Standard deviation. τ Decision threshold. ξ Number of collected electrons, which is modeled by Poisson distribution. ϕ 2-D normalized wavelet scaling function. φ Probability density function of a standard Gaussian random variable. ∆ Quantization step in the DCT domain. Λ Likelihood Ratio. Φ Cumulative distribution function of a standard Gaussian random variable. Θ Parameter space. Cov Covariance. E Mathematical expectation. P[E] Probability that an event E occurs. R Set of real numbers. Var Variance. Z Set of integer numbers. B(n, p) Binomial distribution where n is the number of experiments and p is the success probability of each experiment. D Denoising filter. G(α, β) Gamma distribution with shape parameter α and scale parameter β. H0, H1 Null hypothesis and alternative hypothesis. I Set of pixel indices. Kα0 Class of tests whose false alarm probability is upper-bounded by α0. L Log-likelihood function. N (µ, σ2 ) Gaussian distribution with mean µ and variance σ 2 .xxii List of Tables P(λ) Poisson distribution with mean λ and variance λ. 
Q∆ Quantization with step ∆. S Source of digital images. U[a, b] Uniform distribution on the interval [a, b]. Z Set of possible pixel values. Z N Image space. C Cover-image that is used for data hiding. D Quantized DCT coefficients. F Fisher information. HDM Linear filter for demosaicing. I Unquantized DCT coefficients. Idn Identity matrix of size n × n. K PRNU. M Secret message to be embedded. PCFA CFA pattern. S Stego-image that contains hidden data. Z Natural image. (a, b) Parameters of heteroscedastic noise model. (˜a, ˜b) Parameters of generalized noise model. c Color channel. (˜c, ˜d) Parameters characterizing the relation between α and β, which are the parameters of DCT coefficient model. fCRF Camera response function. fX(x) Probability density function of a random variable X. gWB Gain for white-balancing. pX(x) Probability mass function of a random variable X. r Change rate, r = R 2 . B Boundary of the dynamic range, B = 2ν − 1. L Secret message length. N Number of pixels in the image Z, N = Nr × Nc if Z is a grayscale image and N = Nr × Nc × 3 if Z is a full-color image. Nblk Number of blocks. Nc Number of columns. Nr Number of rows. Ns Number of sources. Pθ Probability distribution characterized by a parameter vector θ. QR,θ Probability distribution after embedding at rate R. R Embedding rate.Chapter 1 General Introduction 1.1 General Context and Problem Description Traditionally, images are considered as trustworthy since as they are known as being captured through analog acquisition devices to depict the real-world happenings. This traditional trustworthiness is built on remarkable difficulties of image content modification. Indeed, modifying the content of a film-based photo requires special skills, yet time-consuming and costly, through dark room tricks. Therefore, this modification is of limited extent. In the past decades, we have witnessed the evolution of digital imaging technology with a dramatic improvement of digital images’ quality. This improvement is not only due to advances in semiconductor fabrication technology that makes it possible to reduce the pixel size in an image sensor and thus raises the total number of pixels, but also advances in image processing technology that allows to reduce noise introduced in a camera and enhance details of the physical scene. The digital revolution largely replace their analog counterparts to enable ease of digital content creation and processing at affordable cost and in mass scale. Nowadays, digital still cameras (DSCs) are taking over a major segment of the consumer photography marketplace. Only at the very high end (large format, professional cameras with interchangeable and highly adjustable lenses) and very low end (inexpensive automated snapshot cameras) are traditional film cameras holding their own. Besides, the development of communication and networking infrastructure allows digital content to be more accessible. One of the greatest advantage of digital images acquired by DSCs is the ease of transmission over communication networks, which film cameras are difficult to enable. Unfortunately, this path of technological evolution may provide means for malicious purposes. Digital images can be easily edited, altered or falsified because of a large availability of low-cost image editing tools. Consequently, falsified photographs are appearing with a growing frequency and sophistication. The credibility and trustworthiness of digital images have been eroded. 
This is more crucial when falsified images that were utilized as evidence in a courtroom could mislead the judgement and lead to either imprisonment for the innocent or freedom for the guilty. In general, the falsification of digital images may result in important consequences in terms of political, economic, and social issues. One example of falsification that causes political issues is given in Figure 1.1. In the left corner image, President G.W. Bush and a young child are both reading from America: A Patriotic Primer by Lynne Cheney. But if we look closely, it2 Chapter 1. General Introduction (a) Forged image. (b) Original image. Figure 1.1: Example of falsification. appears that President Bush is holding his book upside down. An unknown hoaxer has horizontally and vertically flipped the image on the back of the book in Bush’s hands. This photo of George Bush holding a picture book the wrong way up during a visit to a school delighted some opponents of the Republican president, and helped foster his buffoonish image. But press photos from the event in 2002 revealed that Mr Bush had been holding the book correctly, i.e. hoaxers had simply used photo editing software to rotate the cover. The original version of the photo (right corner) was taken in the Summer of 2002 while Bush was visiting George Sanchez Charter School in Houston. It was distributed by the Associated Press. By comparing the forged photo and original photo, it can be noted that a dark blue spot is close to the spine of Bush’s book, but this same spot in the girl’s copy is near the left-hand edge of the book. This forensic clue can be considered as evidence of forgery. However in most of the cases, the forgery is not as easy to detect. The human eyes can hardly differentiate a genuine scene from a deliberately forged scene. Overall, the digital revolution has raised a number of information security challenges. To restore the trust to digital images, the field of digital image forensics was born. Because of importance of information security in many domains, digital image forensics has attracted a great attention from academic researchers, law enforcement, security, and intelligence agencies. Conducting forensic analysis is a difficult mission since forensic analysts need to answer several questions before stating that digital content is authentic: 1. What is the true origin of this content? How was it generated? By whom was it taken? 2. Is the image still depicting the captured original scene? Has its content been altered in some way? How has it been processed? The first question involves the problem of image origin identification. Source information of digital images represents useful forensic clues because knowing the1.2. Outline of the Thesis 3 source device that captures the inspected image can facilitate verification or tracing of device owner as well as the camera taker. This situation is as identical as bullet scratches allowing forensic analysts to match a bullet to a particular barrel or gun and trace the gun owner.1 Besides, knowing device model or brand information can help forensic analysts know more about characteristics of acquisition devices, which leads to a potential improvement of detecting the underlying forgeries that could be performed in the inspected image. Another issue is to determine what imaging mechanism has been used to generate the inspected image (e.g. 
scanners, cell-phone cameras, or computer graphics) before assuming that the inspected image is taken by a digital camera, which can significantly narrow down the search range for the next step of the investigation. The second problem is image content integrity. An image has to be proven authentic and its content has not be forged before it can be used as forensic clues or as evidence in a legal context. Determining whether an image is forged, which manipulation has been performed on the image, or which region of the image has been altered are fundamental tasks. Beside some basic manipulations such as adding, splicing, and removal, the image can be also manipulated by embedding a message into image content directly. The message remains secret such that it is only known by the sender and receiver and an adversary does not recognize its existence visually. This concept is called steganography, which is a discipline of the field of information hiding. However, the concept of steganography has been misused for illegal activities. Detecting existence of secret messages and revealing their content are also the tasks of forensic analysts. This task is called steganalysis. The field of digital image forensics, including steganalysis, is part of an effort to counter cyber-attacks, which is nowadays one of strategy priorities for defence and national security in most countries. 1.2 Outline of the Thesis The main goal of this thesis is to address information security challenges in the field of digital image forensics. In particular, the problems of image origin identification and hidden data detection are studied. The thesis is structured in four main parts. Apart from the first part providing an overview on the field of digital image forensics and statistical image modeling, the rest of the thesis involves many contributions. All the work presented in this thesis is illustrated in Figure 1.2. Part II establishes a profound statistical modeling of natural images by analyzing the image processing pipeline of a digital camera, as well as proposes efficient algorithms for estimation of model parameters from a single image. Typically, the image processing pipeline is composed of three main stages: RAW image acquisition, 1Evidently, tracing an imaging device owner is more difficult as average users have rights to buy a camera easily in a market with millions of cameras while the use of guns is banned or controlled in many countries and a gun user has to register his identities.4 Chapter 1. General Introduction Figure 1.2: Structure of the work presented in this thesis. post-acquisition enhancement, and JPEG compression that employs Discrete Cosine Transform (DCT). Therefore, the statistical image modeling in Part II is performed both in the spatial domain and the DCT domain. By modeling the photo-counting and read-out processes, a RAW image can be accurately characterized by the heteroscedastic noise model in which the RAW pixel is normally distributed and its variance is linearly dependent on its expectation. This model is more relevant than the Additive White Gaussian Noise (AWGN) model widely used in image processing since the latter ignores the contribution of Poisson noise in the RAW image acquisition stage. The RAW image then undergoes post-acquisition processes in order to provide a high-quality full-color image, referred to as TIFF image. 
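To make the heteroscedastic noise model more concrete, the following minimal simulation sketch (written in Python with NumPy; it is an illustration only, not the estimation algorithm developed in Chapter 4) draws pixels whose variance grows linearly with their expectation, Var[z] = a·E[z] + b, and recovers (a, b) by an ordinary least-squares fit of segment variances against segment means. All constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative heteroscedastic parameters: Var[z] = a * E[z] + b.
a_true, b_true = 0.01, 4.0

# Simulate homogeneous segments of a RAW image: in each segment the pixel
# is drawn from N(mu, a*mu + b), i.e. the heteroscedastic noise model.
means = rng.uniform(50, 1000, size=200)
samples = [m + rng.normal(0.0, np.sqrt(a_true * m + b_true), size=2000)
           for m in means]

# Empirical mean and variance per segment.
emp_mean = np.array([s.mean() for s in samples])
emp_var = np.array([s.var(ddof=1) for s in samples])

# Least-squares fit of the linear relation Var = a * mean + b.
A = np.column_stack([emp_mean, np.ones_like(emp_mean)])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, emp_var, rcond=None)
print(f"estimated a = {a_hat:.4f}, b = {b_hat:.2f}")   # close to (0.01, 4.0)
```

The same linear relation between pixel expectation and variance is the quantity exploited later as a camera fingerprint.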
Therefore, to study image statistics in a TIFF image, it is proposed to start from the heteroscedastic noise model and take into account non-linear effect of gamma correction, resulting in a generalized noise model. This latter involves a non-linear relation between pixel’s expectation and variance. This generalized noise model has not been proposed yet in the literature. Overall, the study of noise statistics in the spatial domain indicates the non-stationarity of noise in a natural image, i.e. pixel’s variance is dependent on the expectation rather than being constant in the whole image. Besides, pixels’ expectations, namely the image content, are also heterogeneous. Apart from studying image statistics in the spatial domain, it is proposed to study DCT coefficient statistics. Modeling the distribution of DCT coefficients is not a trivial task due to heterogeneity in the natural image and complexity of image statistics. It is worth noting that most of existing models of DCT coefficients, which are only verified by conducting the goodness-of-fit test with empirical data, are given without a mathematical justification. Instead, this thesis provides a mathematical framework of modeling the statistical distribution of DCT coefficients by relying on the double stochastic model that combines the statistics of DCT coefficients in a block whose variance is constant with the variability of block variance in a natural image. The proposed model of DCT coefficients outperforms the others including the Laplacian, Generalized Gaussian, and Generalized Gamma model. Numerical1.2. Outline of the Thesis 5 results on simulated database and real image database highlight the relevance of the proposed models and the accuracy of the proposed estimation algorithms. The solid foundation established in Part II emphasizes several aspects of interest for application in digital image forensics. Relying on a more relevant image model and an accurate estimation of model parameters, the detector is expected to achieve better detection performance. Part III addresses the problem of image origin identification within the framework of hypothesis testing theory. More particularly, it involves designing a statistical test for camera model identification based on a parametric image model to meet the optimality bi-criteria: the warranting of a prescribed false alarm probability and the maximization of the correct detection probability. Camera model identification based on the heteroscedastic noise model, generalized noise model, and DCT coefficients is respectively presented in Chapter 5, Chapter 6, and Chapter 7. The model parameters are proposed to be exploited as unique fingerprint for camera model identification. In general, the procedure in those chapters is similar. The procedure starts from formally stating the problem of camera model identification into hypothesis testing framework. According to Neyman-Pearson lemma, the most powerful test for the decision problem is given by the Likelihood Ratio Test (LRT). The statistical performance of the LRT can be analytically established. Moreover, the LRT can meet the two required criteria of optimality. However, this test is only of theoretical interest because it is based on an assumption that all model parameters are known in advance. This assumption is hardly met in practice. To deal with the difficulty of unknown parameters, a Generalized Likelihood Ratio Test (GLRT) is proposed. 
The GLRT is designed by replacing unknown parameters by their Maximum Likelihood (ML) estimates in the Likelihood Ratio. Consequently, the detection performance of the GLRT strongly depends on the accuracy of employed image model and parameter estimation. It is shown in Chapter 5, 6, and 7 that the proposed GLRTs can warrant a prescribed false alarm probability while ensuring a high detection performance. Moreover, the efficiency of the proposed GLRTs is highlighted when applying on a large image database. The problem of hidden data detection is addressed in Part IV. This problem is also formulated into hypothesis testing framework. The main idea is to rely on an accurate image model to detect small changes in statistical properties of a cover image due to message embedding. The formulation in the hypothesis testing framework allows us to design a test that can meet two above criteria of optimality. Chapter 8 addresses the steganalysis of Least Significant Bit (LSB) replacement technique in RAW images. More especially, the phenomenon of clipping is studied and taken into account in the design of the statistical test. This phenomenon is due to to limited dynamic range of the imaging system. The impact of the clipping phenomenon on the detection performance of steganalysis methods has not been studied yet in the literature. The approach proposed in Chapter 8 is based on the heteroscedastic noise model instead of the AWGN model. Besides, the approach proposes to exploit the state-of-the-art denoising method to improve the estimation of pixels’ expectation and variance. The detection performance of the proposed6 Chapter 1. General Introduction GLRTs on non-clipped images and clipped images is studied. It is shown that the proposed GLRTs can warrant a prescribed false alarm probability and achieve a high detection performance while other detectors fail in practice, especially the Asymptotically Uniformly Most Powerful (AUMP) test. Next, Chapter 9 addresses the steganalysis of Jsteg algorithm. It should be noted that Jsteg algorithm is a variant of LSB replacement technique. Instead of embedding message bits in the spatial domain, Jsteg algorithm utilizes the LSB of quantized DCT coefficients and embeds message bits in the DCT domain. The goal of Chapter 9 is to exploit the state-of-the-art model of quantized DCT coefficients in Chapter 4 to design a LRT for the steganalysis of Jsteg algorithm. For the practical use, unknown parameters of the DCT coefficient model are replaced by their ML estimates in the Likelihood Ratio. Experiments on simulated database and real image database show a very small loss of power of the proposed test. Furthermore, the proposed test outperforms other existing detectors. Another contributions in Chapter 9 are that a Maximum Likelihood estimator for embedding rate is proposed using the proposed model of DCT coefficients as well as the improvement of the existing Weighted Stego-image estimator by modifying the technique of calculation of weights. 1.3 Publications and Authors’ Contribution Most of the material presented in this thesis appears in the following publications that represent original work, of which the author has been the main contributor. Patents 1. T. H. Thai, R. Cogranne, and F. Retraint, "Système d’identification d’un modèle d’appareil photographique associé à une image compressée au format JPEG, procédé, utilisations and applications associés", PS/B52545/FR, 2014. 2. T. H. Thai, R. Cogranne, and F. 
Retraint, "Système d’identification d’un modèle d’appareil photographique associé à une image compressée au format JPEG, procédé, utilisations and applications associés", PS/B52546/FR, 2014. Journal articles 1. T. H. Thai, R. Cogranne, and F. Retraint, "Camera model identification based on the heteroscedastic noise model", IEEE Transactions on Image Processing, vol. 23, no. 1, pp. 250-263, Jan. 2014. 2. T. H. Thai, F. Retraint, and R. Cogranne, "Statistical detection of data hidden in least significant bits of clipped images", Elsevier Signal Processing, vol. 98, pp. 263-274, May 2014. 3. T. H. Thai, R. Cogranne, and F. Retraint, "Statistical model of quantized DCT coefficients: application in the steganalysis of Jsteg algorithm", IEEE Transactions on Image Processing, vol. 23, no. 5, pp. 1980-1993, May 2014.1.3. Publications and Authors’ Contribution 7 Journal articles under review 1. T. H. Thai, F. Retraint, and R. Cogranne, "Generalized signal-dependent noise model and parameter estimation for natural images", 2014. 2. T. H. Thai, F. Retraint, and R. Cogranne, "Camera model identification based on the generalized noise model in natural images", 2014. 3. T. H. Thai, R. Cogranne, and F. Retraint, "Camera model identification based on DCT coefficient statistics", 2014. Conference papers 1. T. H. Thai, F. Retraint, and R. Cogranne, "Statistical model of natural images", in IEEE International Conference on Image Processing, pp. 2525- 2528, Sep. 2012. 2. T. H. Thai, R. Cogranne, and F. Retraint, "Camera model identification based on hypothesis testing theory", in European Signal Processing Conference, pp. 1747-1751, Aug. 2012. 3. T. H. Thai, R. Cogranne, and F. Retraint, "Steganalysis of Jsteg algorithm based on a novel statistical model of quantized DCT coefficients", in IEEE International Conference on Image Processing, pp. 4427-4431, Sep. 2013. 4. R. Cogranne, T. H. Thai, and F. Retraint, "Asymptotically optimal detection of LSB matching data hiding", in IEEE International Conference on Image Processing, pp. 4437-4441, Sep. 2013. 5. T. H. Thai, R. Cogranne, and F. Retraint, "Optimal detector for camera model identification based on an accurate model of DCT coefficient", in IEEE International Workshop on Multimedia Signal Processing (in press), Sep. 2014. 6. T. H. Thai, R. Cogranne, and F. Retraint, "Optimal detection of OutGuess using an accurate model of DCT coefficients", in IEEE International Workshop on Information Forensics and Security (in press), Dec. 2014.Part I Overview on Digital Image Forensics and Statistical Image ModelingChapter 2 Overview on Digital Image Forensics Contents 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.2 Image Processing Pipeline of a Digital Camera . . . . . . . 12 2.2.1 RAW Image Formation . . . . . . . . . . . . . . . . . . . . . 13 2.2.2 Post-Acquisition Processing . . . . . . . . . . . . . . . . . . . 15 2.2.3 Image Compression . . . . . . . . . . . . . . . . . . . . . . . . 17 2.3 Passive Image Origin Identification . . . . . . . . . . . . . . . 19 2.3.1 Lens Aberration . . . . . . . . . . . . . . . . . . . . . . . . . 23 2.3.2 Sensor Imperfections . . . . . . . . . . . . . . . . . . . . . . . 23 2.3.3 CFA Pattern and Interpolation . . . . . . . . . . . . . . . . . 25 2.3.4 Image Compression . . . . . . . . . . . . . . . . . . . . . . . . 26 2.4 Passive Image Forgery Detection . . . . . . . . . . . . . . . . 27 2.5 Steganography and Steganalysis in Digital Images . . . . . 
29 2.5.1 LSB Replacement Paradigm and Jsteg Algorithm . . . . . . . 32 2.5.2 Steganalysis of LSB Replacement in Spatial Domain . . . . . 33 2.5.2.1 Structural Detectors . . . . . . . . . . . . . . . . . . 33 2.5.2.2 WS Detectors . . . . . . . . . . . . . . . . . . . . . 35 2.5.2.3 Statistical Detectors . . . . . . . . . . . . . . . . . . 36 2.5.2.4 Universal Classifiers . . . . . . . . . . . . . . . . . . 37 2.5.3 Steganalysis of Jsteg Algorithm . . . . . . . . . . . . . . . . . 38 2.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 2.1 Introduction The goal of this chapter is to provide an overview on the field of digital image forensics. As described in Section 1.1, digital image forensics involve two key problems: image origin identification and image forgery detection. In general, there are two approaches to address these problems. Active forensics aims to authenticate image content by generating extrinsically security measures such as digital watermarks [1–6] and digital signatures [7–10] and adding them to the image file. These12 Chapter 2. Overview on Digital Image Forensics security measures are referred to as extrinsic fingerprints. Although active forensics can provide powerful tools to secure a digital camera and restore the credibility of digital images, it is of limited extent due to many strict constraints in its protocols. In order to solve these problems in their entirety, passive forensics has been quickly evolved. In contrast to active forensics, passive forensics does not impose any constraint and do not require any prior information including the original reference image. Forensic analysts have only the suspect image at their disposal and must explore useful information from the image to gather forensic evidence, trace the acquisition device and detect any act of manipulation. Passive forensics works on an assumption that the image contains some internal traces left from the camera. Every stage from real-world scene acquisition to image storage can provide clues for forensic analysis. These internal traces are called intrinsic fingerprints. Extrinsic and intrinsic fingerprints are two forms of digital fingerprints in digital forensics, which are analogous to human fingerprints in criminal domain. Since passive forensics does not require neither any external security measures generated in the digital camera, nor any prior information, it can authenticate an image in a blind manner and can widely be applied to millions of images that circulate daily on communication networks. This thesis mainly addresses the problem of origin identification and integrity based on passive approach. The chapter is organized as follows. Before discussing active and passive forensics, it is vital to understand deeply the creation and characteristics of digital images. Section 2.2 briefly introduces the typical image processing pipeline of a digital camera, highlighting several aspects of potential interest for applications in digital image forensics. Section 2.3 analyzes passive methods proposed for image origin identification. Section 2.4 briefly discusses passive methods for image forgery detection. Next, Section 2.5 introduces the concept of steganography, which is a type of image content manipulation, and presents prior-art methods for detecting secret data embedded in digital images. Finally, Section 2.6 concludes the chapter. 2.2 Image Processing Pipeline of a Digital Camera This thesis only deals with DSCs and digital images acquired by them. 
By terminology, a natural image means a digital image acquired by a DSC. Other sources of digitized images such as scanners are not addressed in this thesis but a similar methodology can be easily derived. Image processing pipeline involves several steps from light capturing to image storage performed in a digital camera [11]. After measuring light intensity at each pixel, RAW image that contains exactly information recorded by the image sensor goes through some typical post-acquisition processes, e.g. demosaicing, whitebalancing and gamma correction, to render a full-color high-quality image, referred to as TIFF image. Image compression can be also performed for ease of storage and transmission. The image processing pipeline of a digital camera is shown in2.2. Image Processing Pipeline of a Digital Camera 13 Figure 2.1: Image processing pipeline of a digital camera. Figure 2.1. It should be noted that the sequence of operations differs from manufacturer to manufacturer but basic operations remain similar. In general, the image processing pipeline designed in a digital camera is complex, with trade-offs in the use of buffer memory, computing operations, image quality and flexibility [12]. This section only discusses some common image processing operations such as demosaicing, white balancing, gamma correction and image compression. Other processing operations, e.g. camera noise reduction and edge enhancement, are not included in this discussion. A full-color digital image consists of three primary color components: red, green, and blue. These three color components are sufficient to represent millions of colors. Formally, the full-color image of a DSC can be represented as a three-dimensional matrix of size Nr × Nc × 3 where Nr and Nc are respectively the number of rows and columns. Let c ∈ {R, G, B} denote a color channel where R, G and B stand for respectively the red, green and blue color. Typically, the output image is coded with ν bits and each pixel value is a natural integer. The set of possible pixel values is denoted by Z = {0, 1, . . . , B} with B = 2ν − 1. Therefore, an arbitrary image belongs to the finite image space Z N with N = Nr × Nc × 3. In general, the image space Z N is high dimensional because of a large number of pixels. To facilitate discussions, let Z denote an image in RAW format and Ze denote an image in TIFF or JPEG format. Each color component of the image Z is denoted by Z c and a pixel of the color channel c at the location (m, n) is denoted by z c (m, n), 1 ≤ m ≤ Nr, 1 ≤ n ≤ Nc. 2.2.1 RAW Image Formation Typically, a digital camera includes an optical sub-system (e.g. lenses), an image sensor and an electronic sub-system, which can be regarded as the eye, retina, and brain in the human visual system. The optical sub-system allows to attenuate effects of infrared rays and to provide an initial optic image. The image sensor consists of a two-dimensional arrays of photodiodes (or pixels) fabricated on a silicon14 Chapter 2. Overview on Digital Image Forensics Figure 2.2: Sample color filter arrays. wafer. Two common types of an image sensor are Charge-Coupled Device (CCD) and Complementary Metal-Oxide Semiconductor (CMOS). Each pixel enables to convert light energy to electrical energy. The output signals of the image sensor are analog. These signals are then converted to digital signals by an analog-to-digital (A/D) converter inside the camera. The RAW image is obtained at this stage. 
Depending on the analog-to-digital circuit of the camera, the RAW image is recorded with 12, 14 or even 16 bits. One key advantage is that the RAW image contains exactly the information recorded by the image sensor and has not yet undergone any post-acquisition operation. This offers more flexibility for further adjustments.

Although the image sensor is sensitive to light intensity, it does not differentiate light wavelengths. Therefore, to record a color image, a Color Filter Array (CFA) is overlaid on the image sensor. Each pixel records a limited range of wavelengths, corresponding to either red, green or blue. Some examples of CFA patterns are shown in Figure 2.2. Among available CFA patterns, the Bayer pattern is the most popular. It contains twice as many green as red or blue samples because the human eye is more sensitive to green light than to red or blue light. The higher sampling rate for the green component allows the luminance component of light to be better captured and, thus, provides better image quality. Few digital cameras acquire full-resolution information for all three color components (e.g. Sigma SD9 or Polaroid x530); this is not only due to the high production cost but also due to the requirement of perfectly aligning the three color planes.

Let Z represent the RAW image recorded by the image sensor. Because of the CFA sampling, the RAW image Z is a single-channel image, namely it is represented as a two-dimensional matrix of size $N_r \times N_c$. Each pixel value of the RAW image Z corresponds to only one color channel. For subsequent processing operations, each color component is extracted from the RAW image Z. A pixel of each color component is given by

$$z^c(m,n) = \begin{cases} z(m,n) & \text{if } P_{\mathrm{CFA}}(m,n) = c,\\ 0 & \text{otherwise,} \end{cases} \qquad (2.1)$$

where $P_{\mathrm{CFA}}$ is the CFA pattern.

The RAW image acquisition stage is not ideal due to the degradation introduced by several noise sources. This stage involves two predominant random noise sources. The first is the Poisson-distributed noise associated with the stochastic nature of the photo-counting process (namely shot noise) and the dark current generated by thermal energy in the absence of light. Dark current is also referred to as Fixed Pattern Noise (FPN). While shot noise results from the quantum nature of light and cannot be eliminated, dark current can be subtracted from the image [13]. The second random noise source accounts for all remaining electronic noises involved in the acquisition chain, e.g. read-out noise, which can be modeled by a zero-mean Gaussian distribution. Apart from random noises, there is also a multiplicative noise associated with the sensor pattern. This noise accounts for differences in the pixels' response to the incident light due to imperfections during the manufacturing process and the inhomogeneity of silicon wafers. Therefore, this noise is referred to as Photo-Response Non-Uniformity (PRNU). The PRNU, which is typically small compared with the signal, is a deterministic component that is present in every image. FPN and PRNU are the two main components of Sensor Pattern Noise (SPN). The PRNU is unique for each sensor, thus it can be further used for forensic analysis.

2.2.2 Post-Acquisition Processing

Although the use of the CFA reduces the cost of the camera, it requires estimating the missing color values at each pixel location in order to render a full-color image.
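As a concrete illustration of the CFA sub-sampling in (2.1), the short Python/NumPy sketch below (illustrative only; the function name and the RGGB layout are assumptions, not taken from the thesis) extracts the three zero-filled color sub-images from a single-channel RAW image.

```python
import numpy as np

def split_cfa(raw, pattern=("R", "G", "G", "B")):
    """Extract zero-filled color components z^c from a Bayer-sampled RAW image.

    `raw` is an (Nr, Nc) array; `pattern` gives the colors of the 2x2 CFA tile
    (top-left, top-right, bottom-left, bottom-right), assumed RGGB by default.
    """
    Nr, Nc = raw.shape
    # Build the full-size CFA map P_CFA by tiling the 2x2 pattern.
    tile = np.array(pattern).reshape(2, 2)
    p_cfa = np.tile(tile, (Nr // 2 + 1, Nc // 2 + 1))[:Nr, :Nc]

    # Equation (2.1): keep raw(m, n) where P_CFA(m, n) = c, set 0 elsewhere.
    return {c: np.where(p_cfa == c, raw, 0) for c in ("R", "G", "B")}

raw = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 RAW image
sub_images = split_cfa(raw)
print(sub_images["G"])    # green samples kept, red/blue positions zeroed
```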
All the zero values in these sub-images then need to be interpolated. This estimation process is commonly referred to as CFA demosaicing or CFA interpolation [14]. Technically, demosaicing algorithms estimate a missing pixel value by using its neighborhood information. The performance of CFA demosaicing greatly affects the image quality. Demosaicing algorithms can generally be classified into two categories: non-adaptive and adaptive algorithms. Non-adaptive algorithms apply the same interpolation technique to all pixels. The nearest-neighbor, bilinear, bicubic, and smooth-hue interpolations are typical examples in this category. For example, the bilinear interpolation can be written as a linear filtering

$$Z^c_{\mathrm{DM}} = H^c_{\mathrm{DM}} \ast Z^c, \qquad (2.2)$$

where $\ast$ denotes the two-dimensional convolution, $Z^c_{\mathrm{DM}}$ stands for the demosaiced image of the color channel c, and $H^c_{\mathrm{DM}}$ is the linear filter for the color channel c:

$$H^{G}_{\mathrm{DM}} = \frac{1}{4}\begin{pmatrix} 0 & 1 & 0\\ 1 & 4 & 1\\ 0 & 1 & 0 \end{pmatrix}, \qquad H^{R}_{\mathrm{DM}} = H^{B}_{\mathrm{DM}} = \frac{1}{4}\begin{pmatrix} 1 & 2 & 1\\ 2 & 4 & 2\\ 1 & 2 & 1 \end{pmatrix}. \qquad (2.3)$$

Although non-adaptive algorithms can provide satisfactory results in smooth regions of an image, they usually fail in textured regions and at edges. Therefore, adaptive algorithms, which are more computationally intensive, employ edge information or inter-channel correlation to find an appropriate set of coefficients that minimizes the overall interpolation error. Because the CFA interpolation commonly estimates a missing pixel value using its neighbors, it creates a correlation between adjacent pixels. This spatial correlation may be amplified during subsequent processing stages.

Furthermore, to improve the visual quality, the RAW image needs to go through further processing steps, e.g. white balancing [11]. In fact, an object may appear different in color when it is illuminated under different light sources. This is due to the color temperature difference of the light sources, which shifts the reflection spectrum of the object away from its true color. In other words, when a white object is illuminated under a light source with low color temperature, the reflection becomes reddish. On the other hand, a light source with high color temperature can cause the white object to appear bluish. The human visual system can recognize the white color of a white object under different light sources. This phenomenon is called color constancy. The digital camera, however, does not have the benefit of millions of years of evolution enjoyed by the human visual system. Therefore, white balance adjustment is implemented in the digital camera to compensate for this illumination imbalance so that a captured white object is rendered white in the image. Basically, white balance adjustment is performed by multiplying the pixels in each color channel by a different gain factor. For instance, one classical white-balancing algorithm is the Gray World algorithm, which assumes that the three color channels average to a common gray value

$$\bar{z}^{R}_{\mathrm{DM}} = \bar{z}^{G}_{\mathrm{DM}} = \bar{z}^{B}_{\mathrm{DM}}, \qquad (2.4)$$

where $\bar{z}^{c}_{\mathrm{DM}}$ denotes the average intensity of the demosaiced image $Z^c_{\mathrm{DM}}$:

$$\bar{z}^{c}_{\mathrm{DM}} = \frac{1}{N_r \cdot N_c} \sum_{m=1}^{N_r} \sum_{n=1}^{N_c} z^{c}_{\mathrm{DM}}(m,n). \qquad (2.5)$$

In this algorithm, the green channel is fixed because the human eye is more sensitive to this channel (i.e. $g^{G}_{\mathrm{WB}} = 1$). The gain factors for the other color channels are given by

$$g^{R}_{\mathrm{WB}} = \frac{\bar{z}^{G}_{\mathrm{DM}}}{\bar{z}^{R}_{\mathrm{DM}}}, \quad \text{and} \quad g^{B}_{\mathrm{WB}} = \frac{\bar{z}^{G}_{\mathrm{DM}}}{\bar{z}^{B}_{\mathrm{DM}}}, \qquad (2.6)$$

where $g^{c}_{\mathrm{WB}}$ denotes the gain factor of the color channel c for white balance adjustment.
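The bilinear interpolation of (2.2)–(2.3) and the Gray World gains of (2.4)–(2.6) can be sketched in a few lines. The snippet below (Python with SciPy, illustrative only; it assumes the zero-filled sub-images produced by the previous sketch and is not the processing implemented in any particular camera) gives a minimal rendering of these two steps.

```python
import numpy as np
from scipy.ndimage import convolve

# Bilinear demosaicing kernels of equation (2.3).
H_G = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
H_RB = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

def bilinear_demosaic(sub_images):
    """Equation (2.2): demosaiced channel = kernel convolved with zero-filled channel."""
    return {
        "R": convolve(sub_images["R"], H_RB, mode="mirror"),
        "G": convolve(sub_images["G"], H_G, mode="mirror"),
        "B": convolve(sub_images["B"], H_RB, mode="mirror"),
    }

def gray_world_gains(dm):
    """Equations (2.4)-(2.6): gains that equalize the channel averages to the green one."""
    mean = {c: dm[c].mean() for c in ("R", "G", "B")}
    return {"R": mean["G"] / mean["R"], "G": 1.0, "B": mean["G"] / mean["B"]}

# Usage with the sub-images from the previous sketch:
# dm = bilinear_demosaic(sub_images)
# gains = gray_world_gains(dm)
# wb = {c: gains[c] * dm[c] for c in ("R", "G", "B")}   # apply the gains channel-wise
```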
The white-balanced image $Z^c_{\mathrm{WB}}$ is then simply given by

$$Z^c_{\mathrm{WB}} = g^c_{\mathrm{WB}} \cdot Z^c_{\mathrm{DM}}. \qquad (2.7)$$

Other white-balancing algorithms may also be designed using different gain factors. Actually, white balance adjustment is a difficult task because appropriate gain factors must be estimated or selected to correct for the illumination imbalance. In this task, prior knowledge of the light source is critical so that the camera can select appropriate gain factors. Therefore, some typical light sources such as daylight, incandescent or fluorescent are stored in the camera. The white balance can be adjusted automatically in the camera. Some expensive cameras employ preprogrammed or manual white balance in order to adapt correctly to the illumination conditions.

Generally, the pixel intensity is still linear with respect to the scene intensity before gamma correction [11, 12]. However, most displays have non-linear characteristics. The transfer function of these devices can be fairly well approximated by a simple power function that relates the luminance L to the voltage V:

$$L = V^{\gamma}. \qquad (2.8)$$

Typically, γ = 2.2. To compensate for this effect and render the luminance in a perceptually uniform domain, gamma correction is performed in the image processing pipeline. Gamma correction is roughly the inverse of Equation (2.8), applied to each input pixel value:

$$z^c_{\mathrm{GM}}(m,n) = \big(z^c_{\mathrm{WB}}(m,n)\big)^{\frac{1}{\gamma}}. \qquad (2.9)$$

After going through all the post-acquisition processes, a full-color high-quality image, referred to as the TIFF image, is rendered. For the sake of simplicity, let $\widetilde{Z}_{\mathrm{TIFF}}$ denote the final full-color TIFF image.

2.2.3 Image Compression

The TIFF format is not convenient for storage or transmission. Therefore, most digital cameras apply lossy compression algorithms to reduce the image data size. Lossy compression algorithms discard information that is not visually significant. They are therefore irreversible: the image reconstructed from the compressed image data is not identical to the original image. Moreover, the use of a lossy compression algorithm is a balancing act between storage size and image quality. An image compressed with a high compression factor requires little storage space, but it will probably be reconstructed with poor quality. Although many lossy compression algorithms have been proposed, most manufacturers predominantly use JPEG compression. The JPEG compression scheme consists of three fundamental settings: color space, subsampling technique, and quantization table. Even though JPEG was already standardized by the Independent JPEG Group [15], manufacturers typically design their own compression scheme for an optimal trade-off between image quality and file size. The fundamental steps of a typical JPEG compression chain are shown in Figure 2.3.

The JPEG compression scheme works in a different color space, typically the YCbCr color space, rather than the RGB color space. The transformation to the YCbCr color space reduces the correlations among the red, green and blue components and allows for more efficient compression. The channel Y represents the luminance of a pixel, and the channels Cb and Cr represent the chrominance. Each channel Y, Cb and Cr is processed separately. In addition, the channels Cb and Cr are commonly subsampled by a factor of 2 horizontally and vertically.
The transformation from the RGB color space to the YCbCr color space is linear:

$$\begin{pmatrix} Y \\ C_b \\ C_r \end{pmatrix} = \begin{pmatrix} 0.299 & 0.587 & 0.114 \\ -0.169 & -0.331 & 0.5 \\ 0.5 & -0.419 & -0.081 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix} + \begin{pmatrix} 0 \\ 128 \\ 128 \end{pmatrix}. \qquad (2.10)$$

To avoid introducing too many symbols, let $\widetilde{Z}_{\mathrm{TIFF}}$ also denote the image obtained after this transformation. The JPEG compression algorithm consists of two key steps: the Discrete Cosine Transform (DCT) and quantization. It works separately on each 8 × 8 block of a color component. The DCT operation converts pixel values from the spatial domain into transform coefficients

$$I(u,v) = \frac{1}{4} T_u T_v \sum_{m=0}^{7} \sum_{n=0}^{7} \tilde{z}_{\mathrm{TIFF}}(m,n) \cos\!\left[\frac{(2m+1)u\pi}{16}\right] \cos\!\left[\frac{(2n+1)v\pi}{16}\right], \qquad (2.11)$$

where $\tilde{z}_{\mathrm{TIFF}}(m,n)$ is a pixel in an 8 × 8 block, 0 ≤ m, n ≤ 7, $I(u,v)$ denotes the two-dimensional DCT coefficient, 0 ≤ u, v ≤ 7, and $T_u$ is the normalization weight

$$T_u = \begin{cases} \frac{1}{\sqrt{2}} & \text{for } u = 0 \\ 1 & \text{for } u > 0. \end{cases} \qquad (2.12)$$

The index of the color channel Y, Cb, or Cr is omitted for simplicity as each color channel is processed separately. The coefficient at location (0, 0), called the Direct Current (DC) coefficient, represents the mean value of the pixels in the 8 × 8 block. The remaining 63 coefficients are called the Alternating Current (AC) coefficients. The DCT is known as a sub-optimal transform with two important properties: energy compaction and decorrelation. In a natural image, the majority of the energy tends to be located in the low frequencies (i.e. the upper left corner of the 8 × 8 grid) while the high frequencies contain information that is not visually significant. The DCT coefficients then go through the quantization process. The quantization is carried out by simply dividing each coefficient by the corresponding quantization step and then rounding to the nearest integer

$$D(u,v) = \mathrm{round}\!\left[\frac{I(u,v)}{\Delta(u,v)}\right], \qquad (2.13)$$

where $D(u,v)$ is the quantized DCT coefficient and $\Delta(u,v)$ is the corresponding element of the 8 × 8 quantization table ∆. The quantization table is designed differently for each color channel. The quantization is irreversible, which makes it impossible to recover the original image exactly. The final processing step is entropy coding, which is a lossless process. It arranges the quantized DCT coefficients into the zig-zag sequence and then employs the Run-Length Encoding (RLE) algorithm and Huffman coding. This step is perfectly reversible.

The JPEG decompression works in the reverse order: entropy decoding, dequantization, and Inverse DCT (IDCT). When the image is decompressed, the entropy coding is first decoded and the two-dimensional quantized DCT coefficients are obtained. The dequantization is performed by multiplying the quantized DCT coefficient $D(u,v)$ by the corresponding quantization step $\Delta(u,v)$

$$I(u,v) = \Delta(u,v) \cdot D(u,v), \qquad (2.14)$$

where $I(u,v)$ stands for the dequantized DCT coefficient. The IDCT operation is applied to the dequantized DCT coefficients to return to the spatial domain

$$\tilde{z}_{\mathrm{IDCT}}(m,n) = \sum_{u=0}^{7} \sum_{v=0}^{7} \frac{1}{4} T_u T_v I(u,v) \cos\!\left[\frac{(2m+1)u\pi}{16}\right] \cos\!\left[\frac{(2n+1)v\pi}{16}\right]. \qquad (2.15)$$

After upsampling the color components and transforming back into the RGB color space, the values are rounded to the nearest integers and truncated to a finite dynamic range (typically [0, 255])

$$\tilde{z}_{\mathrm{JPEG}}(m,n) = \mathrm{trunc}\Big(\mathrm{round}\big[\tilde{z}_{\mathrm{IDCT}}(m,n)\big]\Big), \qquad (2.16)$$

where $\tilde{z}_{\mathrm{JPEG}}(m,n)$ is the final decompressed JPEG pixel. In general, the JPEG pixel $\tilde{z}_{\mathrm{JPEG}}(m,n)$ differs from the original TIFF pixel $\tilde{z}_{\mathrm{TIFF}}(m,n)$ due to the quantization, rounding and truncation (R/T) errors in the process.
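For illustration, the blockwise DCT of (2.11) and the quantization of (2.13) can be reproduced with SciPy's type-II DCT. The sketch below (illustrative only; the constant quantization table is a toy choice, not a real camera's table) computes the quantized coefficients D(u, v) of a single 8 × 8 block, dequantizes them as in (2.14) and reconstructs the block.

```python
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_block_roundtrip(block, q_table):
    """Quantize one 8x8 block in the DCT domain and reconstruct it (no entropy coding)."""
    # The orthonormal ("ortho") 2-D DCT corresponds to the (1/4)*Tu*Tv normalization of (2.11).
    coeffs = dctn(block, norm="ortho")                 # I(u, v)
    quantized = np.round(coeffs / q_table)             # D(u, v), equation (2.13)
    dequantized = q_table * quantized                  # equation (2.14)
    return np.round(idctn(dequantized, norm="ortho"))  # back to pixels, as in (2.15)-(2.16)

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # toy 8x8 block
q_table = np.full((8, 8), 16.0)                           # toy quantization table
reconstructed = jpeg_block_roundtrip(block, q_table)
print(np.abs(reconstructed - block).max())                # quantization error is visible
```

Re-applying the DCT to such a reconstructed block makes the clustering of coefficients around integer multiples of the quantization steps, discussed next, directly visible.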
Note that in this image processing pipeline, R/T errors are only taken into account once, for the sake of simplicity. Because JPEG compression works separately on each $8 \times 8$ block, it generates discontinuities across the boundaries of the blocks, which are known as blocking artifacts [16]. The blocking artifacts are more severe when the quantization steps are coarser. Moreover, because of the quantization in the DCT domain, the DCT coefficients obtained by applying the DCT operation to the decompressed JPEG image cluster around integer multiples of $\Delta(u, v)$, even though those DCT coefficients are perturbed by R/T errors. These two artifacts provide a rich source of information for forensic analysis of digital images.

2.3 Passive Image Origin Identification

Basically, when an image is captured by a camera, it is stored together with metadata headers on a memory storage device. The metadata, e.g. Exchangeable Image File (EXIF) and JPEG headers, contain the whole recording and compression history. Therefore, the simplest way to determine the image's source is to read it directly from the metadata. However, such metadata headers are not always available in practice, e.g. if the image is resaved in a different format or recompressed. Another problem is that metadata headers are not reliable, since they can easily be removed or modified using low-cost editing tools. Therefore, the metadata should not be relied upon in forensic analysis. The common philosophy of the passive approach to image origin identification is to rely on inherent intrinsic fingerprints that the digital camera leaves in a given image. The fingerprint can discriminate different camera brands, camera models, and even camera units. Any method proposed for image origin identification must answer the following questions:

1. Which fingerprints are utilized for origin identification?

2. How can these fingerprints be extracted accurately from a given image?

3. Under which framework is the method designed to exploit the discriminability of fingerprints extracted from images captured by different sources, and to calculate the similarity of fingerprints extracted from images captured by the same source?

Every stage from real-world scene acquisition to image storage can provide intrinsic fingerprints for forensic analysis (see Figure 2.1). Although the image processing pipeline is common to most cameras, each processing step is performed according to the manufacturer's own design. Thus the information left by each processing step is useful to trace back to the source device. A fingerprint must satisfy the three following important requirements:

• Generality: the fingerprint should be present in every image.
• Invariance: the fingerprint does not vary for different image contents.
• Robustness: the fingerprint survives non-linear operations such as lossy compression or gamma correction.

The second question is challenging for any forensic method since the fingerprint extraction may be severely contaminated by non-linear operations (e.g. gamma correction and lossy compression). Generally, the image origin identification problem can be formulated within two frameworks: supervised classification [17–19] and hypothesis testing [20]. Compared with the hypothesis testing framework, the supervised classification framework is the one used by most existing methods in the literature. The construction of a classifier typically consists of two stages: a training stage and a testing stage.
It is assumed that the entire image space $\mathcal{Z}^N$, which includes all images from all the sources in the real world, can be divided into disjoint subsets in which images with the same characteristics from the same source are grouped together (here, the term source means an individual camera instance, a camera model, or a camera brand; other sources such as cell-phone cameras, scanners, and computer graphics are not addressed in this thesis). Let $\{S_1, S_2, \ldots, S_{N_s}\}$ be the $N_s$ different sources that are to be classified. Typically, each source $S_n$, $1 \leq n \leq N_s$, is a subset of $\mathcal{Z}^N$. In the training stage, suppose that $N_{\mathrm{im}}$ images are collected to be representative of each source. Each image in the source $S_n$ is denoted $Z_{n,i}$. Then a feature vector is extracted from each image. Formally, a feature vector is a mapping $f : \mathcal{Z}^N \rightarrow \mathcal{F}$ where each image $Z$ is mapped to an $N_f$-dimensional vector $\mathbf{v} = f(Z)$. Here, $\mathcal{F}$ is called the feature space and $N_f$ is the number of selected features, which is also the dimension of the feature space $\mathcal{F}$. The number of features $N_f$ is very small compared with the number of pixels $N$. Working in a low-dimensional feature space $\mathcal{F}$ that represents the input images is much simpler than working in the high-dimensional noisy image space $\mathcal{Z}^N$. The choice of an appropriate feature vector is of primary importance in the supervised classification framework since the accuracy of a classifier highly depends on it. We thus obtain a set of feature vectors $\{\mathbf{v}_{n,i}\}$ that is representative of each source. In this training stage, feature refinement such as dimensionality reduction or feature selection can also be performed to avoid overtraining and redundant features. The knowledge learned from the set of refined feature vectors helps build a classifier using supervised machine learning algorithms. A classifier is typically a learned function that maps an input feature vector to a corresponding source. In the testing stage, the same feature extraction and feature refinement steps are performed on the testing images, and the output of the trained classifier is the prediction for these input images. Among the many existing powerful machine learning algorithms, Support Vector Machines (SVM) [18, 19] seem to be the most popular choice in passive forensics. SVMs rest on a solid mathematical foundation, namely statistical learning theory [18]. Moreover, their implementation is available for download and is easy to use [21]. The supervised classification framework involves two main drawbacks. To achieve high accuracy, it requires an expensive training stage involving many images with different characteristics (e.g. image content or camera settings) from various sources so as to represent the real-world situation, which might be unrealistic in practice. Another drawback of this framework is that the statistical performance of the trained classifier cannot be established analytically, since it does not rely on knowledge of an a priori statistical distribution of images. In an operational context, such as for law enforcement and intelligence agencies, designing an efficient method might not be sufficient: forensic analysts also require that the false alarm probability be guaranteed below a prescribed rate. The analytic establishment of statistical performance remains an open problem in the machine learning framework [22].
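As a concrete illustration of this supervised workflow, the sketch below maps labeled images to feature vectors, trains an SVM, and predicts the source of a test image with scikit-learn; the feature extractor is a deliberately simplistic stand-in (a few noise-residual statistics), not one of the forensic feature sets cited above.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def toy_features(image):
    """Placeholder feature vector f(Z): a few crude noise-residual statistics."""
    neighbors = (np.roll(image, 1, axis=0) + np.roll(image, -1, axis=0) +
                 np.roll(image, 1, axis=1) + np.roll(image, -1, axis=1)) / 4.0
    residual = image - neighbors                      # simple high-pass residual
    return np.array([residual.mean(), residual.std(),
                     np.abs(residual).mean(), (residual ** 2).mean()])

def train_source_classifier(images, labels):
    """Training stage: map each labeled image to a feature vector, then fit an SVM."""
    X = np.stack([toy_features(z) for z in images])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf

def predict_source(clf, image):
    """Testing stage: extract the same features, then predict the source label."""
    return clf.predict(toy_features(image)[None, :])[0]
```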
On the other hand, the problem of image origin identification problem lends itself to a binary hypothesis testing formulation. Definition 2.1. (Origin identification problem). Given an arbitrary image Z under investigation, to identify the source of the image Z, forensic analysts decide between22 Chapter 2. Overview on Digital Image Forensics two following hypotheses ( H0 : Z is acquired by the source of interest S0 H1 : Z is acquired by a certain source S1 that differs from the source S0. (2.17) Suppose that the source S0 is available, so forensic analysts can have access to its characteristics, or its fingerprints. Therefore, they can make a decision by checking whether the image in question Z contains the fingerprints of the source. Relying on a priori statistical distribution of the image Z under each source, forensic analysts can establish a test statistic that can give a decision rule according to some criteria of optimality. Statistical hypothesis testing theory has been considerably studied and applied in many fields. Several statistical tests as well as criteria of optimality have been proposed. While supervised learning framework only requires to find an appropriate set of forensic features, the most challenging part in hypothesis testing framework is to establish a statistical distribution to accurately characterize a high-dimensional real image. In doing so, hypothesis testing framework allows us to establish analytically the performance of the detector and warrant a prescribed false alarm probability, which are two crucial criteria in the operational context that supervised classification framework can not enable. However, hypothesis testing framework is of limited exploitation in forensic analysis. For the sake of clarity, hypothesis testing theory will be more detailed in Chapter A. There are many passive forensic methods proposed in the literature for image origin identification. In this thesis, we limit the scope of our review to methods for identification of the source of a digital camera (e.g. camera brand, camera model, or individual camera instance). The methods to identify other imaging mechanisms such as cell-phone cameras, scanners, and computer graphics will not be addressed. It is important to distinguish the problem of camera instance identification and the problem of camera model/brand identification. More specifically, fingerprints used for camera instance identification should capture individuality, especially cameras coming from the same brand and model. For camera model/brand identification, it is necessary to exploit fingerprints that are shared between cameras of the same model/brand but discriminative for different camera models/brands. Existing methods in the literature can be broadly divided into two categories. Methods in the first category exploit differences in image processing techniques and component technologies among camera models and manufacturers such as lens aberration [23], CFA patterns and interpolation [24–26] and JPEG compression [27,28]. The main challenge in this category is that the image processing techniques remain identical or similar, and the components produced by a few manufacturers are shared among camera models. Methods in the second category aim to identify unique characteristics or fingerprints of the acquisition device such as PRNU [29–36]. The ability to reliably extract this fingerprint from an image is the main challenge in the second category since different image contents and non-linear operations may2.3. 
Passive Image Origin Identification 23 severely affect this extraction. Below we will present the methods according to the position of exploited fingerprints in the image acquisition pipeline of a digital camera. 2.3.1 Lens Aberration Digital cameras use lenses to capture incident light. Due to the imperfection of the design and manufacturing process, lenses cause undesired effects in output images such as spherical aberration, chromatic aberration, or radial distorsion. Spherical aberration occurs when all incident light rays end up focusing at different points after passing through a spherical surface, especially light rays passing through the periphery of the spherical lens. Chromatic aberration is a failure of lens to converge different wavelengths at the same position on the image sensor. Radial distorsion causes straight lines rendered as curved lines on the image sensor and it occurs when the transverse magnification (ratio of the image distance to the object distance) is not a constant but a function of the off-axis image distance. Among these effects, radial distorsion may be the most severe part that lens produces in output images. Different manufacturers design different lens systems to compensate the effect of radial distorsion. Moreover, focal length also affects the degree of radial distorsion. As a result, each camera brand or model may leave an unique degree of radial distorsion on the output images. Therefore, radial distorsion of lens is exploited in [23] to identify the source of the image. The authors in [23] take the center of an image as the center of distorsion and model the undistorted radius ru as a non-linear function of distorted radius rd ru = rd + k1r 3 d + k2r 5 d . (2.18) Distorsion parameters (k1, k2) are estimated using the straight line method [37,38]. Then the distorsion parameters (k1, k2) are exploited as forensic features to train a SVM classifier. Although experiments provided a promising result, experiments were only conducted on three different camera brands. Experiments on large database including different devices per camera model and different camera models are more desirable. However, this lens aberration-based classifier would fail for a camera with possibly interchangeable lenses, e.g. Digital Single Lens Reflex (DSLR) camera. 2.3.2 Sensor Imperfections As discussed in Section 2.2, imperfections during the manufacturing process and inhomogeneity of silicon wafers leads to slight variations in the response of each pixel to incident light. These slight variations are referred as to PRNU, which is unique for each sensor pattern. Thus PRNU can be exploited to trace down to individual camera instance. The FPN was also used in [39] for camera instance identification. However, the FPN can be easily compensated, thus it is not a robust fingerprint and no longer used in later works.24 Chapter 2. Overview on Digital Image Forensics Generally, PRNU is modeled as a multiplicative noise-like signal [30, 32] Z = µ + µK + Ξ, (2.19) where Z is the output noisy image, µ is the ideal image in the absence of noise, K represents the PRNU, and Ξ accounts for the combination of other noise sources. All operations in (2.19) are pixel-wise. Like supervised classification, the PRNU-based method also consists two stages. The training stage involves collecting Nim images that are acquired by the camera of interest S0 and extracting the PRNU K0 characterizing this camera. 
This is accomplished by applying a denoising filter to each image to suppress the image content, and then performing a Maximum Likelihood (ML) estimation of $K_0$:
$$\hat{K}_0 = \frac{\sum_{i=1}^{N_{\mathrm{im}}} Z^{\mathrm{res}}_i Z_i}{\sum_{i=1}^{N_{\mathrm{im}}} Z_i^2}, \qquad (2.20)$$
where $Z^{\mathrm{res}}_i = Z_i - D(Z_i)$ is the noise residual corresponding to the image $Z_i$, $1 \leq i \leq N_{\mathrm{im}}$, and $D$ stands for the denoising filter. Note that when the PRNU is extracted from JPEG images, it may contain artifacts introduced by CFA interpolation and JPEG compression. These artifacts are not unique to each camera instance and are shared among different camera units of the same model. To render the PRNU unique to the camera and improve the accuracy of the method, a preprocessing step is performed to suppress these artifacts by subtracting the averages from each row and column, and by applying the Wiener filter in the Fourier domain [30]. In the testing stage, given an image under investigation $Z$, the problem of camera source identification (2.17) is rewritten as follows:
$$\begin{cases} \mathcal{H}_0 : Z^{\mathrm{res}} = \mu K_0 + \tilde{\Xi} \\ \mathcal{H}_1 : Z^{\mathrm{res}} = \mu K_1 + \tilde{\Xi}, \end{cases} \qquad (2.21)$$
where the noise term $\tilde{\Xi}$ includes the noise $\Xi$ and additional terms introduced by the denoising filter. This formulation must be understood as follows: hypothesis $\mathcal{H}_0$ means that the noise residual $Z^{\mathrm{res}}$ contains the PRNU $K_0$ characterizing the camera of interest $S_0$, while hypothesis $\mathcal{H}_1$ means the opposite. It should be noted that the PRNU detection problem in [30, 32] is formulated in the reverse direction. The sub-optimal detector for the problem (2.21) is the normalized cross-correlation between the PRNU term $\mu K$ and the noise residual $Z^{\mathrm{res}}$ [30]. In fact, the normalized cross-correlation is derived from the Generalized Likelihood Ratio Test (GLRT) by modeling the noise term $\tilde{\Xi}$ as white noise with known variance [40]. A more stable statistic, derived in [32], is the Peak to Correlation Energy (PCE), as it is independent of the image size and has other advantages such as its response to the presence of weak periodic signals. Theoretically, the decision threshold for the problem (2.21) is given by $\tau = \left[\Phi^{-1}(1 - \alpha_0)\right]^2$, where $\alpha_0$ is the prescribed false alarm probability, and $\Phi(\cdot)$ and $\Phi^{-1}(\cdot)$ denote respectively the cumulative distribution function (cdf) of the standard Gaussian random variable and its inverse. If the PCE exceeds the threshold $\tau$, the image $Z$ is claimed to have been taken by the camera in question. The detection performance can be improved by selecting an appropriate denoising filter [33], attenuating scene details in the test image [34, 35], or estimating the PRNU term with respect to each sub-sample of the CFA pattern [36]. Beyond individual camera instance identification, the PRNU fingerprint can also be used for camera model identification [31]. This is based on the assumption that the fingerprint obtained from TIFF or JPEG images contains traces of post-acquisition processes (e.g. CFA interpolation) that carry information about the camera model. In this case, the above preprocessing step that removes the linear pattern from the PRNU is not performed. The features extracted from the PRNU term, including statistical moments, cross-correlation, block covariance, and linear pattern, are used to train an SVM classifier.
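The following sketch summarizes the PRNU workflow just described: estimate the fingerprint from a set of images via Equation (2.20) using a denoising residual, then score a query image with a normalized correlation. The Gaussian-blur denoiser and the plain correlation score are simplifying assumptions; practical detectors use a wavelet-based denoiser and the PCE statistic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image, sigma=1.0):
    """Z_res = Z - D(Z), with a Gaussian blur as a stand-in denoiser D."""
    image = np.asarray(image, dtype=float)
    return image - gaussian_filter(image, sigma)

def estimate_prnu(images):
    """ML-style estimate of the PRNU K (Eq. 2.20), computed pixel-wise."""
    num = np.zeros_like(np.asarray(images[0], dtype=float))
    den = np.zeros_like(num)
    for z in images:
        z = np.asarray(z, dtype=float)
        num += noise_residual(z) * z
        den += z * z
    return num / np.maximum(den, 1e-12)

def correlation_score(image, k):
    """Normalized correlation between the residual of `image` and mu*K."""
    image = np.asarray(image, dtype=float)
    r = noise_residual(image).ravel()
    s = (image * k).ravel()
    r = r - r.mean()
    s = s - s.mean()
    return float(r @ s / (np.linalg.norm(r) * np.linalg.norm(s) + 1e-12))
```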
2.3.3 CFA Pattern and Interpolation

Different CFA patterns and CFA interpolation algorithms are assumed to be employed by different manufacturers, and even by different camera models; they can therefore be used to discriminate camera brands and camera models. Typically, both the CFA pattern and the interpolation coefficients are unknown in advance, and they must be estimated jointly from a single image. An algorithm has been developed in [24] to jointly estimate the CFA pattern and the interpolation coefficients; it has shown robustness to JPEG compression with low quality factors. First, a search space including 36 possible CFA patterns is established, based on the observation that most cameras use an RGB type of CFA with a fixed periodicity of $2 \times 2$. Since a camera may employ different interpolation algorithms for different types of regions, the given image is classified into three types of regions based on gradient information in a local neighborhood of each pixel: a region containing parts of the image with a significant horizontal gradient, a region containing parts of the image with a significant vertical gradient, and a region including the remaining smooth parts of the image. For every CFA pattern $P_{\mathrm{CFA}}$ in the search space, the interpolation coefficients are computed separately in each region by fitting linear models. Using the final output image $Z$ and the assumed CFA pattern $P_{\mathrm{CFA}}$, we can identify the set of pixels that were acquired directly from the image sensor and those obtained by interpolation. The interpolated pixels are assumed to be a weighted average of the directly acquired pixels, and the interpolation coefficients are obtained by solving the resulting equations. To overcome the difficulty of noisy pixel values and the interference of non-linear post-acquisition processes, singular value decomposition is employed to estimate the interpolation coefficients. These coefficients are then used to re-estimate the output image $\hat{Z}$ and compute the interpolation error $\hat{Z} - Z$. The CFA pattern that gives the lowest interpolation error, together with its corresponding coefficients, is chosen as the final result [24]. Once the interpolation coefficients are estimated from the given image, they are used as forensic features to train an SVM classifier for classification of camera brands and models [24]. The detection performance can be further enhanced by taking into account intra-channel and inter-channel correlations and more sophisticated interpolation algorithms in the estimation methodology [26]. Other features can be used together with the interpolation coefficients, such as the peak location and magnitudes of the frequency spectrum of the probability map [41].

2.3.4 Image Compression

Image compression is the final step in the image processing pipeline. As discussed in Section 2.2, manufacturers have their own compression scheme for an optimal trade-off between image quality and file size. Different component technologies (e.g. lenses, sensors), different in-camera processing operations (e.g. CFA interpolation, white balancing), together with different quantization matrices, jointly result in statistical differences between quantized DCT coefficients. Capturing this statistical difference and extracting useful features from it may make it possible to discriminate different camera brands or camera models. To this end, instead of extracting statistical features directly from the quantized DCT coefficients, features are extracted from the difference JPEG 2-D array [28]. The JPEG 2-D array consists of the magnitudes (i.e. absolute values) of the quantized DCT coefficients. The three reasons for taking absolute values are the following:

1. The magnitudes of DCT coefficients decrease along the zig-zag order.
2. Taking absolute values can reduce the dynamic range of the resulting array.

3. The signs of DCT coefficients mainly carry information about the outlines and edges of the original spatial-domain image, which does not involve information about camera models. Thus, by taking absolute values, all the information regarding camera models remains.

Then, to reduce the influence of the image content and enhance the statistical differences introduced in the image processing pipeline, the difference JPEG 2-D array, defined by taking the difference between an element and one of its neighbors in the JPEG 2-D array, is introduced. The difference can be calculated along four directions: horizontal, vertical, main diagonal, and minor diagonal. To model the statistical differences of quantized DCT coefficients and take into account the correlation between coefficients, the Markov transition probability matrix is exploited. The difference JPEG 2-D array of each direction generates its own transition probability matrix. Each probability value in the transition matrix is given by
$$\mathbf{P}\big[X(u_h+1, u_v) = k \mid X(u_h, u_v) = l\big] = \frac{\sum_{u_h=1}^{N_h}\sum_{u_v=1}^{N_v} \mathbb{1}_{\{X(u_h,u_v)=l,\, X(u_h+1,u_v)=k\}}}{\sum_{u_h=1}^{N_h}\sum_{u_v=1}^{N_v} \mathbb{1}_{\{X(u_h,u_v)=l\}}}, \qquad (2.22)$$
where $X(u_h, u_v)$ denotes an element of the difference JPEG 2-D array and $\mathbb{1}_E$ is the indicator function
$$\mathbb{1}_E = \begin{cases} 1 & \text{if } E \text{ is true} \\ 0 & \text{otherwise.} \end{cases} \qquad (2.23)$$
These steps are performed for the Y and Cb components of the compressed JPEG image. In total, 324 transition probabilities are collected for the Y component and 162 transition probabilities for the Cb component. The transition probabilities are then used as forensic features for SVM classification. Experiments conducted on a large database including 40000 images from 8 different camera models provide good classification performance [28]. In this method, it would be desirable to perform feature refinement in order to reduce the number of features and the complexity of the algorithm.

2.4 Passive Image Forgery Detection

Image forgery detection is another fundamental task of forensic analysts, which aims to detect any act of manipulation of image content. The main assumption is that even though a forger with skills and powerful tools does not leave any perceptible trace of manipulation, the manipulation itself creates inconsistencies in the image content. Depending on which type of inconsistency is investigated and how passive forensic methods operate, they can broadly be divided into five categories. A single method can hardly detect all types of forgery, so forensic analysts should use these methods together to reliably detect a wide variety of tampering.

1. Universal Classifiers: Any act of manipulation may lead to statistical changes in the underlying image. Instead of capturing these changes directly in a high-dimensional and non-stationary image, which is extremely difficult, one approach is to detect changes in a set of features that represent the image. Based on these features, supervised classification is employed to provide universal classifiers that discriminate between unaltered images and manipulated images. Some typical forensic features are higher-order wavelet statistics [42], and image quality and binary similarity measures [43, 44]. These universal classifiers are not only able to detect some basic manipulations such as resizing, splicing, and contrast enhancement, but can also reveal the existence of hidden messages [45].

2.
Camera Fingerprints-Based: A typical scenario of forgery is to cut a portion of an image and paste it into a different image, then create the so-called forged image. The forged region may not be taken by the same camera as remaining regions of the image, which results in inconsistencies in camera fingerprints between those regions. Therefore, if these inconsistencies exist in an image, we could assume that the image is not authentic. For authentication, existing methods have exploited many camera fingerprints such as chromatic aberration [46], PRNU [30,32], CFA interpolation and correlation [25, 47–49], gamma correction [50, 51], Camera Response Function (CRF) [52–54]. 3. Compression and Coding Fingerprints-Based: Nowadays most commercial cameras export images in JPEG format for ease of storage and transmission. As discussed in Section 2.2, JPEG compression introduces two important artifacts: clustering of DCT coefficients around integer multiples of the quan-28 Chapter 2. Overview on Digital Image Forensics tization step, and blocking artifacts. Checking inconsistencies in these two artifacts can trace the processing history of an image and determine its origin and authenticity. A possible scenario is that while the original image is saved in JPEG format, a forger could save it in a lossless format after manipulation. Existence of these artifacts in an image in a lossless format can show that it has been previously compressed [55–57]. Another scenario is that the forger could save the manipulated image in JPEG format, which means that the image has undergone JPEG compression twice. Detection of double JPEG compression can be performed by checking periodic patterns (e.g. double peaks and missing centroids) in the histogram of DCT coefficients due to different quantization steps [51, 58, 59], which are not present in singly compressed images, or using the distribution of the first digit of DCT coefficients [60, 61]. The detection of double JPEG compression is of greater interest since it can reveal splicing or cut-and-paste forgeries due to the fact that the the forged region and remaining regions of the image may not have the same processing history. Inconsistencies can be identified either in DCT domain [62–65] or in spatial domain via blocking artifacts [66, 67]. Furthermore, the detection of double JPEG compression can be applied for detecting hidden messages [58, 59]. 4. Manipulation-Specific Fingerprints-Based: Each manipulation may leave specific fingerprints itself within an image, which can be used as evidence of tampering. For example, resampling causes specific periodic correlations between neighboring pixels. These correlations can be estimated based on the Expectation Maximization (EM) algorithm [68], and then used to detect the resampling [68, 69]. Furthermore, resampling can be also detected by identifying periodicities in the average of an image’s second derivative along its row and columns [70], or periodicities in the variance of an image’s derivative [71]. Contrast enhancement creates impulsive peaks and gaps in the histogram of the image’s pixel value. These fingerprints can be detected by measuring the amount of high frequency energy introduced into the Fourier transform of an image’s pixel value histogram [72]. Median filtering introduces streaking into the signals [73]. Streaks correspond to a sequence of adjacent signal observations all taking the same value. 
Therefore, median filtering can be detected by analyzing statistical properties of the first difference of an image’s pixel values [74–77]. Splicing disrupts higher-order Fourier statistics, which leaves traces to detect splicing [78]. 5. Physical Inconsistencies-Based: Methods in this category do not make use of any form of fingerprints but exploit properties of lighting environment for forgery detection. The main assumption is that all the objects within an image are typically illuminated under the same light sources, so the same properties of lighting environments. Therefore, difference in lighting across an image can be used as evidence of tampering, e.g. splicing. To this end, it is necessary to estimate the direction of the light source illuminating an object. This can be accomplished by considering two-dimensional [79] or three-dimensional [80]2.5. Steganography and Steganalysis in Digital Images 29 Figure 2.4: Typical steganographic system. surface normals, and illumination under a single light source [79] or even under multiple light sources [81]. The lighting environment coefficients of all objects in an image are then used for checking inconsistencies. 2.5 Steganography and Steganalysis in Digital Images Steganography is the art and science of hiding communication. The concept of steganography is used for invisible communication between only two parties, the sender and the receiver, such that the message exchanged between them can not be detected by an adversary. This communication can be illustrated by prisoners’ problem [82]. Two prisoners, Alice and Bob, want to develop an escape plan but all communications between them are unfortunately monitored by a warden named Wendy. The escape plan must be kept secret and exchanged without raising Wendy’s suspicion. It means that the communication does not only involve the confidentiality of the escape plan but also its undetectability. For this purpose, a practical way is to hide the the escape plan, or the secret message in a certain ordinary object and send it to the intended receiver. By terminology, the original object that is used for message hiding is called cover-object and the object that contains the hidden message is called stego-object. The hiding technique does not destroy the object content perceptibly to not raise Wendy’s suspicion, nor modify the message content so the receiver could totally understand the message. The advances in information technologies make digital media (e.g. audio, image, or video) ubiquitous. This ubiquity facilitates the choice of a harmless object in which the sender can hide a secret message, so sending such media is inconspicuous. Furthermore, the size of digital media is typically large compared to the size of secret message. Thus the secret message can be easily hidden in digital media without visually destroying digital content. Most of researches focus on digital images, which are also the type of media addressed in this thesis. A typical steganographic system is shown in Figure 2.4. It consists of two stages:30 Chapter 2. Overview on Digital Image Forensics embedding stage and extraction stage. When Alice wants to send a secret message M, she hides it into a cover-image C using a key and an embedding algorithm. The secret message M is a binary sequence of L bits, M = (m1, m2, . . . , mL) T with mi ∈ {0, 1}, 1 ≤ i ≤ L. The resulting stego-image S is then transmitted to Bob via an insecure channel. 
Bob can retrieve the message M since he knows the embedding algorithm used by Alice and has access to the key used in embedding process. Bob does not absolutely require the original cover-image C for message extraction. From the Kerckhoffs’ principle [83], it is assumed that in digital steganography, steganographic algorithms are public so that all parties including the warden Wendy have access to them. The security of the steganographic system relies solely on the key. The key could be secret key exchanged between Alice and Bob through a secure channel, or public key. In general, steganographic systems can be evaluated by three basic criteria: capacity, security, and robustness. Capacity is defined as the maximum length of a secret message. The capacity depends on the embedding algorithm and properties of cover-images. The security of a steganographic system is evaluated by the undetectability rather than the difficulty of reading the message content in case of cryptographic system. However, we can see that steganographic systems also exploit the idea of exchange of keys (secret and public) from cryptographic system to reinforce the security. Robustness means the difficulty of removing a hidden message from a stego-image, so the secret message survives some accidental channel distortions or systematic interference of the warden that aims to prevent the use of steganography. It can be noted that longer messages will lead to more changes in the cover image, thus less security. In brief, these three criteria are mutually dependent and are balanced when designing a steganographic system. The purpose of steganography is to secretly communicate through a public channel. However, this concept has been misused by anti-social elements, criminals, or terrorists. It could lead to important consequences to homeland security or national defence when, for example, two terrorists exchange a terrorist plan. Therefore, it is urgent for law enforcement and intelligence agencies to build up a methodology in order to detect the mere existence of a secret message and break the security of steganographic systems. Embedding a secret message into a cover-image is also an act of manipulating image content, so steganalysis is one of important tasks of forensic analysts, or steganalysts in this case. Unlike in cryptanalysis, the steganalyst Wendy does not require to retrieve the actual message content. As soon as she have detected its existence in an image, she can cut off the communication channel by putting two prisoners in separate cells. This is the failure of steganography. Besides, the task of steganalysis must be accomplished blindly without knowledge of original cover image. Generally, the steganalyst Wendy can play either active or passive role. While the active steganalyst is allowed to modify exchanged objects through the public channel in order to prevent the use of steganography, the passive steganalyst is not. The only goal of passive steganalyst is to detect the presence of a hidden message in a given image, which is also the typical scenario on which most of researches mainly2.5. Steganography and Steganalysis in Digital Images 31 Figure 2.5: Operations of LSB replacement (top) and Jsteg (bottom). focus. It can be noted that the steganalysis is like the coin-tossing game since the decision of steganalysts is made by telling that the given image is either a coverimage or a stego-image. Hence in any case, steganalysts can get a correct detection probability of 50%. 
However, steganalysts should formulate the problem of hidden message detection in a more formal manner and design a powerful steganalysis tool whose correct detection probability is higher than that of a random guess. Apart from detecting the presence of a hidden message, it may be desirable for steganalysts to estimate the message length, or to brute-force the secret key and retrieve the message content. The estimation of the message length is called quantitative steganalysis. Brute-forcing the secret key and extracting the message content are referred to as forensic steganalysis. As stated above, designing a steganographic system is a trade-off between three basic criteria. Many steganographic algorithms have thus been proposed for different purposes, such as mimicking natural processing [84–86], preserving a model of cover-images [87, 88], or minimizing a distortion function [89, 90]. Among the available algorithms, Least Significant Bit (LSB) replacement might be the oldest embedding technique in digital steganography. This algorithm is simple and easy to implement, and is therefore available in numerous low-cost steganographic softwares on the Internet despite its relative insecurity. In addition, LSB replacement inspires a majority of other steganographic algorithms (e.g. LSB matching [91], Jsteg [92]). The Jsteg algorithm is simply the implementation of LSB replacement in the DCT domain. Therefore, understanding the LSB replacement paradigm is a good starting point before addressing more complex embedding paradigms. In this thesis, we only review the LSB replacement and Jsteg algorithms, together with the powerful steganalysis detectors proposed for them in the literature. The reader is referred to [93–95] for other steganographic and steganalysis methods.

2.5.1 LSB Replacement Paradigm and Jsteg Algorithm

Considering the cover-image $C$ as a column vector, the LSB replacement technique involves choosing a subset of $L$ cover-pixels $\{c_1, c_2, \ldots, c_L\}$ and replacing the LSB of each cover-pixel by a message bit. The LSB of a cover-pixel $c_i$ is defined as follows:
$$\mathrm{LSB}(c_i) = c_i - 2\Big\lfloor \frac{c_i}{2} \Big\rfloor, \qquad (2.24)$$
where $\lfloor \cdot \rfloor$ is the floor function. The LSB of the cover-pixel $c_i$ takes values in $\{0, 1\}$. Therefore, by embedding a message bit $m_i$ into the cover-pixel $c_i$, the stego-pixel $s_i$ is given by
$$s_i = 2\Big\lfloor \frac{c_i}{2} \Big\rfloor + m_i. \qquad (2.25)$$
We see that when $\mathrm{LSB}(c_i) = m_i$, the pixel value does not change after embedding, $s_i = c_i$. By contrast, when $\mathrm{LSB}(c_i) \neq m_i$, the stego-pixel $s_i$ can be written as a function of the cover-pixel $c_i$ in the following manner:
$$s_i = c_i + 1 - 2 \cdot \mathrm{LSB}(c_i) = c_i + (-1)^{c_i} \triangleq \bar{c}_i, \qquad (2.26)$$
where $\bar{c}_i$ is the pixel with flipped LSB. In other words, even values are never decremented whereas odd values are never incremented. The absolute difference between a cover-pixel $c_i$ and a stego-pixel $s_i$ is at most 1, $|c_i - s_i| \leq 1$, so the artifact caused by the insertion of the secret message $M$ is imperceptible to human vision. The operation of the LSB replacement technique is illustrated in Figure 2.5. One problem that remains to be solved is the choice of the subset of cover-pixels, i.e. the sequence of pixel indices used in the embedding process. To increase the complexity of the algorithm, the sender can follow a pseudorandom path generated from the secret key shared between the sender and the receiver, so that the secret message bits are spread randomly over the cover-image. The distance between two embedded bits is then also determined pseudorandomly, which would not raise the suspicion of the warden.
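A minimal sketch of LSB replacement along a key-driven pseudorandom path, following Equations (2.24)–(2.25); the use of NumPy's seeded generator as the shared "key" is an illustrative assumption.

```python
import numpy as np

def lsb_embed(cover, message_bits, key=12345):
    """Embed message bits by LSB replacement (Eqs. 2.24-2.25).

    cover        : 1-D integer array of cover-pixels.
    message_bits : sequence of bits (0/1) of length L <= len(cover).
    key          : seed of the pseudorandom path shared with the receiver.
    """
    stego = cover.copy()
    path = np.random.default_rng(key).permutation(cover.size)[:len(message_bits)]
    for idx, bit in zip(path, message_bits):
        stego[idx] = 2 * (stego[idx] // 2) + bit     # replace the LSB
    return stego

def lsb_extract(stego, length, key=12345):
    """Recover the message by reading LSBs along the same pseudorandom path."""
    path = np.random.default_rng(key).permutation(stego.size)[:length]
    return [int(stego[idx] & 1) for idx in path]

if __name__ == "__main__":
    cover = np.arange(16, dtype=np.int64)
    msg = [1, 0, 1, 1, 0]
    assert lsb_extract(lsb_embed(cover, msg), len(msg)) == msg
```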
We can see that the number of message bits that can be embedded cannot exceed the number of pixels of the image $Z$: $L \leq N$, which leads us to define an embedding rate $R$:
$$R = \frac{L}{N}. \qquad (2.27)$$
This embedding rate $R$ is a measure of the capacity of a steganographic system based on the LSB replacement technique. The Jsteg algorithm is a variant of the LSB replacement technique: it embeds the secret message in the DCT domain by replacing the LSBs of quantized DCT coefficients with message bits. The difference from the LSB replacement technique in the spatial domain is that the Jsteg algorithm does not embed message bits in the coefficients that are equal to 0 or 1, since the artifacts caused by such embedding would be perceptible and easily detected. The DC coefficient is not used either, for the same reason. The AC coefficients that differ from 0 and 1 are the usable coefficients. Consequently, the embedding rate $R$ of the Jsteg algorithm is defined as the ratio between the length $L$ and the number of usable coefficients in the cover-image $C$:
$$R = \frac{L}{\sum_{k=2}^{64} n_k}, \qquad (2.28)$$
where $n_k$ is the number of usable coefficients at the frequency $k$, $2 \leq k \leq 64$.

2.5.2 Steganalysis of LSB Replacement in Spatial Domain

Like the origin identification problem (2.17), the steganalysis problem can also be formulated as a binary hypothesis test.

Definition 2.2. (Steganalysis problem). Given a suspect image $Z$, to verify whether the image $Z$ contains a secret message or not, the steganalyst decides between the two following hypotheses:
$$\begin{cases} \mathcal{H}_0 : Z = C, \text{ no hidden message} \\ \mathcal{H}_1 : Z = S, \text{ with hidden message.} \end{cases} \qquad (2.29)$$
Several methods have been proposed in the literature to solve the steganalysis problem (2.29). Even though the secret message is imperceptible to the human eye, the act of embedding a secret message modifies the cover content and leaves artifacts that can be detected. Steganalysis methods for LSB replacement can be roughly divided into four categories: structural detectors, Weighted Stego-image (WS) detectors, statistical detectors, and universal classifiers. Typically, structural detectors and WS detectors are quantitative detectors that provide an estimate of the secret message length, while statistical detectors and universal classifiers attempt to separate stego-images from cover-images based on changes in the statistical properties of cover-images due to message embedding. Below we briefly discuss each category of detectors.

2.5.2.1 Structural Detectors

Structural detectors exploit combinatorial measures of the artificial dependence between sample differences and the parity structure introduced by LSB replacement in order to estimate the secret message length. Some representatives of this category are the Regular-Singular (RS) analysis [96], the Sample Pair Analysis (SPA) [97–99], and the Triple/Quadruple analysis [100, 101]. The common framework is to model the effects of LSB replacement as a function of the embedding rate $R$, invert these effects to approximate cover-image properties from the stego-image, and find the best candidate $\hat{R}$ matching the cover assumptions. Both the RS and SPA methods rely on evaluating groups of spatially adjacent pixels. The observations made in the RS analysis were formally justified in the SPA. For pedagogical reasons, we discuss the SPA method.
For the presentation of the SPA method, we use the extensible alternative notations of [95, 101].

Figure 2.6: Diagram of transition probabilities between trace subsets under LSB replacement.

Given an image $Z$, we define a trace set $\mathcal{C}_k$ that collects all pairs of adjacent pixels $(z_{2i}, z_{2i+1})$ as follows:
$$\mathcal{C}_k = \Big\{ (2i, 2i+1) \in \mathcal{I}^2 \;\Big|\; \Big\lfloor \frac{z_{2i}}{2} \Big\rfloor = \Big\lfloor \frac{z_{2i+1}}{2} \Big\rfloor + k \Big\}, \qquad (2.30)$$
where $\mathcal{I}$ is the set of pixel indices. Each trace set $\mathcal{C}_k$ is then partitioned into four trace subsets, $\mathcal{C}_k = \mathcal{E}_{2k} \cup \mathcal{E}_{2k+1} \cup \mathcal{O}_{2k} \cup \mathcal{O}_{2k-1}$, where $\mathcal{E}_k$ and $\mathcal{O}_k$ are defined by
$$\begin{cases} \mathcal{E}_k = \big\{ (2i, 2i+1) \in \mathcal{I}^2 \;\big|\; z_{2i} = z_{2i+1} + k,\ z_{2i+1} \text{ is even} \big\} \\ \mathcal{O}_k = \big\{ (2i, 2i+1) \in \mathcal{I}^2 \;\big|\; z_{2i} = z_{2i+1} + k,\ z_{2i+1} \text{ is odd} \big\}. \end{cases} \qquad (2.31)$$
We can observe that the LSB replacement technique never changes the trace set $\mathcal{C}_k$ of a sample pair but can move sample pairs between trace subsets. The transition probabilities can therefore be established as functions of the embedding rate $R$, as shown in Figure 2.6. Thus we can derive the relation between the trace subsets of a stego-image and those of a cover-image:
$$\begin{pmatrix} |\mathcal{E}^s_{2k+1}| \\ |\mathcal{E}^s_{2k}| \\ |\mathcal{O}^s_{2k}| \\ |\mathcal{O}^s_{2k-1}| \end{pmatrix} = \begin{pmatrix} (1-\tfrac{R}{2})^2 & \tfrac{R}{2}(1-\tfrac{R}{2}) & \tfrac{R}{2}(1-\tfrac{R}{2}) & \tfrac{R^2}{4} \\ \tfrac{R}{2}(1-\tfrac{R}{2}) & (1-\tfrac{R}{2})^2 & \tfrac{R^2}{4} & \tfrac{R}{2}(1-\tfrac{R}{2}) \\ \tfrac{R}{2}(1-\tfrac{R}{2}) & \tfrac{R^2}{4} & (1-\tfrac{R}{2})^2 & \tfrac{R}{2}(1-\tfrac{R}{2}) \\ \tfrac{R^2}{4} & \tfrac{R}{2}(1-\tfrac{R}{2}) & \tfrac{R}{2}(1-\tfrac{R}{2}) & (1-\tfrac{R}{2})^2 \end{pmatrix} \begin{pmatrix} |\mathcal{E}^c_{2k+1}| \\ |\mathcal{E}^c_{2k}| \\ |\mathcal{O}^c_{2k}| \\ |\mathcal{O}^c_{2k-1}| \end{pmatrix}, \qquad (2.32)$$
where $\mathcal{E}^c_k$ and $\mathcal{O}^c_k$ are the trace subsets of the cover-image, $\mathcal{E}^s_k$ and $\mathcal{O}^s_k$ are the trace subsets of the stego-image, and $|\mathcal{S}|$ denotes the cardinality of the set $\mathcal{S}$. After inverting the transition matrix and assuming that $|\mathcal{E}^c_{2k+1}| = |\mathcal{O}^c_{2k+1}|$, we obtain a quadratic equation:
$$0 = R^2 \big( |\mathcal{C}_k| - |\mathcal{C}_{k+1}| \big) + 4 \big( |\mathcal{E}^s_{2k+1}| - |\mathcal{O}^s_{2k+1}| \big) + 2R \big( |\mathcal{E}^s_{2k+2}| + |\mathcal{O}^s_{2k+2}| - 2|\mathcal{E}^s_{2k+1}| + 2|\mathcal{O}^s_{2k+1}| - |\mathcal{E}^s_{2k}| - |\mathcal{O}^s_{2k}| \big). \qquad (2.33)$$
The solution of Equation (2.33) is an estimator of the embedding rate $R$. The SPA method was further improved by combining it with the Least Squares (LS) method [98] and Maximum Likelihood [99], or by generalizing from the analysis of pairs to the analysis of $k$-tuples [100, 101].

2.5.2.2 WS Detectors

WS detectors were originally proposed by J. Fridrich in [102] and then improved in [103, 104]. The key idea of WS is that the embedding rate can be estimated via the weight that minimizes the distance between the weighted stego-image and the cover-image [95]. The weighted stego-image with scalar parameter $\lambda$ of the image $Z$ is defined by
$$\forall i \in \mathcal{I}, \quad z^{(\lambda)}_i = (1-\lambda) z_i + \lambda \bar{z}_i, \quad \text{with } \bar{z}_i = z_i + (-1)^{z_i}. \qquad (2.34)$$
The estimator $\hat{R}$ can be obtained by minimizing the Euclidean distance between the weighted stego-image and the cover-image:
$$\hat{R} = 2 \arg\min_{\lambda} \sum_{i=1}^{N} w_i \big( z^{(\lambda)}_i - c_i \big)^2, \qquad (2.35)$$
where the normalized weight vector $\mathbf{w}$ with $\sum_{i=1}^{N} w_i = 1$ is taken into account in the minimization problem (2.35) to reflect the heterogeneity of a natural image. By setting the first derivative in (2.35) to zero, a simplified estimator is obtained as
$$\hat{R} = 2 \sum_{i=1}^{N} w_i (z_i - \bar{z}_i)(z_i - c_i). \qquad (2.36)$$
Since the cover-pixels $c_i$ are unknown in advance, a local estimator of each pixel from its spatial neighborhood, or more generally a linear filter $D$, can be employed to provide an estimate of the cover-image: $\hat{C} = D(Z)$. The estimator $\hat{R}$ in (2.36) follows immediately. From the above observations, the choices of an appropriate linear filter $D$ and weight vector $\mathbf{w}$ are crucial for improving the performance of WS detectors [95, 103, 104].
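A minimal sketch of the WS estimator (2.36), using the average of the four horizontal and vertical neighbors as the cover predictor D and uniform weights; both choices are illustrative simplifications of the filters and variance-based weights studied in [103, 104].

```python
import numpy as np

def ws_estimate(z):
    """Weighted-stego estimate of the embedding rate (Eq. 2.36).

    z : 2-D integer array (grayscale image under investigation).
    """
    z = z.astype(float)
    z_bar = z + 1.0 - 2.0 * (z % 2)              # pixel with flipped LSB (Eq. 2.26)
    # Cover prediction D(Z): mean of the 4-connected neighbors (interior pixels only)
    c_hat = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1] + z[1:-1, :-2] + z[1:-1, 2:])
    zi = z[1:-1, 1:-1]
    zbi = z_bar[1:-1, 1:-1]
    w = np.full(zi.shape, 1.0 / zi.size)         # uniform normalized weights
    return float(2.0 * np.sum(w * (zi - zbi) * (zi - c_hat)))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    cover = rng.integers(0, 256, size=(64, 64))
    print("estimated rate on a cover image:", round(ws_estimate(cover), 3))
```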
Both structural detectors and WS detectors are established within the quantitative steganalysis framework, which means that instead of indicating whether a suspect image $Z$ is a cover-image or a stego-image, the output of those detectors is a real-valued estimate of the secret message length. In other words, even when no secret message is embedded in the image, i.e. $R = 0$, we could still obtain a negative or positive value. Nevertheless, quantitative detectors offer an additional advantage over statistical detectors, namely that the detection performance can be measured by evaluating the deviation of the estimator $\hat{R}$ from the true embedding rate $R$. Some criteria can be used as measures of performance, such as

• Mean Absolute Error (MAE):
$$\frac{1}{N_{\mathrm{im}}} \sum_{n=1}^{N_{\mathrm{im}}} |\hat{R}_n - R|, \qquad (2.37)$$
where $N_{\mathrm{im}}$ is the number of images.

• Median Absolute Error (mAE):
$$\mathrm{median}_n \, |\hat{R}_n - R|. \qquad (2.38)$$

2.5.2.3 Statistical Detectors

In contrast to structural detectors and WS detectors, statistical detectors rely on changes in statistical properties due to message embedding in order to detect the presence of the secret message. The output of statistical detectors is a binary decision. Some representatives are the $\chi^2$ detector [105] and the Bayesian approach-based detector [106]. Another interesting approach is the one proposed in [107], which is based on statistical hypothesis testing theory. To this end, two preliminary assumptions are given in the following proposition:

Proposition 2.1. In the LSB replacement embedding technique, we assume that

1. The secret message bits are uniformly distributed over the cover-image, namely that the probability of embedding a message bit into every cover-pixel is identical. Moreover, message bits and cover-pixels are statistically uncorrelated [107].

2. Secret message bits are independent and identically distributed (i.i.d.), and each message bit $m_i$ is drawn from the Binomial distribution $\mathcal{B}(1, \frac{1}{2})$:
$$\mathbf{P}[m_i = 0] = \mathbf{P}[m_i = 1] = \frac{1}{2}, \qquad (2.39)$$
where $\mathbf{P}[E]$ denotes the probability that an event $E$ occurs.

Therefore, from the mechanism of LSB replacement, we can see that the probability that a pixel does not change after embedding is $1 - \frac{R}{2}$ while the probability that its LSB is flipped is $\frac{R}{2}$:
$$\mathbf{P}[s_i = c_i] = 1 - \frac{R}{2} \quad \text{and} \quad \mathbf{P}[s_i = \bar{c}_i] = \frac{R}{2}. \qquad (2.40)$$
Let $P_0$ be the probability distribution of cover-images. Due to message embedding at rate $R$, whose properties are given in Proposition 2.1, the cover-image moves from the probability distribution $P_0$ to a different probability distribution, denoted $P_R$. Thus the steganalysis problem (2.29) can be rewritten as follows:
$$\begin{cases} \mathcal{H}_0 : Z \sim P_0 \\ \mathcal{H}_1 : Z \sim P_R. \end{cases} \qquad (2.41)$$
Based on the assumption that all pixels are independent and identically distributed, the authors in [107] have developed two schemes depending on the knowledge of the probability distribution $P_0$. When the probability distribution $P_0$ is not known, they study the asymptotically optimal detector (as the number of pixels $N \rightarrow \infty$) given by Hoeffding's test [108]. When the probability distribution $P_0$ is known in advance, an optimal detector is given in the sense of Neyman-Pearson [20, Theorem 3.2.1]. Although the statistical detector proposed in [107] is interesting from a theoretical point of view, its performance in practice is quite moderate, due to the fact that the cover model used in [107] is not sufficiently accurate to describe a natural image.
The assumption of independence between pixels does not hold, since the image structure and the non-stationarity of the noise introduced during the image acquisition process are not taken into account. Some later works [109–114] rely on a simplistic local polynomial model, in which the pixel expectations differ from one another, in order to design a statistical detector, providing high detection performance compared with structural and WS detectors. Rather than assuming that all the pixels are i.i.d. as in [107], those works propose to model each cover-pixel by a Gaussian distribution, $c_i \sim \mathcal{N}(\mu_i, \sigma^2_i)$, in order to design the Likelihood Ratio Test (LRT), in which the Likelihood Ratio (LR) $\Lambda$ is given by
$$\Lambda(Z) \propto \sum_i \frac{1}{\sigma^2_i} (z_i - \bar{z}_i)(z_i - \mu_i). \qquad (2.42)$$
The LRT is the most powerful test in the sense of the Neyman-Pearson approach [20, Theorem 3.2.1]: it can meet simultaneously two criteria of optimality, warranting a prescribed false alarm probability and maximizing the correct detection probability. Moreover, a specificity of this approach is to show that the WS detector [102–104] is indeed a variant of the LRT, which justifies the good detection performance of such an ad hoc detector. Besides, hypothesis testing theory has also been extended to other, more complex embedding algorithms, e.g. LSB matching [115, 116].

2.5.2.4 Universal Classifiers

The three previous families of detectors are targeted at a specific steganographic algorithm, namely LSB replacement. In other words, these three families work under the assumption that the steganalyst knows in advance the embedding algorithm used by the steganographer. Such a scenario may not be realistic in a practical context. Universal classifiers are employed by steganalysts to work in a blind manner in order to discriminate stego-images from cover-images. Even though universal classifiers have lower performance than detectors targeted at a specific embedding algorithm, they are still important because of their flexibility and their ability to be adjusted to completely unknown steganographic methods. Typically, universal classifiers can be divided into two types: supervised and unsupervised. Supervised classification [45, 76, 117–120] has already been discussed in Section 2.3. While supervised classification requires knowing in advance the label of each image (i.e. cover-image or stego-image) and then builds a classifier based on the labeled images, unsupervised classification works in a scenario of unlabeled images and classifies them automatically without user interference. The accuracy of supervised classifiers is limited if the training data is not perfectly representative of the cover source, which may result in a mismatch problem [121]. Unsupervised classifiers try to overcome this problem of model mismatch by postponing the construction of a cover model until the classification stage. However, to the best of our knowledge, there is not yet a reliable method dealing with this scenario in steganalysis. In universal steganalysis, the design of features is of crucial importance. Features used for classification should be sensitive to the changes caused by embedding, yet insensitive to variations between covers, including some non-steganographic processing techniques. In general, the choice of suitable features and machine learning tools remains an open problem [121].

2.5.3 Steganalysis of Jsteg Algorithm

Like the steganalysis of LSB replacement in the spatial domain, existing methods for the steganalysis of the Jsteg algorithm can also be divided into four categories.
Structural detectors detect the presence of a secret message by exploiting the symmetry of the histogram of DCT coefficients in natural images, which is disturbed by the operation of Jsteg embedding. Some representative structural detectors are the Zhang and Ping (ZP) detector [122], the DCT coefficient-based detector [123], and the category attack [124, 125]. Furthermore, the power of structural detectors can be combined with the theoretically well-founded ML principle [99] or with the concept of the Zero Message Hypothesis (ZMH) [96]. These two approaches have been formally analyzed in [126]. Similarly to structural detectors for the steganalysis of the LSB replacement technique, the ZMH framework starts by choosing a feature vector $\mathbf{x}$ of the cover-image (e.g. the trace subsets in the case of the SPA method), establishes the change in the feature vector $\mathbf{x}$ caused by the embedding algorithm $\mathrm{Emb}$, then inverts the embedding effects to provide a hypothetical feature vector $\hat{\mathbf{x}}$:
$$\hat{\mathbf{x}} = \mathrm{Emb}^{-1}(\mathbf{x}^r, r), \qquad (2.43)$$
where $\mathbf{x}^r$ is the stego feature vector and $r$ is the change rate, defined as the ratio between the number of modified DCT coefficients and the maximum number of usable coefficients, thus $r = \frac{R}{2}$. Using cover assumptions and zero-message properties (e.g. the natural symmetry of the histogram of DCT coefficients), an appropriate penalty function $\mathrm{zmh}(\mathbf{x}) \geq 0$ is defined so that it returns zero on cover features and a non-zero value otherwise. Therefore, the change rate estimator $\hat{r}$ is defined as the solution of a minimization problem:
$$\hat{r} = \arg\min_{r \geq 0} \mathrm{zmh}(\hat{\mathbf{x}}) = \arg\min_{r \geq 0} \mathrm{zmh}\big(\mathrm{Emb}^{-1}(\mathbf{x}^r, r)\big). \qquad (2.44)$$
The minimization in (2.44) can be performed either analytically or numerically by implementing a one-dimensional gradient-descent search over $r$. The main interest of [126] is that all features proposed in [104, 122, 124, 125] have been revisited within the ZMH framework. The detector proposed in [123] has also been improved in [126] within the ML framework, using a more accurate model of DCT coefficients, namely the Generalized Cauchy distribution. It can be noted that although ZMH is only a heuristic framework and is less statistically rigorous than the ML framework, it has some important advantages in terms of low computational complexity and flexibility. Although the Jsteg algorithm replaces LSBs by secret message bits in the DCT domain, the mathematical foundation of WS detectors can also be applied to the steganalysis of Jsteg [127, 128]. Given a vector of AC coefficients $D = \{D_1, D_2, \ldots, D_N\}$, the WS-like detector is given by
$$\hat{R} \propto \sum_i w_i (D_i - \bar{D}_i) D_i. \qquad (2.45)$$
The difference between the WS detector in (2.36) and the one in (2.45) is that the local predictor of the cover AC coefficients is omitted, since the expected value of AC coefficients is zero in natural images. The weight $w_i$ of each coefficient $D_i$ is estimated from the coefficients at the same location as $D_i$ in the four adjacent blocks. More details are provided in [128]. Hypothesis testing theory was also applied to the steganalysis of the Jsteg algorithm. Relying on the Laplacian model of DCT coefficients, a statistical test was designed in [129]. However, a considerable loss of power was revealed, due to the fact that the Laplacian model is not accurate enough to characterize DCT coefficients.

2.6 Conclusion

This chapter has discussed the emerging field of digital image forensics, which consists of two main problems: image origin identification and image forgery detection.
To address these problems, the active forensic approach has been proposed: fingerprints are generated extrinsically and added to the digital image during the image formation process, thus creating a trustworthy digital camera. However, the active approach is of limited applicability due to the many strict constraints in its protocols. Therefore, the passive forensic approach has evolved considerably to help solve these problems in their entirety. This approach relies on intrinsic traces left by the digital camera in the image processing pipeline, and by the manipulations themselves, to gather forensic evidence of image origin or forgery. Some intrinsic fingerprints for the identification of the image source, such as lens aberration, PRNU, CFA pattern and interpolation, and JPEG compression, have been reviewed. The task of steganalysis, which aims to detect the mere presence of a secret message in a digital image, has also been discussed in this chapter. The state of the art has shown that most existing methods have been designed within the classification framework. The hypothesis testing framework has been exploited to a limited extent, although it offers many advantages, namely that the statistical performance of detectors can be analytically established and a prescribed false alarm probability can be guaranteed. Besides, existing methods are designed using simplistic image models, which results in overall poor detection performance. This thesis focuses on applying hypothesis testing theory in digital image forensics based on an accurate image model, which is established by modeling the main steps of the image processing pipeline. These aspects will be discussed in the rest of the thesis.

Chapter 3
Overview on Statistical Modeling of Natural Images

Contents
3.1 Introduction
3.2 Spatial-Domain Image Model
3.2.1 Poisson-Gaussian and Heteroscedastic Noise Model
3.2.2 Non-Linear Signal-Dependent Noise Model
3.3 DCT Coefficient Model
3.3.1 First-Order Statistics of DCT Coefficients
3.3.2 Higher-Order Statistics of DCT Coefficients
3.4 Conclusion

3.1 Introduction

The application of hypothesis testing theory in digital image forensics requires an accurate statistical image model to achieve high detection performance. For instance, the PRNU-based image origin identification [30, 32] takes into account the various noise sources arising during image acquisition inside a digital camera, which provides an image model allowing the fingerprint to be extracted accurately for source identification. An inaccurate image model results in poor detection performance, as in the case of the statistical detectors [107, 129]. Therefore, this chapter reviews the state of the art of statistical modeling of natural images. Statistical image modeling can be performed either in the spatial domain or in the DCT domain. The chapter is organized as follows. Section 3.2 analyzes noise statistics in the spatial domain and presents some dominant image models widely used in image processing. Section 3.3 discusses empirical statistical models of DCT coefficients. Finally, Section 3.4 concludes the chapter.

3.2 Spatial-Domain Image Model

In this section, we adopt the representation of an arbitrary image $Z$ as a column vector of length $N = N_r \times N_c$.
The representation as a two-dimensional matrix is of no interest for the study of statistical noise properties. The index of the color channel is omitted for simplicity. Due to the stochastic nature of noise, a pixel is regarded as a random variable. Generally, the random variable z_i, i ∈ I, can be decomposed as

$z_i = \mu_{z_i} + \eta_{z_i}$,   (3.1)

where I = {1, ..., N} denotes the set of pixel indices, μ_{z_i} denotes the expectation of the pixel z_i in the absence of noise, and η_{z_i} accounts for all noise sources that interfere with the original signal. By convention, μ_X and σ²_X denote respectively the expectation and variance of a random variable X. Here, the expectation μ_{z_i} is considered deterministic and will not be modeled. However, the expectations differ from each other due to the heterogeneity of a natural image. From (3.1), it is easily seen that the variance of the noise η_{z_i} is equal to the variance of the pixel z_i, i.e. σ²_{z_i} = σ²_{η_{z_i}}.

Several models have been proposed in the literature for the noise η_{z_i} in an uncompressed image. They can be classified into two groups: signal-independent and signal-dependent noise models. While signal-independent noise models assume the stationarity of noise over the whole image, regardless of the original pixel intensity, signal-dependent noise models take into account the dependence of the noise variance on the original pixel intensity. A typical example of signal-independent noise is Additive White Gaussian Noise (AWGN). Signal-dependent noise models include Poisson noise or film-grain noise [130], Poisson-Gaussian noise [131, 132], the heteroscedastic noise model [133, 134], and the non-linear noise model [135]. Although the AWGN model is widely adopted in image processing because of its simplicity, it ignores the contribution of Poisson noise to the image acquisition chain, which is present in any image acquired by a digital camera. Noise sources in a natural image are inherently signal-dependent; therefore, a signal-dependent noise model is preferable for further applications. Since our work mainly focuses on signal-dependent noise, only the group of signal-dependent noise models is discussed in this section.

3.2.1 Poisson-Gaussian and Heteroscedastic Noise Model

The study of noise statistics requires taking into account the impact of Poisson noise related to the stochastic nature of the photon-counting process and of the dark current [131–134, 136]. Let ξ_i denote the number of collected electrons associated with the pixel z_i. The number of collected electrons ξ_i follows a Poisson distribution with mean λ_i and variance λ_i:

$\xi_i \sim \mathcal{P}(\lambda_i)$.   (3.2)

This Poisson noise results in the dependence of the noise variance on the original pixel intensity. The number of collected electrons is further degraded by the AWGN read-out noise η_r with variance ω². Therefore, the RAW image pixel recorded by the image sensor can be defined as [136]

$z_i = a \cdot (\xi_i + \eta_r)$,   (3.3)

where a is the analog gain controlled by the ISO sensitivity. This leads to the statistical distribution of the RAW pixel z_i:

$z_i \sim a \cdot \big[\mathcal{P}(\lambda_i) + \mathcal{N}(0, \omega^2)\big]$.   (3.4)

This model is referred to as the Poisson-Gaussian noise model [131, 132]. One interesting property of this model is the linear relation between a pixel's expectation and its variance.
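As a quick numerical illustration of this property, a sketch under the assumptions of model (3.4) with arbitrary parameter values (gain a, read-out standard deviation ω and a range of electron counts λ chosen purely for the example) can simulate pixels at several intensity levels and check that the sample variance grows linearly with the sample mean:

    import numpy as np

    rng = np.random.default_rng(0)
    a, omega = 0.04, 3.0                    # arbitrary analog gain and read-out std
    lambdas = np.linspace(50, 4000, 20)     # true mean numbers of collected electrons

    means, variances = [], []
    for lam in lambdas:
        # z = a * (Poisson(lam) + N(0, omega^2)), as in (3.4)
        z = a * (rng.poisson(lam, size=100_000) + rng.normal(0.0, omega, size=100_000))
        means.append(z.mean())
        variances.append(z.var())

    # Least-squares fit of sigma^2 = a*mu + b: the slope should be close to a
    # and the intercept close to b = a^2 * omega^2 (here 0.04^2 * 9 = 0.0144).
    slope, intercept = np.polyfit(means, variances, 1)
    print(slope, intercept)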
Taking the mathematical expectation and variance in (3.4), we obtain

$\mu_{z_i} = \mathrm{E}[z_i] = a \cdot \lambda_i$,   (3.5)
$\sigma^2_{z_i} = \mathrm{Var}[z_i] = a^2 \cdot (\lambda_i + \omega^2)$,   (3.6)

where E[X] and Var[X] denote respectively the mathematical expectation and variance of a random variable X. Consequently, the heteroscedastic relation is derived as

$\sigma^2_{z_i} = a \cdot \mu_{z_i} + b$,   (3.7)

where b = a²ω². In some image sensors, a base pedestal parameter p₀ may be added to the collected electrons ξ_i to constitute an offset-from-zero of the output pixel [133]:

$z_i \sim a \cdot \big[p_0 + \mathcal{P}(\lambda_i - p_0) + \mathcal{N}(0, \omega^2)\big]$.   (3.8)

Hence, the parameter b is now given by b = a²ω² − a²p₀; the parameter b can therefore be negative when p₀ > ω². To facilitate the application of this signal-dependent noise model, some works [133, 134] have approximated the Poisson distribution by a Gaussian distribution, by virtue of the large number of collected electrons:

$\mathcal{P}(\lambda_i) \approx \mathcal{N}(\lambda_i, \lambda_i)$.   (3.9)

In fact, for λ_i ≥ 50 the Gaussian approximation is already very accurate [133], while the full-well capacity is largely above 100000 electrons. Finally, the statistical distribution of the RAW pixel z_i can be approximated as

$z_i \sim \mathcal{N}(\mu_{z_i},\, a \cdot \mu_{z_i} + b)$.   (3.10)

This model is referred to as the heteroscedastic noise model in [134]. The term "heteroscedasticity" means that each pixel exhibits a different variability from the others. Both the Poisson-Gaussian and the heteroscedastic noise model characterize a RAW image more accurately than the conventional AWGN model, but they do not yet take non-linear post-acquisition operations into account. Therefore, they are not appropriate for modeling a TIFF or JPEG image. Besides, it should be noted that the Poisson-Gaussian and heteroscedastic noise models assume that the effect of PRNU is negligible, namely that all the pixels respond uniformly to the incident light. The very small variation in each pixel's response does not strongly affect its statistical distribution [136].

3.2.2 Non-Linear Signal-Dependent Noise Model

To establish a statistical model of a natural image in TIFF or JPEG format, it is necessary to take into account the effects of the post-acquisition operations in the image processing pipeline. However, as discussed in Section 2.2, the whole image processing pipeline is far from simple, and some processing steps implemented in a digital camera are difficult to model parametrically. One approach is to consider the digital camera as a black box and to attempt to establish a relation between input irradiance and output intensity. This relation is called the Camera Response Function (CRF), which is described by a sophisticated non-linear function f_CRF(·) [137]. Gamma correction is perhaps the simplest model of the CRF, with only one parameter. Other parametric models have been proposed for the CRF, such as the polynomial model [137] or the generalized gamma curve model [138]. The pixel z_i can thus be formally defined as [135, 139]

$z_i = f_{\mathrm{CRF}}(E_i + \eta_{E_i})$,   (3.11)

where E_i denotes the image irradiance and η_{E_i} accounts for all signal-independent and signal-dependent noise sources. Although some methodologies have been proposed for the estimation of the CRF [50, 137, 140], it remains difficult to study noise statistics with such sophisticated models. To facilitate the study of noise statistics, the authors in [135] exploit the first-order Taylor series expansion

$z_i = f_{\mathrm{CRF}}(E_i + \eta_{E_i}) \approx f_{\mathrm{CRF}}(E_i) + f'_{\mathrm{CRF}}(E_i)\, \eta_{E_i}$,   (3.12)

where f'_CRF denotes the first derivative of the CRF f_CRF.
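A small numerical check of the linearization (3.12) is given below; it uses a gamma-type CRF f(E) = E^{1/2.2} taken purely as an example (the CRF, the irradiance level and the noise level are assumptions of the sketch, not values from [135]). It shows that the output noise standard deviation is well predicted by |f'(E)| times the input noise standard deviation when the noise is small compared to E, which is exactly the relation stated next in (3.13).

    import numpy as np

    rng = np.random.default_rng(1)
    gamma = 1.0 / 2.2
    f  = lambda e: e ** gamma                     # example CRF (gamma correction)
    fp = lambda e: gamma * e ** (gamma - 1.0)     # its first derivative

    E, sigma_E = 0.30, 0.01                       # irradiance and input noise std
    z = f(E + rng.normal(0.0, sigma_E, size=200_000))

    print(z.std())            # empirical output noise std
    print(fp(E) * sigma_E)    # first-order prediction f'(E) * sigma_E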
Therefore, a relation between the noise before and after transformation by the CRF is obtained:

$\eta_{z_i} = f'_{\mathrm{CRF}}(E_i)\, \eta_{E_i}$.   (3.13)

It can be noted that even when the noise before transformation is independent of the signal, the non-linear transformation f_CRF generates a dependence between a pixel's expectation and its variance. Based on experimental observations, the authors in [135] obtain the non-linear parametric model

$z_i = \mu_{z_i} + \mu_{z_i}^{\tilde{\gamma}} \cdot \eta_u$,   (3.14)

where η_u is zero-mean stationary Gaussian noise, η_u ∼ N(0, σ²_{η_u}), and γ̃ is an exponential parameter accounting for the non-linearity of the camera response. Taking the variance on both sides of (3.14), we obtain

$\sigma^2_{z_i} = \mu_{z_i}^{2\tilde{\gamma}} \cdot \mathrm{Var}[\eta_u] = \mu_{z_i}^{2\tilde{\gamma}} \cdot \sigma^2_{\eta_u}$.   (3.15)

In this model, the pixel z_i still follows a Gaussian distribution and the noise variance σ²_{η_{z_i}} depends non-linearly on the original pixel intensity μ_{z_i}:

$z_i \sim \mathcal{N}\big(\mu_{z_i},\, \mu_{z_i}^{2\tilde{\gamma}} \cdot \sigma^2_{\eta_u}\big)$.   (3.16)

This model can represent several kinds of noise, such as film-grain and Poisson noise, by changing the parameters γ̃ and σ²_{η_u} (e.g. γ̃ = 0.5).

3.3 DCT Coefficient Model

3.3.1 First-Order Statistics of DCT Coefficients

Apart from modeling an image in the spatial domain, many studies attempt to model it in the DCT domain, since the DCT is a fundamental operation in JPEG compression. The model of DCT coefficients has been studied extensively in the literature. However, a majority of DCT coefficient models have been proposed without any mathematical foundation or analysis. Many studies focus on comparing the empirical data with a variety of popular statistical models by conducting goodness-of-fit (GOF) tests, e.g. the Kolmogorov-Smirnov (KS) or χ² test. The Gaussian model for DCT coefficients was first conjectured in [141]. The Laplacian model was verified in [142] by performing the KS test; this Laplacian model remains a dominant choice in image processing because of its simplicity and relative accuracy. Other possible models, such as the Gaussian mixture [143] and the Cauchy model [144], were also proposed. In order to model the DCT coefficients more accurately, the previous models were extended to generalized versions, including the Generalized Gaussian (GG) [145] and the Generalized Gamma (GΓ) [146] models. It has recently been reported in [146] that the GΓ model outperforms the Laplacian and GG models. Far from providing a mathematical foundation of the DCT coefficient model, these empirical models were only verified using GOF tests on a few standard images. Thus, they cannot guarantee the accuracy of the chosen model for a wide range of images, which leads to a lack of robustness.

The first mathematical analysis of DCT coefficients is given in [147]. It relies on a doubly stochastic model combining the statistics of DCT coefficients in a block whose variance is constant with the variability of the block variance across a natural image. However, this analysis is incomplete, due to the lack of mathematical justification for the block variance model. Nevertheless, it has opened the way for further improvements.
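For completeness, the kind of empirical fit discussed above can be reproduced in a few lines: compute the 8×8 block DCT of a grayscale image, collect one AC subband, and fit a Laplacian by maximum likelihood (the ML location is the median and the ML scale is the mean absolute deviation from it). This is only a sketch: the image loading step is left as an assumption (any 2-D grayscale array works), and the subband choice (u, v) = (0, 1) is arbitrary.

    import numpy as np
    from scipy.fftpack import dct

    def block_dct_subband(img, u=0, v=1, B=8):
        # 2-D type-II DCT (norm='ortho') of each BxB block of a grayscale image,
        # returning the coefficients of one AC subband (u, v) as a flat array.
        x = np.asarray(img, dtype=float)
        h, w = (x.shape[0] // B) * B, (x.shape[1] // B) * B
        blocks = x[:h, :w].reshape(h // B, B, w // B, B).swapaxes(1, 2)
        coeffs = dct(dct(blocks, axis=-1, norm='ortho'), axis=-2, norm='ortho')
        return coeffs[..., u, v].ravel()

    # img = ...                                   # assumed: a grayscale image array
    # ac = block_dct_subband(img, u=0, v=1)
    # b_ml = np.mean(np.abs(ac - np.median(ac)))  # ML scale of a Laplacian fit
    # A KS test (scipy.stats.kstest against scipy.stats.laplace.cdf) then
    # quantifies the goodness of fit, in the spirit of [142].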
We now discuss this mathematical foundation in more detail. Let I denote an AC coefficient and σ²_blk the block variance. The DC coefficient is not addressed in this work [147], and the frequency index is omitted for the sake of clarity. Using conditional probability, the doubly stochastic model is given by

$f_I(x) = \int_0^{\infty} f_{I \mid \sigma^2_{\mathrm{blk}}}(x \mid t)\, f_{\sigma^2_{\mathrm{blk}}}(t)\, \mathrm{d}t, \quad x \in \mathbb{R}$,   (3.17)

where f_X(x) denotes the probability density function (pdf) of a random variable X. This doubly stochastic model can be regarded as an infinite mixture of Gaussian distributions [148, 149]. From the definition of DCT coefficients in (2.11), each DCT coefficient is a weighted sum of random variables. If the block variance σ²_blk is constant, the AC coefficient I can be approximated as a zero-mean Gaussian random variable by the Central Limit Theorem (CLT):

$f_{I \mid \sigma^2_{\mathrm{blk}}}(x \mid t) = \frac{1}{\sqrt{2\pi t}} \exp\Big(-\frac{x^2}{2t}\Big)$.   (3.18)

Even though the pixels are spatially correlated within an 8 × 8 block, due to the demosaicing algorithms implemented in a digital camera, the CLT can still be used for the Gaussian approximation of a sum of correlated random variables [150]. It remains to find the pdf of σ²_blk in order to derive the final pdf of the AC coefficient I. To this end, it was reported in [147] that, from experimental observations, the block variance σ²_blk can be modeled by an exponential or a half-Gaussian distribution. These two distributions lead to a Laplacian distribution for the DCT coefficient I [147]. However, as stated above, because the pdf of the block variance σ²_blk is not mathematically justified, this mathematical framework remains incomplete.
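The link between an exponentially distributed block variance and a Laplacian coefficient can be checked numerically. The sketch below (with an arbitrary mean block variance) draws σ²_blk from an exponential law and then I given σ²_blk from the conditional Gaussian (3.18); the resulting marginal matches a Laplacian of scale sqrt(E[σ²_blk]/2) very closely.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    mean_var = 25.0                                        # E[sigma^2_blk], arbitrary
    var_blk = rng.exponential(mean_var, size=500_000)      # sigma^2_blk ~ Exponential
    coeff = rng.normal(0.0, np.sqrt(var_blk))              # I | sigma^2_blk ~ N(0, sigma^2_blk)

    # The marginal of this Gaussian scale mixture is Laplacian with scale
    # sqrt(mean_var / 2); a KS test confirms the excellent fit.
    ks = stats.kstest(coeff, stats.laplace(scale=np.sqrt(mean_var / 2.0)).cdf)
    print(ks.statistic)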
3.3.2 Higher-Order Statistics of DCT Coefficients

The above discussion only considers the first-order statistics (i.e. the histogram) of DCT coefficients: the DCT coefficients at the same frequency are collected and treated separately. An implicit assumption in this procedure is that the DCT coefficients at a given frequency are i.i.d. realizations of a random variable. However, this is not always true in a natural image, because DCT coefficients exhibit dependencies (or correlations). There are two fundamental kinds of correlation between DCT coefficients [151], which have been successfully exploited in several applications [126, 151, 152]:

1. Intra-block correlation: a well-known feature of DCT coefficients in a natural image is that the magnitudes of AC coefficients decrease as the frequency increases along the zig-zag order. This correlation reflects the dependence between DCT coefficients within the same 8 × 8 block. Typically, this correlation is weak, since coefficients at different frequencies correspond to different basis functions.

2. Inter-block correlation: although the DCT basis provides good decorrelation, the resulting coefficients remain slightly correlated with their neighbors at the same frequency. We refer to this kind of correlation as inter-block correlation.

In general, the correlation between DCT coefficients can be captured by an adjacency matrix [126].

3.4 Conclusion

This chapter reviews some statistical image models in the spatial domain and in the DCT domain. In the spatial domain, the two groups of signal-independent and signal-dependent noise models are discussed. From the above statistical analysis, we can draw an important insight: noise in natural images is inherently signal-dependent. In the DCT domain, some empirical models of DCT coefficients are presented. However, most DCT coefficient models are given without mathematical justification. It is still necessary to establish an accurate image model that can be exploited in further applications.

Part II: Statistical Modeling and Estimation for Natural Images from RAW Format to JPEG Format

Chapter 4: Statistical Image Modeling and Estimation of Model Parameters

Contents
4.1 Introduction
4.2 Statistical Modeling of RAW Images
  4.2.1 Heteroscedastic Noise Model
  4.2.2 Estimation of Parameters (a, b) in the Heteroscedastic Noise Model
    4.2.2.1 WLS Estimation
    4.2.2.2 Statistical Properties of WLS Estimates
4.3 Statistical Modeling of TIFF Images
  4.3.1 Generalized Noise Model
  4.3.2 Estimation of Parameters (ã, b̃) in the Generalized Noise Model
    4.3.2.1 Edge Detection and Image Segmentation
    4.3.2.2 Maximum Likelihood Estimation
  4.3.3 Application to Image Denoising
  4.3.4 Numerical Experiments
4.4 Statistical Modeling in DCT Domain
  4.4.1 Statistical Model of Quantized DCT Coefficients
    4.4.1.1 Statistical Model of Block Variance and Unquantized DCT Coefficients
    4.4.1.2 Impact of Quantization
  4.4.2 Estimation of Parameters (α, β) from Unquantized DCT Coefficients
  4.4.3 Estimation of Parameters (α, β) from Quantized DCT Coefficients
  4.4.4 Numerical Experiments
4.5 Conclusion

4.1 Introduction

Chapter 3 has presented an overview of the statistical modeling of natural images in the spatial domain and in the DCT domain. Most existing models in the literature were provided empirically. The goal of this chapter is to establish a mathematical framework for studying the statistical properties of natural images along the image processing pipeline of a digital camera. The study is performed in the spatial domain and in the DCT domain. In the spatial domain, the heteroscedastic noise model is first recalled, and a method for estimating the parameters of the heteroscedastic noise model following the Weighted Least Squares (WLS) approach is proposed in Section 4.2. The analytic expression of the WLS estimates allows us to study their statistical properties, which is of importance for designing statistical tests. The WLS estimation of the parameters (a, b) has been presented in [134]. Next, Section 4.3 studies noise statistics in a TIFF image, starting from the heteroscedastic noise model and taking into account the effect of gamma correction, which results in the generalized signal-dependent noise model. It is shown that the generalized noise model is also relevant for characterizing JPEG images with moderate-to-high quality factors (Q ≥ 70). This section also proposes a method that can accurately estimate the parameters of the generalized noise model from a single image. Numerical results on a large image database show the relevance of the proposed method.
The generalized noise model could be useful in many applications; a direct application to image denoising is proposed in this section. The foundation of the generalized noise model and the estimation of its parameters have been presented in [153]. Section 4.4 describes the mathematical framework for modeling the statistical distribution of DCT coefficients. To simplify the study, the approach is based on the main assumption that the pixels are identically distributed (but not necessarily independent) within an 8 × 8 block. Consequently, the statistical distribution of the block variance can be approximated, and the model of unquantized DCT coefficients is derived. Moreover, it is proposed to take the quantization operation into account to provide a final model of quantized DCT coefficients. The parameters of the DCT coefficient model can be estimated following the ML approach. Numerical results show that the proposed model outperforms existing models, including the Laplacian, GG, and GΓ models. Section 4.5 concludes the chapter. The foundation of the DCT coefficient model has been presented in [154].

4.2 Statistical Modeling of RAW Images

4.2.1 Heteroscedastic Noise Model

RAW image acquisition has been discussed in Section 2.2.1. Let Z = (z_i), i ∈ I, denote a RAW image acquired by the image sensor. Typically, the model of a RAW pixel consists of a Poissonian part, which accounts for the photon shot noise and the dark current, and a Gaussian part for the remaining stationary disturbances, e.g. read-out noise. For the sake of simplification, the Gaussian approximation of the Poisson distribution can be exploited because of the large number of collected electrons, which leads to the heteroscedastic noise model [133, 134]

$z_i \sim \mathcal{N}(\mu_i,\, a\mu_i + b)$,   (4.1)

where μ_i denotes the expectation of the pixel z_i. The heteroscedastic noise model, which expresses the noise variance as a linear function of the pixel's expectation, characterizes a RAW image more accurately than the conventional AWGN model. The heteroscedastic noise model (4.1) is illustrated in Figure 4.1.

[Figure 4.1: Scatter-plot of pixels' expectation and variance from a natural RAW image with ISO 200 captured by Nikon D70 and Nikon D200 cameras. The image is segmented into homogeneous segments. In each segment, the expectation and variance are calculated and the parameters (a, b) are estimated as proposed in Section 4.2.2. The dashed line is drawn using the estimated parameters (a, b). Only the red channel is used in this experiment.]

It is assumed that the noise corrupting each RAW pixel is statistically independent of that of the neighboring pixels [133, 136]. In this section it is also assumed, for the sake of simplification, that the phenomenon of clipping is absent from a natural RAW image, i.e. the probability that an observation z_i exceeds the boundaries 0 or B = 2^ν − 1 is negligible. More details about the phenomenon of clipping are given in [133, 155] and in Chapter 8. In practice, the PRNU weakly affects the parameter a in the heteroscedastic noise model (4.1). Nevertheless, in the problem of camera model identification, the PRNU is assumed to be negligible, i.e. the parameter a remains constant for every pixel.
4.2.2 Estimation of Parameters (a, b) in the Heteroscedastic Noise Model

The noise model parameters can be estimated from a single image or from multiple images. From a practical point of view, we mainly focus on estimation from a single image. Several methods have been proposed in the literature for estimating the parameters of signal-dependent noise models, see [133–135, 139, 156]. They rely on similar basic steps but differ in the details. The common methodology starts by obtaining local estimates of the noise variance and of the image content, then fits a curve to the resulting scatter-plot based on prior knowledge of the noise model. The existing methods face two main difficulties: the influence of image content and the spatial correlation of noise in a natural image. In fact, the homogeneous regions in which local expectations and variances are estimated are obtained by performing edge detection and image segmentation; however, the accuracy of those local estimates may be degraded by the presence of outliers (textures, details and edges) within the homogeneous regions. Moreover, because of the spatial correlation between pixels, the local estimates of the noise variance may be overestimated. Overall, these two difficulties may result in an inaccurate estimation of the noise parameters.

For the design of the subsequent tests, the parameters (a, b) should be estimated following the ML approach, and the statistical properties of the ML estimates should be established analytically. An interesting method is proposed in [133] for the ML estimation of the parameters (a, b). However, that method cannot provide an analytic expression of the ML estimates, due to the difficulty of solving the complicated system of partial derivatives. Therefore, the ML estimates are only obtained numerically, using the Nelder-Mead optimization method [157]. Although the ML estimates given by that method are relatively accurate, the method has three main drawbacks. First, the convergence of the maximization process and the sensitivity of the solution to the initial conditions have not been analyzed. Second, the Bayesian approach used in [133], with a fixed uniform prior, might be questionable in practice. Finally, it seems impossible to establish the statistical properties of the estimates.

This section proposes a method for estimating the parameters (a, b) from a single image. The proposed method relies on the same image segmentation technique as [133] in order to obtain local estimates in homogeneous regions. It is then based on the WLS approach, so as to take into account the heteroscedasticity and the statistical properties of the local estimates. One important advantage is that the WLS estimates can be given analytically, which allows us to study their statistical properties. Moreover, the WLS estimates are asymptotically equivalent to the ML estimates in large samples when the weights are consistently estimated, as explained in [158, 159].

4.2.2.1 WLS Estimation

The RAW image Z is first transformed into the wavelet domain and then segmented into K non-overlapping homogeneous segments, denoted S_k, of size n_k, k ∈ {1, ..., K}. The reader is referred to [133] for more details of the segmentation technique. In each segment S_k, the pixels are assumed to be i.i.d.; thus they have the same expectation and variance.
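To fix ideas before the formal derivation given in the equations below, the weighted fit at the heart of the method can be sketched as follows: given per-segment estimates (μ̂_k, σ̂²_k) and segment sizes n_k, the WLS estimate of (a, b) has the usual closed form of weighted linear regression, and since the weights depend on (a, b) they can be refreshed over a few iterations. The weight formula used here keeps only the dominant term of (4.9) and drops the wavelet-dependent correction; treat the whole function as an illustrative simplification, not the exact estimator of [134].

    import numpy as np

    def wls_fit_ab(mu_hat, var_hat, n_k, n_iter=5):
        # Weighted least-squares fit of var = a*mu + b from segment statistics.
        # Weights 1/s_k^2 are approximated by (n_k - 1) / (2 * (a*mu + b)^2),
        # the dominant term of the residual variance (simplification).
        mu_hat, var_hat = np.asarray(mu_hat, float), np.asarray(var_hat, float)
        a, b = np.polyfit(mu_hat, var_hat, 1)          # ordinary LS to start
        for _ in range(n_iter):
            w = (np.asarray(n_k) - 1) / (2.0 * np.maximum(a * mu_hat + b, 1e-12) ** 2)
            X = np.column_stack([mu_hat, np.ones_like(mu_hat)])
            # Closed-form WLS solution: (X^T W X)^{-1} X^T W y
            a, b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * var_hat))
        return a, b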
Let z^wapp_k = (z^wapp_{k,i}), i ∈ {1, ..., n_k}, and z^wdet_k = (z^wdet_{k,i}), i ∈ {1, ..., n_k}, be respectively the vectors of wavelet approximation coefficients and wavelet detail coefficients representing the segment S_k. Because the transformation is linear, the coefficients z^wapp_{k,i} and z^wdet_{k,i} also follow Gaussian distributions:

$z^{\mathrm{wapp}}_{k,i} \sim \mathcal{N}\big(\mu_k,\, \|\varphi\|_2^2\, \sigma_k^2\big)$,   (4.2)
$z^{\mathrm{wdet}}_{k,i} \sim \mathcal{N}\big(0,\, \sigma_k^2\big)$,   (4.3)

where μ_k denotes the expectation of all pixels in the segment S_k, σ²_k = aμ_k + b, and ϕ is the 2-D normalized wavelet scaling function. Hence, the ML estimates of the local expectation μ_k and local variance σ²_k are given by

$\hat{\mu}_k = \frac{1}{n_k} \sum_{i=1}^{n_k} z^{\mathrm{wapp}}_{k,i}$,   (4.4)
$\hat{\sigma}^2_k = \frac{1}{n_k - 1} \sum_{i=1}^{n_k} \big(z^{\mathrm{wdet}}_{k,i} - \bar{z}^{\mathrm{wdet}}_k\big)^2, \quad \text{with } \bar{z}^{\mathrm{wdet}}_k = \frac{1}{n_k} \sum_{i=1}^{n_k} z^{\mathrm{wdet}}_{k,i}$.   (4.5)

The estimate μ̂_k is unbiased and follows the Gaussian distribution

$\hat{\mu}_k \sim \mathcal{N}\Big(\mu_k,\, \frac{\|\varphi\|_2^2}{n_k}\, \sigma_k^2\Big)$,   (4.6)

while the estimate σ̂²_k follows a scaled chi-square distribution with n_k − 1 degrees of freedom. This distribution can also be accurately approximated by a Gaussian distribution for large n_k [160]:

$\hat{\sigma}^2_k \sim \mathcal{N}\Big(\sigma_k^2,\, \frac{2}{n_k - 1}\, \sigma_k^4\Big)$.   (4.7)

Figure 4.1 illustrates the scatter-plot of all the pairs {(μ̂_k, σ̂²_k)} extracted from real natural RAW images of Nikon D70 and Nikon D200 cameras. The parameters (a, b) are estimated by considering all the pairs {(μ̂_k, σ̂²_k)}, k = 1, ..., K, where the local variance σ̂²_k is treated as a heteroscedastic model of the local expectation μ̂_k. This model is formulated as follows:

$\hat{\sigma}^2_k = a\hat{\mu}_k + b + s_k \epsilon_k$,   (4.8)

where the ε_k are independent and identically distributed standard Gaussian variables and s_k is a function of the local mean μ_k. A direct calculation from (4.8) shows that

$s_k^2 = \mathrm{Var}\big[\hat{\sigma}^2_k\big] - \mathrm{Var}\big[a\hat{\mu}_k + b\big] = \frac{2}{n_k - 1}\sigma_k^4 - a^2 \frac{\|\varphi\|_2^2}{n_k}\sigma_k^2 = \frac{2}{n_k - 1}(a\mu_k + b)^2 - a^2 \frac{\|\varphi\|_2^2}{n_k}(a\mu_k + b)$.   (4.9)

Inférence d'invariants pour le model checking de systèmes paramétrés
Alain Mebsout

To cite this version : Alain Mebsout. Inférence d'invariants pour le model checking de systèmes paramétrés. Université Paris-Sud - Paris XI, 2014. French. HAL Id : tel-01073980, https://tel.archives-ouvertes.fr/tel-01073980. Submitted on 11 Oct 2014.

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Université Paris-Sud, École Doctorale d'Informatique, Laboratoire de Recherche en Informatique
Discipline : Informatique
Thèse de doctorat soutenue le 29 septembre 2014 par Alain Mebsout
Inférence d'Invariants pour le Model Checking de Systèmes Paramétrés
Directeur de thèse : M. Sylvain Conchon, Professeur (Université Paris-Sud)
Co-encadrante : Mme Fatiha Zaïdi, Maître de conférences (Université Paris-Sud)
Composition du jury :
Président du jury : M. Philippe Dague, Professeur (Université Paris-Sud)
Rapporteurs : M. Ahmed Bouajjani, Professeur (Université Paris Diderot) ; M. Silvio Ranise, Chercheur (Fondazione Bruno Kessler)
Examinateurs : M. Rémi Delmas, Ingénieur de recherche (ONERA) ; M. Alan Schmitt, Chargé de recherche (Inria Rennes)

À Magali.

Résumé

Cette thèse aborde le problème de la vérification automatique de systèmes paramétrés complexes.
Cette approche est importante car elle permet de garantir certaines propriétés sans connaître a priori le nombre de composants du système. On s'intéresse en particulier à la sûreté de ces systèmes et on traite le côté paramétré du problème avec des méthodes symboliques. Ces travaux s'inscrivent dans le cadre théorique du model checking modulo théories et ont donné lieu à un nouveau model checker : Cubicle. Une des contributions principales de cette thèse est une nouvelle technique pour inférer des invariants de manière automatique. Le processus de génération d'invariants est intégré à l'algorithme de model checking et permet de vérifier en pratique des systèmes hors de portée des approches symboliques traditionnelles. Une des applications principales de cet algorithme est l'analyse de sûreté paramétrée de protocoles de cohérence de cache de taille industrielle. Enfin, pour répondre au problème de la confiance placée dans le model checker, on présente deux techniques de certification de notre outil Cubicle utilisant la plate-forme Why3. La première consiste à générer des certificats dont la validité est évaluée de manière indépendante tandis que la seconde est une approche par vérification déductive du cœur de Cubicle.

Abstract

This thesis tackles the problem of automatically verifying complex parameterized systems. This approach is important because it can guarantee that some properties hold without knowing a priori the number of components in the system. We focus in particular on the safety of such systems and we handle the parameterized aspect with symbolic methods. This work is set in the theoretical framework of model checking modulo theories and resulted in a new model checker: Cubicle. One of the main contributions of this thesis is a novel technique for automatically inferring invariants. The process of invariant generation is integrated with the model checking algorithm and allows the verification in practice of systems which are out of reach for traditional symbolic approaches. One successful application of this algorithm is the safety analysis of industrial-size parameterized cache coherence protocols. Finally, to address the problem of trusting the answer given by the model checker, we present two techniques for certifying our tool Cubicle based on the framework Why3. The first consists in producing certificates whose validity can be assessed independently, while the second is an approach by deductive verification of the heart of Cubicle.

Remerciements

Mes remerciements vont en tout premier lieu à mes encadrants de thèse, qui m'ont accompagné tout au long de ces années. Ils ont su orienter mes recherches tout en m'accordant une liberté et une autonomie très appréciables. Les discussions aussi bien professionnelles que personnelles ont été très enrichissantes et leur bonne humeur a contribué à faire de ces trois (et cinq) années un plaisir. Merci à Fatiha pour sa disponibilité et son soutien. Je tiens particulièrement à remercier Sylvain d'avoir cru en moi et pour son optimisme constant. Il m'apparaît aujourd'hui que cette dernière qualité est essentielle à tout bon chercheur et j'espère en avoir tiré les enseignements. Merci également à mes rapporteurs, Ahmed Bouajjani et Silvio Ranise, d'avoir accepté de relire une version préliminaire de ce document et pour leurs corrections et remarques pertinentes. Je remercie aussi les autres examinateurs, Philippe Dague, Rémi Delmas et Alan Schmitt, d'avoir accepté de faire partie de mon jury.
Le bon déroulement de cette thèse doit aussi beaucoup aux membres de l’équipe VALS (anciennement Proval puis Toccata). Ils ont su maintenir une ambiance chaleureuse et amicale tout au long des divers changements de noms, déménagements et remaniements politiques. Les galettes des rois, les gâteaux, les paris footballistiques, et les discussions du coin café resteront sans aucun doute gravés dans ma mémoire. Pour tout cela je les en remercie. Je tiens à remercier tout particulièrement Mohamed pour avoir partagé un bureau avec moi pendant près de cinq années. J’espère avoir été un aussi bon co-bureau qu’il l’a été, en tout cas les collaborations et les discussions autour d’Alt-Ergo furent très fructueuses. Merci également à Régine pour son aide dans la vie de doctorant d’un laboratoire de recherche et pour sa capacité à transformer la plus intimidante des procédures administratives en une simple tâche. Merci à Jean-Christophe, Guillaume, François, Andrei, Évelyne et Romain pour leurs nombreux conseils et aides techniques. Merci aussi à Sava Krstic et Amit Goel d’Intel pour ` leur coopération scientique autour de Cubicle. Enn un grand merci à toute ma famille pour leur soutien moral tout au long de ma thèse. Je souhaite remercier en particulier mes parents qui ont cru en moi et m’ont soutenu dans mes choix. Merci également à mes sœurs pour leurs nombreux encouragements. Pour nir je tiens à remercier ma moitié, Magali, pour son soutien sans faille et pour son aide. Rien de ceci n’aurait été possible dans elle. viiTable des matières Table des matières 1 Table des figures 5 Liste des Algorithmes 7 1 Introduction 9 1.1 Model checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 1.1.1 Systèmes nis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 1.1.2 Systèmes innis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 1.2 Model checking de systèmes paramétrés . . . . . . . . . . . . . . . . . . . 14 1.2.1 Méthodes incomplètes . . . . . . . . . . . . . . . . . . . . . . . . . 14 1.2.2 Fragments décidables . . . . . . . . . . . . . . . . . . . . . . . . . . 15 1.3 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 1.4 Plan de la thèse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 2 Le model checker Cubicle 19 2.1 Langage d’entrée . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 2.2 Exemples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 2.2.1 Algorithme d’exclusion mutuelle . . . . . . . . . . . . . . . . . . . 23 2.2.2 Généralisation de l’algorithme de Dekker . . . . . . . . . . . . . . 25 2.2.3 Boulangerie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 2.2.4 Cohérence de Cache . . . . . . . . . . . . . . . . . . . . . . . . . . 30 2.3 Non-atomicité . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 2.4 Logique multi-sortée et systèmes à tableaux . . . . . . . . . . . . . . . . . 36 2.4.1 Syntaxe des formules logiques . . . . . . . . . . . . . . . . . . . . . 37 2.4.2 Sémantique de la logique . . . . . . . . . . . . . . . . . . . . . . . . 39 2.4.3 Systèmes de transition à tableaux . . . . . . . . . . . . . . . . . . . 41 2.5 Sémantique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 2.5.1 Sémantique opérationnelle . . . . . . . . . . . . . . . . . . . . . . . 44 2.5.2 Atteignabilité . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
46 2.5.3 Un interpréteur de systèmes à tableaux . . . . . . . . . . . . . . . . 47 1TABLE DES MATIÈRES 3 Cadre théorique : model checking modulo théories 49 3.1 Analyse de sûreté des systèmes à tableaux . . . . . . . . . . . . . . . . . . 50 3.1.1 Atteignabilité par chaînage arrière . . . . . . . . . . . . . . . . . . 50 3.1.2 Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 3.1.3 Eectivité . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 3.2 Terminaison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 3.2.1 Indécidabilité de l’atteignabilité . . . . . . . . . . . . . . . . . . . . 57 3.2.2 Conditions pour la terminaison . . . . . . . . . . . . . . . . . . . . 59 3.2.3 Exemples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 3.3 Gardes universelles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 3.3.1 Travaux connexes . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 3.3.2 Calcul de pré-image approximé . . . . . . . . . . . . . . . . . . . . 69 3.3.3 Exemples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 3.3.4 Relation avec le modèle de panne franche et la relativisation des quanticateurs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 3.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 3.4.1 Exemples sans existence d’un bel ordre . . . . . . . . . . . . . . . . 74 3.4.2 Résumé . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 3.4.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 4 Optimisations et implémentation 79 4.1 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 4.2 Optimisations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 4.2.1 Appels au solveur SMT . . . . . . . . . . . . . . . . . . . . . . . . . 82 4.2.2 Tests ensemblistes . . . . . . . . . . . . . . . . . . . . . . . . . . . 85 4.2.3 Instantiation ecace . . . . . . . . . . . . . . . . . . . . . . . . . . 88 4.3 Suppressions a posteriori . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 4.4 Sous-typage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 4.5 Exploration parallèle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 4.6 Résultats et conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 5 Inférence d’invariants 105 5.1 Atteignabilité approximée avec retour en arrière . . . . . . . . . . . . . . . 107 5.1.1 Illustration sur un exemple . . . . . . . . . . . . . . . . . . . . . . 107 5.1.2 Algorithme abstrait . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 5.1.3 Algorithme complet pour les systèmes à tableaux . . . . . . . . . . 113 5.2 Heuristiques et détails d’implémentation . . . . . . . . . . . . . . . . . . . 117 5.2.1 Oracle : exploration avant bornée . . . . . . . . . . . . . . . . . . . 117 5.2.2 Extraction des candidats invariants . . . . . . . . . . . . . . . . . . 119 5.2.3 Retour en arrière . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 2TABLE DES MATIÈRES 5.2.4 Invariants numériques . . . . . . . . . . . . . . . . . . . . . . . . . 123 5.2.5 Implémentation dans Cubicle . . . . . . . . . . . . . . . . . . . . . 124 5.3 Évaluation expérimentale . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 5.4 Étude de cas : Le protocole FLASH . . . . . . . . . . . . . . . . . . . . . . . 130 5.4.1 Description du protocole FLASH . . . . 
. . . . . . . . . . . . . . . 131 5.4.2 Vérication du FLASH : État de l’art . . . . . . . . . . . . . . . . . 133 5.4.3 Modélisation dans Cubicle . . . . . . . . . . . . . . . . . . . . . . . 135 5.4.4 Résultats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 5.5 Travaux connexes sur les invariants . . . . . . . . . . . . . . . . . . . . . . 139 5.5.1 Génération d’invariants . . . . . . . . . . . . . . . . . . . . . . . . 139 5.5.2 Cutos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 5.5.3 Abstraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 5.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 6 Certification 145 6.1 Techniques de certication d’outils de vérication . . . . . . . . . . . . . . 145 6.2 La plateforme de vérication déductive Why3 . . . . . . . . . . . . . . . . 147 6.3 Production de certicats . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 6.3.1 Invariants inductifs pour l’atteignabilité arrière . . . . . . . . . . . 147 6.3.2 Invariants inductifs et BRAB . . . . . . . . . . . . . . . . . . . . . . 152 6.4 Preuve dans Why3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156 6.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158 7 Conclusion et perspectives 161 7.1 Résumé des contributions et conclusion . . . . . . . . . . . . . . . . . . . . 161 7.2 Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162 A Syntaxe et typage des programmes Cubicle 165 A.1 Syntaxe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 A.2 Typage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 Bibliographie 173 Index 187 3Table des figures 2.1 Algorithme d’exclusion mutuelle . . . . . . . . . . . . . . . . . . . . . . . . 24 2.2 Code Cubicle du mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 2.3 Graphe de Dekker pour le processus i . . . . . . . . . . . . . . . . . . . . . 26 2.4 Code Cubicle de l’algorithme de Dekker . . . . . . . . . . . . . . . . . . . . 27 2.5 Code Cubicle de l’algorithme de la boulangerie de Lamport . . . . . . . . . 29 2.6 Diagramme d’état du protocole German-esque . . . . . . . . . . . . . . . . 31 2.7 Code Cubicle du protocole de cohérence de cache German-esque . . . . . . 32 2.8 Évaluation non atomique des conditions globales par un processus i . . . . 34 2.9 Encodage de l’évaluation non atomique de la condition globale ∀j , i. c(j) 35 2.10 Grammaire de la logique . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 3.1 Machine de Minsky à deux compteurs . . . . . . . . . . . . . . . . . . . . 58 3.2 Traduction d’un programme et d’une machine de Minsky dans un système à tableaux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 3.3 Dénition des congurations M et N . . . . . . . . . . . . . . . . . . . . . 61 3.4 Plongement de M vers N . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 3.5 Séquence nie d’idéaux inclus calculée par l’algorithme 4 . . . . . . . . . . 64 3.6 Trace fallacieuse pour un système avec gardes universelles . . . . . . . . . 71 3.7 Transformation avec modèle de panne franche . . . . . . . . . . . . . . . . 72 3.8 Trace fallacieuse pour la transformation avec modèle de panne franche . . 73 3.9 Relations entre les diérentes approches pour les gardes universelles . . . 
74 3.10 Séquence innie de congurations non comparables pour l’encodage des machines de Minksy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 3.11 Séquence innie de congurations non comparables pour une relation binaire quelconque . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 3.12 Diérences de restrictions entre la théorie du model checking modulo théories et Cubicle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 4.1 Architecture de Cubicle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81 4.2 Benchmarks et statistiques pour une implémentation naïve . . . . . . . . . 83 4.3 Arbre préxe représentant V . . . . . . . . . . . . . . . . . . . . . . . . . . 87 4.4 Benchmarks pour tests ensemblistes . . . . . . . . . . . . . . . . . . . . . . 87 5TABLE DES FIGURES 4.5 Benchmarks pour l’instantiation ecace . . . . . . . . . . . . . . . . . . . 91 4.6 Graphes d’atteignabilité arrière pour diérentes stratégies d’exploration sur l’exemple Dijkstra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 4.7 Suppression a posteriori . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 4.8 Préservation de l’invariant après suppression d’un nœud . . . . . . . . . . 95 4.9 Benchmarks pour la suppression a posteriori . . . . . . . . . . . . . . . . . 95 4.10 Système Cubicle annoté avec les contraintes de sous-typage . . . . . . . . 97 4.11 Benchmarks pour l’analyse de sous-typage . . . . . . . . . . . . . . . . . . 98 4.12 Une mauvaise synchronisation des tests de subsomption eectués en parallèle 100 4.13 Utilisation CPU pour les versions séquentielle et parallèle de Cubicle . . . 101 4.14 Benchmarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 5.1 Système de transition à tabeaux du protocole German-esque . . . . . . . . 108 5.2 Exécution partielle de BRAB sur le protocole German-esque . . . . . . . . 110 5.3 Transition sur un état avec variable non-initialisée . . . . . . . . . . . . . . 119 5.4 Apprentissage à partir d’une exploration supplémentaire avant redémarrage 122 5.5 Architecture de Cubicle avec BRAB . . . . . . . . . . . . . . . . . . . . . . 125 5.6 Résultats de BRAB sur un ensemble de benchmarks . . . . . . . . . . . . . 128 5.7 Architecture FLASH d’une machine et d’un nœud . . . . . . . . . . . . . . 130 5.8 Structure d’un message dans le protocole FLASH . . . . . . . . . . . . . . . 132 5.9 Description des transitions du protocole FLASH . . . . . . . . . . . . . . . 134 5.10 Résultats pour la vérication du protocole FLASH avec Cubicle . . . . . . 138 5.11 Protocoles de cohérence de cache hiérarchiques . . . . . . . . . . . . . . . 143 6.1 Invariants inductifs calculés par des analyses d’atteignabilité avant et arrière 148 6.2 Vérication du certicat Why3 de German-esque par diérents prouveurs automatiques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 6.3 Invariant inductif calculé par BRAB . . . . . . . . . . . . . . . . . . . . . . 154 6.4 Vérication par diérents prouveurs automatiques du certicat (Why3) de German-esque généré par BRAB . . . . . . . . . . . . . . . . . . . . . . . . 154 6.5 Vérication de certicats sur un ensemble de benchmarks . . . . . . . . . 155 6.6 Informations sur le développent Why3 . . . . . . . . . . . . . . . . . . . . 158 6.7 Aperçu d’une technique d’extraction . . . . . . . . . . . . . . . . . . . . . 159 A.1 Grammaire des chiers Cubicle . . . . . . . . . . . . . . . . . . . . . . . . 167 A.2 Règles de typage des termes . . . . . 
. . . . . . . . . . . . . . . . . . . . . 169 A.3 Règles de typage des formules . . . . . . . . . . . . . . . . . . . . . . . . . 170 A.4 Règles de typage des actions des transitions . . . . . . . . . . . . . . . . . 170 A.5 Vérication de la bonne formation des types . . . . . . . . . . . . . . . . . 171 A.6 Règles de typage des déclarations . . . . . . . . . . . . . . . . . . . . . . . 171 6Liste des Algorithmes 1 Code de Dekker pour le processus i . . . . . . . . . . . . . . . . . . . . . . . 26 2 Pseudo-code de la boulangerie de Lamport pour le processus i . . . . . . . . 29 3 Interpréteur d’un système à tableaux . . . . . . . . . . . . . . . . . . . . . . 47 4 Analyse d’atteignabilité par chaînage arrière . . . . . . . . . . . . . . . . . . 53 5 Test de satisabilité naïf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 6 Analyse d’atteignabilité abstraite avec approximations et retour en arrière (BRAB) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 7 Analyse d’atteignabilité avec approximations et retour en arrière (BRAB) . . 115 8 Oracle : Exploration avant limitée en profondeur . . . . . . . . . . . . . . . 116 71 Introduction Sommaire 1.1 Model checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 1.1.1 Systèmes nis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 1.1.2 Systèmes innis . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 1.2 Model checking de systèmes paramétrés . . . . . . . . . . . . . . . 14 1.2.1 Méthodes incomplètes . . . . . . . . . . . . . . . . . . . . . . . 14 1.2.2 Fragments décidables . . . . . . . . . . . . . . . . . . . . . . . . 15 1.3 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 1.4 Plan de la thèse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 Les systèmes informatiques sont aujourd’hui omniprésents, aussi bien dans les objets anodins de la vie courante que dans les systèmes critiques comme les contrôleurs automatiques utilisés par l’industrie aéronautique. Tous ces systèmes sont généralement très complexes, il est donc particulièrement dicile d’en construire qui ne comportent pas d’erreurs. On peut notamment constater le succès récent des architectures multi-cœurs, multi-processeurs et distribuées pour les serveurs haut de gamme mais aussi pour les terminaux mobiles personnels. Un des composants les plus complexes de telles machines est leur protocole de cohérence de cache. En vaut pour preuve le célèbre dicton : « Il y a seulement deux choses compliquées en informatique : l’invalidation des caches et nommer les choses. » — Phil Karlton En eet pour fonctionner de manière optimale chaque composant qui partage la mémoire (processeur, cœur, etc.) possède son propre cache (une zone mémoire temporaire) lui permettant de conserver les données auxquelles il a récemment accédé. Le protocole en question assure que tous les caches du système se trouvent dans un état cohérent, ce qui en 9Chapitre 1 Introduction fait un élément vital. Les méthodes les plus courantes pour garantir la qualité des systèmes informatiques sont le test et la simulation. Cependant, ces techniques sont très peu adaptées aux programmes concurrents comme les protocoles de cohérence de cache. Pour garantir une ecacité optimale, ces protocoles sont souvent implantés au niveau matériel et fonctionnent par échanges de messages, de manière entièrement asynchrone. 
Le moment auquel un message arrivera ou un processeur accédera à la mémoire centrale est donc totalement imprévisible [36]. Pour concevoir de tels systèmes on doit alors considérer de nombreuses « courses critiques » (ou race conditions en anglais), c’est-à-dire des comportements qui dépendent de l’ordre d’exécution ou d’arrivée des messages. Ces scénarios sont par ailleurs très compliqués, faisant intervenir plusieurs dizaines d’échanges de messages. Ainsi, la rareté de leurs apparitions fait qu’il est très dicile de reproduire ces comportements par des méthodes de test et de simulation. Une réponse à ce problème est l’utilisation de méthodes formelles pouvant garantir certaines propriétés d’un système par des arguments mathématiques. De nombreuses techniques revendiquent leur appartenance à cette catégorie, comme le model checking, qui s’attache à considérer tous les comportements possibles d’un système an d’en vérier diérentes propriétés. 1.1 Model checking La technique du model checking a été inventée pour résoudre le problème dicile de la vérication de programmes concurrents. Avant 1982, les recherches sur ce sujet intégraient systématiquement l’emploi de la preuve manuelle [131]. Pnueli [137], Owicki et Lamport [132] proposèrent, vers la n des années 70, l’usage de la logique temporelle pour spécier des propriétés parmi lesquelles : — la sûreté : un mauvais comportement ne se produit jamais, ou — la vivacité : un comportement attendu nira par arriver. Le terme model dans l’expression « model checking » peut faire référence à deux choses. Le premier sens du terme renvoie à l’idée qu’on va représenter un programme par un modèle abstrait. Appelons ce modèle M. Le deuxième sens du terme fait référence au fait qu’on va ensuite essayer de vérier des propriétés de M, autrement dit que M est un modèle de ces propriétés. Le model checking implique donc de dénir « simplement » ce qu’est (1) un modèle, et (2) une propriété d’un modèle. 1. Un modèle doit répondre aux trois questions suivantes : a) À tout instant, qu’est-ce qui caractérise l’état d’un programme ? b) Quel est l’état initial du système ? c) Comment passe-t-on d’un état à un autre ? 101.1 Model checking 2. Les propriétés qu’on cherche à vérier sont diverses : Est-ce qu’un mauvais état peut être atteint (à partir de l’état inital) ? Est-ce qu’un certain état sera atteint quoiqu’il advienne ? Plus généralement, on souhaite vérier des propriétés qui dépendent de la manière dont le programme (ou le modèle) évolue. On exprime alors ces formules dans des logiques temporelles. 1.1.1 Systèmes finis Les travaux fondateurs de Clarke et Emerson [38] et de Queille et Sifakis [140] intègrent cette notion de logique temporelle à l’exploration de l’ensemble des états du système (espace d’état). Dans leur article de 1981 [38], Clarke et Emerson montrent comment synthétiser des programmes à partir de spécications dans une logique temporelle (appelée CTL, pour Computational Tree Logic). Mais c’est surtout la seconde partie de l’article qui a retenu l’attention de la communauté. Dans celle-ci, ils construisent une procédure de décision fondée sur des calculs de points-xes pour vérier des propriétés temporelles de programmes nis. L’avantage principal du model checking par rapport aux autres techniques de preuve est double. C’est d’abord un processus automatique et rapide, de sorte qu’il est souvent considéré comme une technique « presse-bouton ». C’est aussi une méthode qui fonctionne déjà avec des spécications partielles. 
De cette façon, l’eort de vérication peut être commencé tôt dans le processus de conception d’un système complexe. Un de ses désavantages, qui est aussi inhérent à toutes les autres techniques de vérication, est que la rédaction des spécications est une tâche dicile qui requiert de l’expertise. Mais l’inconvénient majeur du model checking est apparu dans les années 80, et est connu sous le nom du phénomène d’explosion combinatoire de l’espace d’états. Seuls les systèmes avec un nombre petit d’états (de l’ordre du million) pouvaient être analysés à cette époque, alors que les systèmes réels en possèdent beaucoup plus. Par conséquent, un important corpus de recherche sur le model checking aborde le problème du passage à l’échelle. Une avancée notoire pour le model checking a été l’introduction de techniques symboliques pour résoudre ce problème d’explosion. Alors que la plupart des approches avant 1990 utilisaient des représentations explicites, où chaque état individuel est stocké en mémoire, Burch et al. ont montré qu’il était possible de représenter de manière symbolique et compacte des ensembles d’états [29]. McMillan reporte dans sa thèse [116] une représentation de la relation des états de transition avec BDD [27] (diagrammes de décision binaire) puis donne un nombre d’algorithmes basés sur des graphes dans le langage du µ-calcul. L’utilisation de structures de données compactes pour représenter de larges ensembles d’états permet de saisir certaines des régularités qui apparaissent naturellement dans les circuits ou autres systèmes. La complexité en espace du model checking symbolique a été grandement diminuée et, pour la première fois, des systèmes avec 1020 états ont été vériés. Cette limite a encore été repoussée avec divers ranements du model checking 11Chapitre 1 Introduction symbolique. Le point faible des techniques symboliques utilisant des BDD est que la quantité d’espace mémoire requise pour stocker ces structures peut augmenter de façon exponentielle. Biere et al. décrivent une technique appelée le model checking borné (ou BMC pour Bounded Model Checking) qui sacrie la correction au prot d’une recherche d’anomalies ecace [19]. L’idée du BMC est de chercher les contre-exemples parmi les exécutions du programme dont la taille est limitée à un certain nombre d’étapes k. Ce problème peut être encodé ecacement en une formule propositionnelle dont la statisabilité mène directement à un contre-exemple. Si aucune erreur n’est trouvée pour des traces de longueur inférieure à k, alors la valeur de la borne k est augmentée jusqu’à ce qu’une erreur soit trouvée, ou que le problème devienne trop dicile à résoudre 1 . La force de cette technique vient principalement de la mise à prot des progrès fait par les solveurs SAT (pour la satisabilité booléenne) modernes. Elle a rencontré un succès majeur dans la vérication d’implémentations de circuits matériels et fait aujourd’hui partie de l’attirail standard des concepteurs de tels circuits. Comme l’encodage vers des contraintes propositionnelles capture la sé- mantique des circuits de manière précise, ces model checkers ont permis de découvrir des erreurs subtiles d’implémentation, dues par exemple à la présence de débordements arithmétiques. Une extension de cette technique pour la preuve de propriétés, plutôt que la seule recherche de bogues, consiste à ajouter une étape d’induction. 
Cette extension de BMC s’appelle la k-induction et dière d’un schéma d’induction classique par le point suivant : lorsqu’on demande à vérier que la propriété d’induction est préservée, on suppose qu’elle est vraie dans les k étapes précédentes plutôt que seulement dans l’étape précédente [52, 151]. Une autre vision du model checking est centrée sur la théorie des automates. Dans ces approches, spécications et implémentations sont toutes deux construites avec des automates. Vardi et Wolper notent une correspondance entre la logique temporelle et les automates de Büchi [164] : chaque formule de logique temporelle peut être considérée comme un automate à états ni (sur des mots innis) qui accepte précisément les séquences satisfaites par la formule. Grâce à cette connexion, ils réduisent le problème du model checking à un test de vacuité de l’automate AM ∩A¬φ (où AM est l’automate du programme M et A¬φ est l’automate acceptant les séquences qui violent la propriété φ) [161]. 1.1.2 Systèmes infinis Le problème d’explosion combinatoire est encore plus frappant pour des systèmes avec un nombre inni d’états. C’est le cas par exemple lorsque les types des variables du 1. Dans certains cas un majorant sur la borne k est connu (le seuil de complétion) qui permet d’armer que le système vérie la propriété donnée. 121.1 Model checking programme sont innis (e.g. entiers mathématiques) ou les structures de données sont innies (e.g. buers, les d’attente, mémoires non bornés). Pour traiter de tels systèmes, deux possibilités s’orent alors : manipuler des représentations d’ensembles d’états innis directement, ou construire une abstraction nie du système. Dans la première approche, des représentations symboliques adaptées reposent par exemple sur l’utilisation des formules logiques du premier ordre. Pour cela, les techniques utilisant précédemment les solveurs SAT (traitant exclusivement de domaines nis encodés par de la logique propositionnelle) ont été peu à peu migrées vers les solveurs SMT (Satisabilité Modulo Théories). Ces derniers possèdent un moteur propositionnel (un solveur SAT) couplé à des méthodes de combinaison de théories. La puissance de ces solveurs vient du grand nombre de théories qu’ils supportent en interne, comme la théorie de l’arithmétique linéaire (sur entiers mathématiques), la théorie de l’égalité et des fonctions non interprétées, la théorie des tableaux, la théorie des vecteurs de bits, la théorie des types énumérés, etc. Certains solveurs SMT supportent même les quanticateurs. Par exemple, une application récente de la technique de k-induction aux programmes Lustre (un langage synchrone utilisé notamment dans l’aéronautique) utilisant des solveurs SMT est disponible dans le model checker Kind [85, 86]. La seconde approche pour les systèmes innis – qui peut parfois être utilisée en complément de la technique précédente – consiste à sacrier la précision de l’analyse en simpliant le problème an de se ramener à l’analyse d’un système ni. La plupart de ces idées émergent d’une certaine manière du cadre de l’interprétation abstraite dans lequel un programme est interprété sur un domaine abstrait [46,47]. Une forme d’abstraction particulière est celle de l’abstraction par prédicats. Dans cette approche, la fonction d’abstraction associe chaque état du système à un ensemble prédéni de prédicats. L’utilisateur fournit des prédicats booléens en nombre ni pour décrire les propriétés possibles d’un système d’états innis. 
Ensuite, une analyse d'accessibilité est effectuée sur le modèle d'états finis pour fournir l'invariant le plus fort possible exprimable à l'aide de ces prédicats [81]. Généralement, les abstractions accélèrent la procédure de model checking lorsqu'elles sont les plus générales possibles. Parfois trop grossières, ces abstractions peuvent empêcher l'analyse d'un système pourtant sûr en exposant des contre-exemples qui n'en sont pas dans le système réel. Il devient alors intéressant de raffiner le domaine abstrait utilisé précédemment de manière automatique afin d'éliminer ces mauvais contre-exemples. Cette idée a donné naissance à la stratégie itérative appelée CEGAR (pour Counter-Example Guided Abstraction Refinement, i.e. raffinement d'abstraction guidé par contre-exemples) [13, 35, 147]. Elle est utilisée par de nombreux outils et plusieurs améliorations ont été proposées au fil des années. On peut mentionner par exemple les travaux de Jhala et al. [89, 94] sur l'abstraction paresseuse, implémentés dans le model checker Blast [18], ainsi que ceux de McMillan [119] qui combine cette technique avec un raffinement par interpolation de Craig [48], permettant ainsi de capturer les relations entre variables utiles à la preuve de la propriété souhaitée.

Un système peut aussi être infini, non pas parce que ses variables sont dans des domaines infinis, mais parce qu'il est formé d'un nombre non borné de composants. Par exemple, un protocole de communication peut être conçu pour fonctionner quel que soit le nombre de machines qui y participent. Ces systèmes sont dits paramétrés, et c'est le problème de leur vérification qui nous intéresse plus particulièrement dans cette thèse.

1.2 Model checking de systèmes paramétrés

Un grand nombre de systèmes réels concurrents, comme le matériel informatique ou les circuits électroniques, ont en fait un nombre fini d'états possibles (déterminé par la taille des registres, le nombre de bascules, etc.). Toutefois, ces circuits et protocoles (e.g. les protocoles de bus ou les protocoles de cohérence de cache) sont souvent conçus de façon paramétrée, définissant ainsi une infinité de systèmes, un pour chaque nombre de composants. Souvent, le nombre de composants de tels systèmes n'est pas connu à l'avance car ils sont conçus pour fonctionner quel que soit le nombre de machines du réseau, quel que soit le nombre de processus ou encore quelle que soit la taille des buffers (mémoires tampons). En vérifier les propriétés d'une manière paramétrée est donc indispensable pour s'assurer de leur qualité. Dans d'autres cas, ce nombre de composants est connu mais il est tellement grand (plusieurs milliers) que les techniques traditionnelles sont tout bonnement incapables de raisonner avec de telles quantités. Il est alors préférable dans ces circonstances de vérifier une version paramétrée du problème.

Apt et Kozen montrent en 1986 que, de manière générale, savoir si un programme P paramétré par n, noté P(n), satisfait une propriété φ(n) est un problème indécidable [10]. Pour mettre en lumière ce fait, ils ont simplement créé un programme qui simule n étapes d'une machine de Turing et change la valeur d'une variable booléenne à la fin de l'exécution si la machine simulée ne s'est pas encore arrêtée 2. Ce travail expose clairement les limites intrinsèques des systèmes paramétrés. Même si le résultat est négatif, la vérification automatique reste possible dans certains cas.

2. Ce programme est bien paramétré par n. Bien qu'il ne fasse pas intervenir de concurrence, le résultat qui suit peut être aussi bien obtenu en mettant n processus identiques P(n) en parallèle.
Face à un problème indécidable, il est coutume de restreindre son champ d'application en imposant certaines conditions jusqu'à tomber dans un fragment décidable. Une alternative consiste à traiter le problème dans sa globalité, mais avec des méthodes non complètes.

1.2.1 Méthodes incomplètes

Le premier groupe à aborder le problème de la vérification paramétrée fut Clarke et Grumberg [39] avec une méthode fondée sur un résultat de correspondance entre des systèmes de différentes tailles. Notamment, ils ont pu vérifier avec cette technique un algorithme d'exclusion mutuelle en mettant en évidence une bisimulation entre ce système de taille n et un système de taille 2. Toujours dans l'esprit de se ramener à une abstraction finie, de nombreuses techniques suivant ce modèle ont été développées pour traiter les systèmes paramétrés. Par exemple, Kurshan et McMillan utilisent un unique processus Q qui agrège les comportements de n processus P concurrents [105]. En montrant que Q est invariant par composition parallèle asynchrone avec P, on déduit que Q représente bien une abstraction du système paramétré.

Bien souvent, l'enjeu des techniques utilisées pour vérifier des systèmes paramétrés est de trouver une représentation à même de caractériser des familles infinies d'états. Par exemple, si la topologie du système (i.e. l'organisation des processus) n'a pas d'importance, il est parfois suffisant de « compter » le nombre de processus se trouvant dans un état particulier. C'est la méthode employée par Emerson et al. dans l'approche dite d'abstraction par compteurs [64] (une esquisse en est donnée à la fin de cette section). Si l'ordre des processus importe (e.g. un processus est situé « à gauche » d'un autre), une représentation adaptée consiste à associer à chaque état un mot d'un langage régulier. Les ensembles d'états sont alors représentés par les expressions régulières de ce langage et certaines relations de transition peuvent être exprimées par des transducteurs finis [1, 99]. Une généralisation de cette idée a donné naissance au cadre du model checking régulier [24] qui propose des techniques d'accélération pour calculer les clôtures transitives [96, 128]. Des extensions de ces approches ont aussi été adaptées à l'analyse de systèmes de topologies plus complexes [22].

Plutôt que de chercher à construire une abstraction finie du système paramétré dans son intégralité, l'approche dite de cutoff (coupure) cherche à découvrir une borne supérieure dépendant à la fois du système et de la propriété à vérifier. Cette borne k, appelée la valeur de cutoff, est telle que si la propriété est vraie pour les systèmes plus petits que k, alors elle est aussi vraie pour les systèmes de taille supérieure à k [62]. Parfois cette valeur peut être calculée statiquement à partir des caractéristiques du système. C'est l'approche employée dans la technique des invariants invisibles de Pnueli et al., alliée à une génération d'invariants inductifs [12, 138]. L'avantage de la technique de cutoff est qu'elle s'applique à plusieurs formalismes et qu'elle permet simplement d'évacuer le côté paramétré d'un problème. En revanche, le calcul de cette borne k donne souvent des valeurs rédhibitoires pour les systèmes de taille réelle. Pour compenser ce désavantage, certaines techniques ont été mises au point pour découvrir les bornes de cutoff de manière dynamique [4, 98].
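Esquisse de l'abstraction par compteurs évoquée plus haut (exemple ajouté à titre indicatif, sur l'algorithme d'exclusion mutuelle présenté au chapitre 2) : un état abstrait est un triplet (nIdle, nWant, nCrit) comptant les processus dans chaque état local ; la demande d'accès devient (nIdle, nWant, nCrit) → (nIdle − 1, nWant + 1, nCrit) si nIdle ≥ 1, l'entrée en section critique (nIdle, nWant, nCrit) → (nIdle, nWant − 1, nCrit + 1) si nWant ≥ 1, et les états dangereux sont ceux où nCrit ≥ 2. L'identité des processus est perdue et, en bornant les valeurs des compteurs (par exemple à 2, valeur au-delà de laquelle on ne distingue plus « 2 » de « plus de 2 »), on obtient un système abstrait fini indépendant du nombre n de processus.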
1.2.2 Fragments décidables

Il est généralement difficile d'identifier un fragment qui soit à la fois décidable et utile en pratique. Certaines familles de problèmes admettent cependant des propriétés qui en font des cadres théoriques intéressants. Partant du constat que les systèmes synchrones sont souvent plus simples à analyser, Emerson et Namjoshi montrent que certaines propriétés temporelles sont en fait décidables pour ces systèmes [63]. Leur modèle fonctionne pour les systèmes composés d'un processus de contrôle et d'un nombre arbitraire de processus homogènes. Il ne s'applique donc pas, par exemple, aux algorithmes d'exclusion mutuelle.

L'écart de méthodologie qui existe entre les méthodes non complètes et celles qui identifient un fragment décidable du model checking paramétré n'est en réalité pas si grand. Dans bien des cas, les chercheurs qui mettent au point les premières mettent aussi en évidence des restrictions suffisantes sur les systèmes considérés pour rendre l'approche complète et terminante. C'est par exemple le cas des techniques d'accélération du model checking régulier ou encore celui du model checking modulo théories proposé par Ghilardi et Ranise [77, 79]. À la différence des techniques pour systèmes paramétrés mentionnées précédemment, cette dernière approche ne construit pas d'abstraction finie mais repose sur l'utilisation de quantificateurs pour représenter les ensembles infinis d'états de manière symbolique. En effet, lorsqu'on parle de systèmes paramétrés, on exprime naturellement les propriétés en quantifiant sur l'ensemble des éléments du domaine paramétré. Par exemple, dans un système où le nombre de processus n'est pas connu à l'avance, on peut avoir envie de garantir que, quel que soit le processus, sa variable locale x n'est jamais nulle. Le cadre du model checking modulo théories définit une classe de systèmes paramétrés, appelée systèmes à tableaux, permettant de maîtriser l'introduction des quantificateurs. La sûreté de tels systèmes n'est pas décidable en général mais il existe des restrictions sur le problème d'entrée qui permettent de construire un algorithme complet et terminant. Un autre avantage de cette approche est de tirer parti de la puissance des solveurs SMT et de leur grande versatilité.

Toutes ces techniques attaquent un problème intéressant et important. Celles qui sont automatiques (model checking) ne passent pourtant pas à l'échelle sur des problèmes réalistes. Et celles qui sont applicables en pratique demandent quant à elles une expertise humaine considérable, ce qui rend le processus de vérification très long [33, 49, 118, 134, 135]. Le problème principal auquel répond cette thèse est le suivant. Comment vérifier automatiquement des propriétés de sûreté de systèmes paramétrés complexes ? Pour répondre à cette question, on utilise le cadre théorique du model checking modulo théories pour concevoir de nouveaux algorithmes qui infèrent des invariants de manière automatique. Nos techniques s'appliquent avec succès à l'analyse de sûreté paramétrée de protocoles de cohérence de cache conséquents, entre autres.

1.3 Contributions

Nos contributions sont les suivantes :
— Un model checker open source pour systèmes paramétrés : Cubicle. Cubicle implémente les techniques présentées dans cette thèse et est librement disponible à l'adresse suivante : http://cubicle.lri.fr.
— Un ensemble de techniques pour l'implémentation d'un model checker reposant sur un solveur SMT.
— Un nouvel algorithme pour inférer des invariants de qualité de manière automatique : BRAB. L'idée de cet algorithme est d'utiliser les informations extraites d'un modèle fini du système afin d'inférer des invariants pour le cas paramétré. Le modèle fini est en fait mis à contribution en tant qu'oracle dont le rôle se limite à émettre un jugement de valeur sur les candidats invariants qui lui sont présentés. Cet algorithme fonctionne bien en pratique car, dans beaucoup de cas, les instances finies, même petites, exhibent déjà la plupart des comportements intéressants du système.
— Une implémentation de BRAB dans le model checker Cubicle.
— L'application de ces techniques à la vérification du protocole de cohérence de cache de l'architecture multi-processeurs FLASH. À l'aide des invariants découverts par BRAB, la sûreté de ce protocole a pu être vérifiée entièrement automatiquement pour la première fois.
— Deux approches pour la certification de Cubicle à l'aide de la plate-forme Why3. La première est une approche qui fonctionne par certificats (ou traces) et vérifie le résultat produit par le model checker. Ce certificat prend la forme d'un invariant inductif. La seconde approche est par vérification déductive du cœur de Cubicle. Ces deux approches mettent en avant une autre qualité de BRAB : faciliter ce processus de certification, par réduction de la taille des certificats pour l'une, et grâce à son efficacité pour l'autre.

1.4 Plan de la thèse

Ce document de thèse est organisé de la façon suivante :

Le chapitre 2 introduit le langage d'entrée du model checker Cubicle à travers différents exemples d'algorithmes et de protocoles concurrents. Une seconde partie de ce chapitre présente la représentation formelle des systèmes de transition utilisés par Cubicle ainsi que leur sémantique.

Le chapitre 3 présente le cadre théorique du model checking modulo théories conçu par Ghilardi et Ranise. Les résultats et théorèmes associés sont également donnés et illustrés dans ce chapitre, qui constitue le contexte dans lequel s'inscrivent nos travaux.

Le chapitre 4 donne un ensemble d'optimisations nécessaires à l'implémentation d'un model checker reposant sur un solveur SMT comme Cubicle.

Nos travaux autour de l'inférence d'invariants sont exposés dans le chapitre 5. On y présente et illustre l'algorithme BRAB. Les détails pratiques de son fonctionnement sont également expliqués et son intérêt est appuyé par une évaluation expérimentale sur des problèmes difficiles pour la vérification paramétrée. En particulier, on y détaille le résultat de la preuve du protocole de cohérence de cache FLASH.

Le chapitre 6 présente deux techniques de certification que nous avons mises en œuvre pour certifier le model checker Cubicle. Ces deux techniques reposent sur la plate-forme de vérification déductive Why3.

Enfin, le chapitre 7 donne des pistes d'amélioration et d'extension de nos travaux et conclut ce document.

2 Le model checker Cubicle

Sommaire
2.1 Langage d'entrée
2.2 Exemples
2.2.1 Algorithme d'exclusion mutuelle
2.2.2 Généralisation de l'algorithme de Dekker
2.2.3 Boulangerie
2.2.4 Cohérence de Cache
2.3 Non-atomicité
2.4 Logique multi-sortée et systèmes à tableaux
2.4.1 Syntaxe des formules logiques
2.4.2 Sémantique de la logique
2.4.3 Systèmes de transition à tableaux
2.5 Sémantique
2.5.1 Sémantique opérationnelle
2.5.2 Atteignabilité
2.5.3 Un interpréteur de systèmes à tableaux

L'outil issu des travaux présentés dans ce document est un model checker pour systèmes paramétrés, dénommé Cubicle. Le but de ce chapitre est de familiariser le lecteur avec l'outil, tout particulièrement son langage d'entrée et sa syntaxe concrète. Quelques exemples variés de tels systèmes sont formulés dans ce langage afin d'offrir un premier aperçu des possibilités du model checker. On donne également une description plus formelle de la sémantique des systèmes décrits dans le langage de Cubicle.

2.1 Langage d'entrée

Cubicle est un model checker pour systèmes paramétrés. C'est-à-dire qu'il permet de vérifier statiquement des propriétés de sûreté d'un programme concurrent pour un nombre quelconque de processus 1. Pour représenter ces programmes, on utilise des systèmes de transition car ils permettent facilement de modéliser des comportements asynchrones ou non déterministes. Ces systèmes décrivent les transitions possibles d'un état du programme à un autre. Ils peuvent être considérés comme un langage bas niveau pour la vérification. Étant donné que les systèmes manipulés par Cubicle sont paramétrés, les transitions qui les composent le sont également. Dans cette section, on présente de manière très informelle le langage d'entrée de Cubicle 2.

La description d'un système commence par des déclarations de types, de variables globales et de tableaux. Cubicle connaît quatre types en interne : le type des entiers (int), le type des réels (real), le type des booléens (bool) et le type des identificateurs de processus (proc). Ce dernier est particulièrement important car c'est par les éléments de ce type que le système est paramétré. Le paramètre du système est la cardinalité du type proc. L'utilisateur a aussi la liberté de déclarer ses propres types abstraits ou ses propres types énumérés. L'exemple suivant définit un type énuméré state à trois constructeurs Idle, Want et Crit ainsi qu'un type abstrait data.

type state = Idle | Want | Crit
type data

Les tableaux et variables représentent l'état du système ou du programme. Tous les tableaux ont la particularité d'être uniquement indexés par des éléments du type proc, leur taille est par conséquent inconnue. C'est une limitation de l'implémentation actuelle du langage de Cubicle qui existe pour des raisons pratiques et rien n'empêcherait d'ajouter la possibilité de déclarer des tableaux indexés par des entiers. Dans ce qui suit, on déclare une variable globale Timer de type real et trois tableaux indexés par le type proc.

var Timer : real
array State[proc] : state
array Chan[proc] : data
array Flag[proc] : bool

Les tableaux peuvent par exemple être utilisés pour représenter des variables locales ou des canaux de communication.
1. Un programme concurrent est souvent paramétré par son nombre de processus ou threads, mais ce paramètre peut aussi être la taille de certains buffers ou le nombre de canaux de communication, par exemple.
2. Une description plus précise de la syntaxe et de ce langage est faite en annexe A de ce document.

Les états initiaux du système sont définis à l'aide du mot clef init. Cette déclaration, qui vient en début de fichier, précise quelles sont les valeurs initiales des variables et tableaux pour tous leurs indices. Notons que certaines variables peuvent ne pas être initialisées et qu'on a le droit de mentionner seulement les relations entre elles. Dans ce cas, tout état dont les valeurs des variables et tableaux respectent les contraintes fixées dans la ligne init sera un état initial correct du système. Par exemple, la ligne suivante définit les états initiaux comme ceux ayant leurs tableaux Flag à false et State à Idle pour tout processus z, ainsi que leur variable globale Timer valant 0.0. Le paramètre z de init représente tous les processus du système.

init (z) { Flag[z] = False && State[z] = Idle && Timer = 0.0 }

Remarque. On ne précise pas le type des paramètres car seuls ceux du type proc sont autorisés.

Le reste du système est donné comme un ensemble de transitions de la forme garde/action. Chaque transition peut être paramétrée (ou non) par un ou plusieurs identificateurs de processus comme dans l'exemple suivant.

transition t (i j)
  requires { i < j && State[i] = Idle && Flag[i] = False &&
             forall_other k. (Flag[k] = Flag[j] || State[k] <> Want) }
{
  Timer := Timer + 1.0;
  Flag[i] := True;
  State[k] := case
              | k = i : Want
              | State[k] = Crit && k < i : Idle
              | _ : State[k];
}

Dans cet exemple, la transition t est déclenchable s'il existe deux processus distincts d'identificateurs i et j tels que i est inférieur à j. Les tableaux State et Flag doivent contenir respectivement la valeur Idle et la valeur false à l'indice i. En plus de cette garde locale (aux processus i et j), on peut mentionner l'état des autres processus du système. La partie globale de la garde est précédée du mot clef forall_other. Ici, on dit que tous les autres processus (sous-entendu différents de i et j) doivent soit avoir la même valeur de Flag que j, soit contenir une valeur différente de Want dans le tableau State.

Remarque. Dans Cubicle, les processus sont différenciés par leurs identificateurs. Ici, les paramètres i et j de la transition sont implicitement quantifiés existentiellement et doivent être des identificateurs de processus deux à deux distincts. L'ensemble des identificateurs de processus est seulement muni d'un ordre total. La présence de cet ordre induit une topologie linéaire sur les processus. La comparaison entre identificateurs est donc autorisée et on peut par exemple écrire i < j dans une garde.

La garde de la transition est donnée par le mot clef requires. Si elle est satisfaite, les actions de la transition sont exécutées. Chaque action est une mise à jour d'une variable globale ou d'un tableau. La sémantique des transitions veut que l'ensemble des mises à jour soit réalisé de manière atomique et chaque variable qui apparaît à droite d'un signe := dénote la valeur de cette variable avant la transition. L'ordre des affectations n'a donc pas d'importance. La première action Timer := Timer + 1.0 incrémente la variable globale de 1 lorsque la transition est prise. Les mises à jour de tableaux peuvent être codées comme de simples affectations si une seule case est modifiée.
L'action Flag[i] := True modifie le tableau Flag à l'indice i en y mettant la valeur true. Le reste du tableau n'est pas modifié. Si la transition modifie plusieurs cases d'un même tableau, on utilise une construction case qui précise les nouvelles valeurs contenues dans le tableau à l'aide d'un filtrage. State[k] := case ... mentionne ici les valeurs du tableau State pour tous les indices k (k doit être une variable fraîche) selon les conditions suivantes. Pour chaque indice k, le premier cas possible du filtrage est exécuté. Le cas | k = i : Want nous demande de vérifier tout d'abord si k vaut i, c'est-à-dire de mettre la valeur Want à l'indice i du tableau State. Le deuxième cas | State[k] = Crit && k < i : Idle est plus compliqué : si le premier cas n'est pas vérifié (i.e. k est différent de i), que State contenait Crit à l'indice k, et que k est inférieur à i, alors la nouvelle valeur de State[k] est Idle. Plus simplement, cette ligne change les valeurs Crit du tableau State se trouvant à gauche de i (k < i) en Idle. Enfin, tous les filtrages doivent se terminer par un cas par défaut noté _. Dans notre exemple, le cas par défaut du filtrage | _ : State[k] dit que toutes les autres valeurs du tableau restent inchangées (on remet l'ancienne valeur de State[k]).

La relation de transition, décrite par l'ensemble des transitions, définit l'exécution du système comme une boucle infinie qui à chaque itération :
1. choisit de manière non déterministe une instance de transition dont la garde est vraie dans l'état courant du système ;
2. met à jour les variables et tableaux d'état conformément aux actions de la transition choisie.

Les propriétés de sûreté à vérifier sont exprimées sous forme négative, c'est-à-dire qu'on caractérise les états dangereux du système. On les exprime dans Cubicle à l'aide du mot clef unsafe, éventuellement suivi d'un ensemble de variables de processus distinctes. La formule dangereuse suivante exprime que les mauvais états du système sont ceux où il existe au moins deux processus distincts x et y tels que le tableau State contienne la valeur Crit à ces deux indices.

unsafe (x y) { State[x] = Crit && State[y] = Crit }

On dira qu'un système est sûr si aucun des états dangereux ne peut être atteint à partir d'un des états initiaux.

Remarque. Bien que Cubicle soit conçu pour vérifier des systèmes paramétrés, il est tout de même possible de l'utiliser pour vérifier des systèmes dont le nombre de processus est fixé à l'avance. Pour cela, il suffit d'inclure la ligne “number_procs n” dans le fichier, où n est le nombre de processus. Dans ce cas, on pourra mentionner explicitement les processus 1 à n en utilisant les constantes #1 à #n.

Remarque. Le langage d'entrée de Cubicle est une version moins riche mais paramétrée du langage de Murφ [57] et similaire à Uclid [28]. Bien que limité pour l'instant, il est assez expressif pour permettre de décrire aisément des systèmes paramétrés conséquents (75 transitions, 40 variables et tableaux pour le protocole FLASH [106] par exemple).

2.2 Exemples

Dans cette section, on montre comment utiliser Cubicle et son expressivité pour modéliser différents algorithmes et protocoles de la littérature. Ces exemples sont donnés à titre didactique et sont choisis de manière à illustrer les caractéristiques fondamentales du langage. Le but de cette section est de donner au lecteur une bonne intuition des possibilités offertes par l'outil de manière à pouvoir modéliser et expérimenter le model checker sur ses propres exemples.
2.2.1 Algorithme d'exclusion mutuelle

L'exclusion mutuelle est un problème récurrent de la programmation concurrente. L'algorithme décrit ci-dessous résout ce problème, i.e. il permet à plusieurs processus de partager une ressource commune à accès unique sans conflit, en communiquant seulement au travers de variables partagées. C'est une version simplifiée de l'algorithme de Dekker (présenté en 2.2.2) qui fonctionne pour un nombre arbitraire de processus identiques. Sur la figure 2.1, on matérialise n processus concurrents qui exécutent tous le même protocole. Chaque processus est représenté par le graphe de transition entre ses états : Idle, Want, et Crit, l'état initial étant Idle.

[Figure 2.1 – Algorithme d'exclusion mutuelle : n processus identiques, chacun passant de Idle à Want, puis à Crit lorsque Turn contient son identifiant, et réinitialisant Turn (Turn := ?) en quittant Crit.]

Un processus peut demander l'accès à la section critique à tout moment en passant dans l'état Want. La synchronisation est effectuée au travers de la seule variable partagée Turn. La priorité est donnée au processus dont l'identifiant i est contenu dans la variable Turn, qui peut dans ce cas passer en section critique. Un processus en section critique peut en sortir sans contrainte, si ce n'est celle de « donner la main » à un de ses voisins en changeant la valeur de Turn. Cette dernière est modifiée de façon non déterministe (Turn := ? dans le schéma), donc rien n'interdit à un processus de se redonner la main lui-même. La propriété qui nous intéresse ici est de vérifier la sûreté du système, c'est-à-dire qu'à tout moment, au plus un processus est en section critique.

Le code Cubicle correspondant à ce problème est donné ci-dessous en figure 2.2. Pour modéliser l'algorithme, on a choisi de représenter l'état des processus par un tableau State contenant les valeurs Idle, Want, ou Crit. On peut aussi voir State[i] comme la valeur d'une variable locale au processus d'identificateur i. La variable partagée Turn est simplement une variable globale au système. On définit les états initiaux comme ceux où tous les processus sont dans l'état Idle, quelle que soit la valeur initiale de la variable Turn. Prenons l'exemple de la transition req. Elle correspond au moment où un processus demande l'accès à la section critique, passant de l'état Idle à Want. Elle se lit : la transition req est possible s'il existe un processus d'identifiant i tel que State[i] a pour valeur Idle. Dans ce cas, la case State[i] prend pour valeur Want. La formule unsafe décrit les mauvais états du système comme ceux dans lesquels il existe (au moins) deux processus distincts en section critique (i.e. pour lesquels la valeur de State est Crit). Autrement dit, on veut s'assurer que la formule suivante est un invariant du système :

∀i,j. (i ≠ j ∧ State[i] = Crit) =⇒ State[j] ≠ Crit

mutex.cub

type state = Idle | Want | Crit
array State[proc] : state
var Turn : proc

init (z) { State[z] = Idle }

unsafe (z1 z2) { State[z1] = Crit && State[z2] = Crit }

transition req (i)
  requires { State[i] = Idle }
  { State[i] := Want }

transition enter (i)
  requires { State[i] = Want && Turn = i }
  { State[i] := Crit; }

transition exit (i)
  requires { State[i] = Crit }
  { Turn := ? ; State[i] := Idle; }

Figure 2.2 – Code Cubicle du mutex

Pour vérifier que le système décrit dans le fichier mutex.cub précédent est sûr, la manière la plus simple d'invoquer Cubicle est de lui passer seulement le nom du fichier en argument :

cubicle mutex.cub

On peut voir ci-après la trace émise par le model checker sur la sortie standard. Lorsque ce dernier affiche sur la dernière ligne “The system is SAFE”, cela signifie qu'il a été en mesure de vérifier que l'état unsafe n'est jamais atteignable dans le système.

node 1: unsafe[1]                          5 (2+3) remaining
node 2: enter(#2) -> unsafe[1]             7 (2+5) remaining
node 3: req(#2) -> enter(#2) -> unsafe[1]  8 (1+7) remaining
The system is SAFE

2.2.2 Généralisation de l'algorithme de Dekker

L'algorithme de Dekker est en réalité la première solution au problème d'exclusion mutuelle. Cette solution est attribuée au mathématicien Th. J. Dekker par Edsger W. Dijkstra, qui en donnera une version fonctionnant pour un nombre arbitraire de processus [54]. En 1985, Alain J. Martin présente une version simple de l'algorithme généralisé à n processus [115]. C'est cette version qui est donnée ici sous forme d'algorithme (Algorithme 1) et sous forme de graphe de flot de contrôle (Figure 2.3). La variable booléenne x(i) est utilisée par un processus pour signaler son intérêt à entrer en section critique. La principale différence avec l'algorithme original est l'utilisation d'une valeur spéciale pour réinitialiser t.

Algorithme 1 : Code de Dekker pour le processus i [115]
Variables :
  x(i) : variable booléenne, initialisée à false
  t : variable partagée, initialisée à 0
p(i) :
NCS    while true do
         x(i) := true;
WANT     while ∃j ≠ i. x(j) do
AWAIT      x(i) := false;
           await [ t = 0 ∨ t = i ];
TURN       t := i; x(i) := true;
CS       // Section critique;
         x(i) := false; t := 0;

[Figure 2.3 – Graphe de Dekker pour le processus i : graphe de flot de contrôle entre les points de programme NCS, WANT, AWAIT, TURN et CS.]

Dans Cubicle, les constantes de processus ne sont pas explicites, donc on ne peut pas matérialiser l'identifiant spécial 0 dans le type proc. Pour modéliser t, on a choisi d'utiliser deux variables globales : T représente t lorsque celle-ci est non nulle et la variable booléenne T_set vaut False lorsque t a été réinitialisée (à 0). Le tableau P est utilisé pour représenter le compteur de programme et peut prendre les valeurs des étiquettes de l'algorithme 1. On peut remarquer que dans notre modélisation, la condition de la boucle while est évaluée de manière atomique. Les transitions wait et enter testent en une seule étape l'existence (ou non) d'un processus j dont la variable x(j) est vraie. De la même manière que précédemment, on veut s'assurer qu'il y ait au plus un processus au point de programme correspondant à l'étiquette CS, i.e. en section critique. Cubicle est capable de prouver la sûreté de ce système.

À vue d'œil, la correction de l'algorithme n'est pas triviale lorsqu'un nombre arbitraire de processus s'exécutent de manière concurrente. Pour illustrer ce propos, admettons que le concepteur ait fait une erreur et qu'un processus i puisse omettre de passer sa variable x(i) à true lorsqu'il arrive à l'étiquette TURN.
dekker_n.cub

type location = NCS | WANT | AWAIT | TURN | CS
array P[proc] : location
array X[proc] : bool
var T : proc
var T_set : bool

init (i) { T_set = False && X[i] = False && P[i] = NCS }

unsafe (i j) { P[i] = CS && P[j] = CS }

transition start (i)
  requires { P[i] = NCS }
  { P[i] := WANT; X[i] := True; }

transition wait (i j)
  requires { P[i] = WANT && X[j] = True }
  { P[i] := AWAIT; X[i] := False; }

transition enter (i)
  requires { P[i] = WANT && forall_other j. X[j] = False }
  { P[i] := CS }

transition awaited_1 (i)
  requires { P[i] = AWAIT && T_set = False }
  { P[i] := TURN }

transition awaited_2 (i)
  requires { P[i] = AWAIT && T_set = True && T = i }
  { P[i] := TURN }

transition turn (i)
  requires { P[i] = TURN }
  { P[i] := WANT; X[i] := True; T := i; T_set := True; }

transition loop (i)
  requires { P[i] = CS }
  { P[i] := NCS; X[i] := False; T_set := False; }

Figure 2.4 – Code Cubicle de l'algorithme de Dekker

Il suffit alors de rajouter la transition suivante au système pour modéliser les nouveaux comportements introduits :

transition turn_buggy (i)
  requires { P[i] = TURN }
  { P[i] := WANT; T := i; T_set := True; }

Si on exécute Cubicle sur le fichier ainsi obtenu, il nous fait savoir que la propriété n'est plus vérifiée en exposant une trace d'erreur. On peut voir sur la trace suivante qu'un mauvais état est atteignable en dix étapes avec deux processus.

Error trace: start(#2) -> enter(#2) -> start(#1) -> wait(#1, #2) ->
             loop(#2) -> awaited_1(#1) -> turn_buggy(#1) -> enter(#1) ->
             start(#2) -> enter(#2) -> unsafe[1]

UNSAFE !

Des constantes sont introduites pour chaque processus entrant en jeu dans la trace et sont notées #1, #2, #3, . . . La notation “. . . wait(#1, #2) -> . . .” signifie que, pour reproduire la trace, il faut prendre la transition wait instanciée avec les processus #1 et #2.

Remarque. Un état dangereux de ce système est en réalité atteignable en seulement huit étapes, mais il faut pour cela qu'au moins trois processus soient impliqués. On peut s'en rendre compte en forçant Cubicle à effectuer une exploration purement en largeur grâce à l'option -postpone 0. De cette façon, on est assuré d'obtenir la trace d'erreur la plus courte possible :

Error trace: start(#1) -> start(#3) -> wait(#1, #3) -> awaited_1(#1) ->
             turn_buggy(#1) -> enter(#1) -> start(#2) -> enter(#2) -> unsafe[1]

2.2.3 Boulangerie

L'algorithme de la boulangerie a été proposé par Leslie Lamport en réponse au problème d'exclusion mutuelle [110]. Contrairement aux approches précédentes, la particularité de cet algorithme est qu'il est résistant aux pannes et qu'il fonctionne même lorsque les opérations de lecture et d'écriture ne sont pas atomiques. Son pseudo-code est donné dans l'algorithme 2 (ci-dessous). On peut faire le parallèle entre le principe de base de l'algorithme et celui d'une boulangerie aux heures de pointe, d'où son nom. Dans cette boulangerie, chaque client (matérialisant un processus) choisit un numéro en entrant dans la boutique. Le client souhaitant acheter du pain qui possède le numéro le plus faible s'avance au comptoir pour être servi. La propriété d'exclusion mutuelle de cette boulangerie se manifeste par le fait que son fonctionnement garantit qu'un seul client est servi à la fois. Il est possible que deux clients choisissent le même numéro ; celui dont le nom (unique) est avant l'autre a alors la priorité. La modélisation faite dans Cubicle est reprise de la version modélisée dans PFS par Abdulla et al. [3] et est donnée ci-dessous.
C'est une version simplifiée de l'algorithme original de Lamport dans laquelle le calcul du maximum à l'étiquette Choose et l'ensemble des tests de la boucle for à l'étiquette Wait sont atomiques. Les lectures et écritures sont aussi considérées comme étant instantanées. Cet algorithme est tolérant aux pannes, c'est-à-dire qu'il continue de fonctionner même si un processus s'arrête. Un processus i a le droit de tomber en panne et de redémarrer en section non critique tout en réinitialisant ses variables locales. Ce comportement peut être modélisé dans Cubicle en rajoutant un point de programme spécial Crash représentant le fait qu'un processus est en panne. On ajoute une transition sans garde qui dit qu'un processus peut tomber en panne à tout moment, ainsi qu'une transition lui permettant de redémarrer (voir ci-dessous).

Algorithme 2 : Pseudo-code de la boulangerie de Lamport pour le processus i [110]
Variables :
  choosing[i] : variable booléenne, initialisée à false
  number[i] : variable entière, initialisée à 0
p(i) :
NCS     begin
          choosing[i] := true;
Choose    number[i] := 1 + max(number[1], . . . , number[N]);
          choosing[i] := false;
          for j = 1 to N do
Wait        await [ ¬ choosing[j] ];
            await [ number[j] = 0 ∨ (number[i], i) < (number[j], j) ];
CS        // Section critique;
          number[i] := 0;
          goto NCS;

bakery_lamport.cub

type location = NCS | Choose | Wait | CS
array PC[proc] : location
array Ticket[proc] : int
array Num[proc] : int
var Max : int

init (x) { PC[x] = NCS && Num[x] = 0 && Max = 1 && Ticket[x] = 0 }

invariant () { Max < 0 }

unsafe (x y) { PC[x] = CS && PC[y] = CS }

transition next_ticket ()
  { Ticket[j] := case | _ : Max;
    Max := Max + 1; }

transition take_ticket (x)
  requires { PC[x] = NCS && forall_other j. Num[j] < Max }
  { PC[x] := Choose; Ticket[x] := Max; }

transition wait (x)
  requires { PC[x] = Choose }
  { PC[x] := Wait; Num[x] := Ticket[x]; }

transition turn (x)
  requires { PC[x] = Wait &&
             forall_other j. (PC[j] <> Choose && Num[j] = 0 ||
                              PC[j] <> Choose && Num[x] < Num[j] ||
                              PC[j] <> Choose && Num[x] = Num[j] && x < j) }
  { PC[x] := CS; }

transition exit (x)
  requires { PC[x] = CS }
  { PC[x] := NCS; Num[x] := 0; }

Figure 2.5 – Code Cubicle de l'algorithme de la boulangerie de Lamport

type location = NCS | Choose | Wait | CS | Crash
...
transition fail (x)
  { PC[x] := Crash }

transition recover (x)
  requires { PC[x] = Crash }
  { PC[x] := NCS; Num[x] := 0; }

2.2.4 Cohérence de Cache

Dans une architecture multiprocesseurs à mémoire partagée, l'utilisation de caches est requise pour réduire les effets de la latence des accès mémoire et pour permettre la coopération. Les architectures modernes possèdent de nombreux caches ayant des fonctions particulières, mais aussi plusieurs niveaux de cache. Les caches qui sont les plus près des unités de calcul des processeurs offrent les meilleures performances. Par exemple, un accès en lecture au cache L1 (succès de cache ou cache hit) consomme seulement 4 cycles du processeur sur une architecture Intel Core i7, alors qu'un accès mémoire (défaut de cache ou cache miss) consomme en moyenne 120 cycles [113]. Cependant, l'utilisation de caches introduit le problème de cohérence de cache : toutes les copies d'un même emplacement mémoire doivent être dans des états compatibles. Un protocole de cohérence de cache fait en sorte, entre autres, que les écritures effectuées à un emplacement mémoire partagé soient visibles de tous les autres processeurs, tout en garantissant une absence de conflits lors des opérations.
Il existe plusieurs types de protocoles de cohérence de cache :
— cohérence par espionnage 3 : la communication se fait au travers d'un bus central sur lequel les différentes transactions sont visibles par tous les processeurs ;
— cohérence par répertoire 4 : chaque processeur est responsable d'une partie de la mémoire et garde trace des processeurs ayant une copie locale dans leur cache. Ces protocoles fonctionnent par envoi de messages sur un réseau de communication.

3. Ces protocoles sont aussi parfois qualifiés de snoopy.
4. On qualifie parfois ces protocoles par le terme anglais directory based.

Bien que plus difficile à mettre en œuvre, cette dernière catégorie de protocoles est aujourd'hui la plus employée, pour des raisons de performance et de passage à l'échelle. Les architectures réelles sont souvent très complexes : elles implémentent plusieurs protocoles de cohérence de manière hiérarchique et utilisent plusieurs dizaines, voire centaines, de variables. Néanmoins, leur fonctionnement repose sur le même principe fondamental. Un protocole simple mais représentatif de cette famille a été donné par Steven German à la communauté académique [138]. L'exemple suivant, dénommé German-esque, est une version simplifiée de ce protocole.

[Figure 2.6 – Diagramme d'état du protocole German-esque : transitions d'un cache entre les états E, S et I, avec les mises à jour correspondantes de Shr[i] et Exg.]

L'état d'un processeur i est donné par la variable Cache[i] qui peut prendre trois valeurs : (E)xclusive (accès en lecture et écriture), (S)hared (accès en lecture seulement) ou (I)nvalid (pas d'accès à la mémoire). Les clients envoient des requêtes au responsable lorsqu'un défaut de cache survient : RS pour un accès partagé (défaut de lecture), RE pour un accès exclusif (défaut d'écriture). Le répertoire contient quatre informations : une variable booléenne Exg signale par la valeur true qu'un des clients possède un accès exclusif à la mémoire principale ; un tableau de booléens Shr, tel que Shr[i] est vrai si le client i possède une copie (avec un accès en lecture ou écriture) de la mémoire dans son cache ; Cmd contient la requête courante (ϵ marque l'absence de requête) dont l'émetteur est enregistré dans Ptr. Les états initiaux du système sont représentés par la formule logique suivante :

∀i. Cache[i] = I ∧ ¬Shr[i] ∧ ¬Exg ∧ Cmd = ϵ

signifiant que les caches de tous les processeurs sont invalides, qu'aucun accès n'a encore été donné et qu'il n'y a pas de requête à traiter. La figure 2.6 donne une vue assez haut niveau de l'évolution d'un seul processeur. Les flèches pleines montrent l'évolution du cache du processeur selon ses propres requêtes, alors que les flèches pointillées représentent les transitions résultant d'une requête d'un autre client. Par exemple, un cache va de l'état I à S lors d'un défaut de lecture : le répertoire lui accorde un accès partagé tout en enregistrant cette action dans le tableau Shr (Shr[i] := true).
De façon similaire, si un défaut en écriture survient dans un autre cache, le répertoire invalide tous les clients enregistrés dans Shr avant de donner l'accès exclusif. Cette invalidation générale a pour effet de faire passer les caches dans les états E et S à l'état I.

La modélisation qui est faite de ce protocole est assez immédiate et est donnée ci-dessous (figure 2.7). On peut toutefois remarquer qu'on s'intéresse ici seulement à la partie contrôle du protocole et qu'on oublie les actions d'écriture et de lecture réelles de la mémoire. La seule propriété qu'on souhaite garantir dans ce cas est que si un processeur a son cache avec accès exclusif, alors tous les autres sont invalides :

∀i,j. (i ≠ j ∧ Cache[i] = E) =⇒ Cache[j] = I

Le responsable du répertoire est abstrait et on ne s'intéresse qu'à une seule ligne de mémoire, donc l'état du répertoire est représenté avec des variables globales.

germanesque.cub

type msg = Epsilon | RS | RE
type state = I | S | E

(* Client *)
array Cache[proc] : state

(* Directory *)
var Exg : bool
var Cmd : msg
var Ptr : proc
array Shr[proc] : bool

init (z) { Cache[z] = I && Shr[z] = False && Exg = False && Cmd = Epsilon }

unsafe (z1 z2) { Cache[z1] = E && Cache[z2] <> I }

transition request_shared (n)
  requires { Cmd = Epsilon && Cache[n] = I }
  { Cmd := RS; Ptr := n ; }

transition request_exclusive (n)
  requires { Cmd = Epsilon && Cache[n] <> E }
  { Cmd := RE; Ptr := n; }

transition invalidate_1 (n)
  requires { Shr[n] = True && Cmd = RE }
  { Exg := False; Cache[n] := I; Shr[n] := False; }

transition invalidate_2 (n)
  requires { Shr[n] = True && Cmd = RS && Exg = True }
  { Exg := False; Cache[n] := S; Shr[n] := True; }

transition grant_shared (n)
  requires { Ptr = n && Cmd = RS && Exg = False }
  { Cmd := Epsilon; Shr[n] := True; Cache[n] := S; }

transition grant_exclusive (n)
  requires { Cmd = RE && Exg = False && Ptr = n && Shr[n] = False &&
             forall_other l. Shr[l] = False }
  { Cmd := Epsilon; Exg := True; Shr[n] := True; Cache[n] := E; }

Figure 2.7 – Code Cubicle du protocole de cohérence de cache German-esque

2.3 Non-atomicité

Dans la plupart des travaux existants sur les systèmes paramétrés, on fait la supposition que l'évaluation des conditions globales est atomique : la totalité de la garde est évaluée en une seule étape. Cette hypothèse est raisonnable lorsque les conditions globales masquent des détails d'implémentation qui permettent de faire cette évaluation de manière atomique. En revanche, beaucoup d'implémentations réelles d'algorithmes concurrents et de protocoles n'évaluent pas ces conditions instantanément, mais plutôt par une itération sur une structure de données (tableau, liste chaînée, etc.). Par exemple, l'algorithme de Dekker (voir Section 2.2.2) implémente le test ∃j ≠ i. x(j) de l'étiquette WANT par une boucle qui recherche un j tel que x(j) soit vrai. De même, on a modélisé la boucle d'attente de l'algorithme de la boulangerie (Section 2.2.3, algorithme 2, étiquette Choose) par une garde universelle dans la transition turn.

En supposant que ces conditions sont atomiques, on simplifie le problème mais cette approximation n'est pas conservatrice. En effet, l'évaluation d'une condition peut être entrelacée avec les actions des autres processus. Il est possible, par exemple, qu'un processus change la valeur de sa variable locale alors que celle-ci a déjà été comptabilisée pour une condition globale avec son ancienne valeur.
Il peut dans ce cas exister dans l'implémentation des configurations qui ne sont représentées par aucun état du modèle atomique. Si on veut prendre en compte ces éventualités dans notre modélisation, on se doit de refléter tous les comportements possibles dans le système de transition.

La vérification de propriétés d'exclusion mutuelle dans des protocoles paramétrés comme l'algorithme de la boulangerie a déjà été traitée par le passé [32, 114]. Ces preuves ne sont souvent que partiellement automatisées et font usage d'abstractions ad hoc. La première approche permettant une vérification automatique de protocoles avec gardes non atomiques a été développée en 2008 [5]. Les auteurs utilisent ici un protocole annexe de raffinement pour modéliser l'évaluation non atomique des conditions globales. L'approche qu'on présente dans la suite de cette section est identique en substance à celle de [5]. On montre cependant qu'on peut rester dans le cadre défini par Cubicle pour modéliser ces conditions non atomiques.

Comme les tests non atomiques sont implémentés par des boucles, on choisit de modéliser les implémentations de la figure 2.8. La valeur N représente ici le paramètre du système, c'est-à-dire la cardinalité du type proc.

∀j ≠ i. c(j) :
  j := 1 ;
  while (j ≤ N) do
    if j ≠ i ∧ ¬c(j) then return false ;
    j := j + 1
  end ;
  return true

∃j ≠ i. c(j) :
  j := 1 ;
  while (j ≤ N) do
    if j ≠ i ∧ c(j) then return true ;
    j := j + 1
  end ;
  return false

Figure 2.8 – Évaluation non atomique des conditions globales par un processus i

Pour une condition universelle, on parcourt tous les éléments (de 1 à N) et on s'assure qu'ils vérifient la condition c. Si on trouve un processus qui ne respecte pas cette condition, on sort de la boucle et on renvoie false. Malheureusement, dans Cubicle, on ne peut pas mentionner ce paramètre N explicitement (car le solveur SMT ne peut pas raisonner sur la valeur de la cardinalité des modèles). On ne peut dès lors pas utiliser de compteur entier. Pour s'en sortir, on construit une boucle avec un compteur abstrait pour lequel on peut seulement tester si sa valeur est 0 ou N 5. On peut également incrémenter, décrémenter, et affecter ce compteur à ces mêmes valeurs. Dans Cubicle, on modélise ce compteur par un tableau de booléens représentant un encodage unaire de sa valeur entière. Le nombre de cellules contenant la valeur true correspond à la valeur du compteur. Incrémenter ce compteur revient donc à passer une case de la valeur false à la valeur true.

Pour modéliser l'évaluation non atomique de la condition ∀j ≠ i. c(j) 6, on utilise un compteur Cpt. Comme on veut qu'il soit local à un processus, on a besoin d'un tableau bi-dimensionnel : le tableau Cpt[i] matérialise le compteur local au processus i. Le résultat de l'évaluation de cette condition sera stocké dans la variable Res[i], à valeurs dans le type énuméré {E,T,F}. Res[i] contient E initialement, ce qui signifie que la condition n'a pas fini d'être évaluée. Lorsqu'elle contient T, la condition globale a été évaluée à vrai, et si elle contient F, la condition globale est fausse. Les transitions correspondantes sont données figure 2.9.

5. On peut généraliser à tester si sa valeur est k ou N − k, où k est une constante positive entière.
6. L'encodage de l'évaluation de la condition duale ∃j ≠ i. c(j) est symétrique.
type result = E | T | F
array Cpt[proc,proc] : bool
array Res[proc] : result

transition start (i)
  requires { ... }
  { Res[i] := E;
    Cpt[x,y] := case | x = i : False | _ : Cpt[x,y] }
Commentaire : on initialise le compteur à 0 et on signale que la condition est en cours d'évaluation avec la variable locale Res.

transition iter (i j)
  requires { Cpt[i,j] = False && c(j) }
  { Cpt[i,j] := True }
Commentaire : la condition c(j) n'a pas encore été vérifiée. Comme elle est vraie, on incrémente le compteur.

transition abort (i j)
  requires { Cpt[i,j] = False && ¬c(j) }
  { Res[i] := F }
Commentaire : la condition c(j) n'a pas encore été vérifiée mais elle est fausse. On sort de la boucle, la condition globale est fausse.

transition exit (i)
  requires { forall_other j. Cpt[i,j] = True }
  { Res[i] := T }
Commentaire : toutes les conditions c(j) ont été vérifiées. On sort de la boucle, la condition globale est vraie.

Figure 2.9 – Encodage de l'évaluation non atomique de la condition globale ∀j ≠ i. c(j)

Avec cette modélisation, on spécifie juste que la condition locale c(j) doit être vérifiée pour tous les processus, indépendamment de l'ordre. Cette sous-spécification reste conservatrice. Il est toutefois possible d'ajouter des contraintes d'ordre sur la variable j si c'est important pour la propriété de sûreté. On peut remarquer qu'on se sert d'une garde universelle dans la transition exit. Ceci n'influe en rien sur la « non-atomicité » de l'évaluation de la condition car le quantificateur est seulement présent pour encoder le test Cpt[i] = N.

Remarque. Avec cette garde universelle, on risque de tomber dans le cas d'une fausse alarme à cause du traitement détaillé en section 3.3 du chapitre suivant. L'intuition apportée par le modèle de panne franche nous permet d'identifier ces cas problématiques. Si la sûreté dépend du fait qu'un processus puisse disparaître du système pendant l'évaluation d'une condition globale, alors on risque de tomber dans ce cas défavorable.

Les versions non atomiques des algorithmes sont bien plus compliquées que leurs homologues atomiques. Les raisonnements à mettre en œuvre pour dérouler la relation de transition (aussi bien en avant qu'en arrière) sont plus longs. C'est ce qui en fait des candidats particulièrement coriaces pour les procédures de model checking.

2.4 Logique multi-sortée et systèmes à tableaux

On a montré dans la section précédente que le langage de Cubicle permet de représenter des algorithmes d'exclusion mutuelle ou des protocoles de cohérence de cache à l'aide de systèmes de transition paramétrés. Le lecteur intéressé par la définition précise de la syntaxe concrète et des règles de typage de ce langage trouvera leur description en annexe A. Dans cette section, on définit formellement la sémantique des programmes Cubicle. Pour ce faire, on utilise le formalisme des systèmes à tableaux conçu par Ghilardi et Ranise [77]. Ainsi, on montre dans un premier temps que chaque transition d'un système paramétré peut simplement être interprétée comme une formule dans un fragment de la logique du premier ordre multi-sortée. Ce fragment est défini par l'ensemble des types et variables déclarés au début du programme (ainsi que ceux prédéfinis dans Cubicle). Dans un deuxième temps, on donne une sémantique opérationnelle à ces formules à l'aide d'une relation de transition entre les modèles logiques de ces formules.

On rappelle ici brièvement les notions usuelles de la logique du premier ordre multi-sortée et de la théorie des modèles qui sont utilisées pour définir les systèmes à tableaux de Cubicle. Des explications plus amples peuvent être trouvées dans des manuels traitant du sujet [92, 152]. Le lecteur familier avec ces concepts peut ne lire que la section 2.4.3, qui présente plus en détail la construction de ces systèmes.
Des explications plus amples peuvent être trouvées dans des manuels traitant du sujet [92, 152]. Le lecteur familier avec ces concepts peut ne lire que la section 2.4.3 qui présente plus en détail la construction des ces systèmes. 362.4 Logique multi-sortée et systèmes à tableaux 2.4.1 Syntaxe des formules logiques La logique du premier ordre multi-sortée est une extension classique qui possède essentiellement les mêmes propriétés que la logique du premier ordre sans sortes [149]. Son intérêt est qu’elle permet de partitionner les éléments qu’on manipule selon leur type. Dénition 2.4.1. Une signature Σ est un tuple (S,F ,R) où : — S est un ensemble non vide de symboles de sorte — F est un ensemble de symboles de fonction, chacun étant associé à un type de la forme — s pour les symboles d’arité 0 (zéro) avec s ∈ S. On appelle ces symboles des constantes. — s1 × . . . × sn → s pour les symboles d’arité n, avec s1,. . . ,sn,s ∈ S — R est un ensemble de symboles de relation (aussi appelé prédicats). On associe à chaque prédicat d’arité n un type de la forme s1 × . . . × sn avec s1,. . . ,sn ∈ S. Dans la suite on supposera que le symbole d’égalité = est inclus dans toutes les signatures qui seront considérées et qu’il a les types s × s quelque soit la sorte s ∈ S 7 . On notera par exemple par f : s1 × . . . × sn → s un symbole de fonction f dans F dont le type est s1 × . . . × sn → s. L’en-tête du chier Cubicle est en fait une façon de donner cette signature. Par exemple l’en-tête du code de l’exemple du mutex de la section 2.2.1 correspond à la dénition de la signature Σ suivante : en-tête Cubicle signature Σ = (S,F ,R) type state = Idle | Want | Crit array State[proc] : state var Turn : proc S = {state,proc} F = {Idle : state,Want : state, Crit : state, State : proc → state, Turn : proc} R = {= : state × state : proc × proc} Remarque. Le mot-clef var ne permet pas d’introduire une variable dans la logique mais permet de déclarer une constante logique avec son type. Le choix des mots-clefs var et array reète une vision plus intuitive des systèmes Cubicle comme des programmes. 7. On pourrait éviter cette formulation avec un symbole d’égalité polymorphe mais cela demande d’introduire des variables de type. 37Chapitre 2 Le model checker Cubicle Remarque. Cubicle connaît en interne les symboles de sorte proc, bool, int et real ainsi que les symboles de fonction +, −, les constantes 0,1,2,. . . ,0.0,1.0,. . . et les relations < et ≤. Par convention, ils apparaîtront dans la signature Σ seulement s’ils sont utilisés dans le système. On distinguera parmi ces signatures celles qui ne permettent pas d’introduire de nouveaux termes car il est important pour Cubicle de maîtriser la création des processus. Dénition 2.4.2. Une signature est dite relationnelle si elle ne contient pas de symboles de fonction. Dénition 2.4.3. Une signature est dite quasi-relationnelle si les symboles de fonction qu’elle contient sont tous des constantes. On dénit les Σ-termes, Σ-atomes (ou Σ-formules atomiques), Σ-littéraux et Σ-formules comme les expressions du langage déni par la grammaire décrite en gure 2.10. Les types des variables quantiées ne sont pas indiqués par commodité. Σ-terme : hti ::= c où c est une constante de Σ | f (hti, . . . , hti) où f est un symbole de fonction de Σ d’arité > 0 | ite (hφi, hti, hti) Σ-atome : hai ::= false | true | p(hti, . . . , hti) où p est un symbole de relation de Σ Σ-littéral : hli ::= hai | ¬hai Σ-formule : hφi ::= hli | hφi ∧ hφi | hφi ∨ hφi | ∀i. hφi | ∃i. 
Figure 2.10 – Grammaire de la logique

En plus de cette restriction syntaxique, on impose que les Σ-termes et Σ-atomes soient bien typés en associant des sortes aux termes :
1. Chaque constante c de sorte s ∈ S est un terme de sorte s.
2. Soient f un symbole de fonction de type s1 × . . . × sn → s, t1 un terme de sorte s1, . . . , et tn un terme de sorte sn ; alors f (t1, . . . , tn) est un terme de sorte s.
3. Soit p un symbole de relation de type s1 × . . . × sn. L'atome p(t1, . . . , tn) est bien typé ssi t1 est un terme de sorte s1, . . . , et tn est un terme de sorte sn.

On appelle Σ-clause une disjonction de Σ-littéraux. On dénote par Σ-CNF (resp. Σ-DNF) une conjonction de disjonctions de littéraux (resp. une disjonction de conjonctions de littéraux).

Conventions et notations. Pour éviter la lourdeur de la terminologie, on dénotera par termes, atomes (ou formules atomiques), littéraux, formules, clauses, CNF et DNF respectivement les Σ-termes, Σ-atomes (ou Σ-formules atomiques), Σ-littéraux, Σ-formules, Σ-clauses, Σ-CNF et Σ-DNF lorsque le contexte ne permet pas d'ambiguïté. Les termes de la forme ite (φcond, tthen, telse) correspondent à la construction conditionnelle classique if φcond then tthen else telse. On suppose l'existence d'une fonction elim_ite qui prend en entrée une formule sans quantificateurs dont les termes peuvent contenir le symbole ite et renvoie une formule équivalente sans ite. De plus, on suppose que elim_ite renvoie une formule en forme normale disjonctive (Σ-DNF). Par exemple,

elim_ite(x = ite (φ, t1, t2)) = (φ ∧ x = t1) ∨ (¬φ ∧ x = t2)

On notera par i¯ une séquence i1, i2, . . . , in. Pour les formules quantifiées, on écrira ∃x,y. φ (resp. ∀x,y. φ) pour ∃x∃y. φ (resp. ∀x∀y. φ). En particulier, on écrira ∃i¯. φ pour ∃i1∃i2 . . . ∃in. φ.

2.4.2 Sémantique de la logique

Définition 2.4.4. Une Σ-structure A est une paire (D, I) où D est un ensemble appelé le domaine de A (ou l'univers de A) et dénoté par dom(A). Les éléments de D sont appelés les éléments de la structure A. On notera |dom(A)| le cardinal du domaine de A et on dira qu'une structure A est finie si |dom(A)| est fini. I est l'interprétation qui :
1. associe à chaque sorte s ∈ S de Σ un sous-ensemble non vide de D ;
2. associe à chaque constante de Σ de type s un élément du sous-domaine I(s) ;
3. associe à chaque symbole de fonction f ∈ F d'arité n > 0 et de type s1 × . . . × sn → s une fonction totale I(f) : I(s1) × . . . × I(sn) → I(s) ;
4. associe à chaque symbole de relation p ∈ R d'arité n > 0 et de type s1 × . . . × sn une fonction totale I(p) : I(s1) × . . . × I(sn) → {true, false}.

Cette interprétation peut être étendue de manière homomorphique aux Σ-termes et Σ-formules – elle associe à chaque terme t de sorte s un élément I(t) ∈ I(s) et à chaque formule φ une valeur I(φ) ∈ {true, false}.

Définition 2.4.5. On appelle Σ-théorie T un ensemble (potentiellement infini) de Σ-structures. Ces structures sont aussi appelées les modèles de T.

Définition 2.4.6. On dit qu'un Σ-modèle M = (A, I) satisfait une formule φ ssi I(φ) = true, ce qu'on dénote par M |= φ.

Définition 2.4.7. Une formule φ est dite satisfiable dans une théorie T (ou T-satisfiable) ssi il existe un modèle M ∈ T qui satisfait φ. Une formule φ est dite conséquence logique d'un ensemble Γ de formules dans une théorie T (noté Γ |=T φ) ssi tous les modèles de T qui satisfont Γ satisfont aussi φ.

Définition 2.4.8.
Une formule φ est dite valide dans une théorie T (ou T-valide) ssi sa négation est T-insatisfiable, ce qui est dénoté par T |= φ ou ∅ |=T φ.

Définition 2.4.9. Soient A et B deux Σ-structures. A est une sous-structure de B, noté A ⊆ B, si dom(A) ⊆ dom(B).

Définition 2.4.10. Soient A une Σ-structure et X ⊆ dom(A). Il existe alors une unique plus petite sous-structure B de A telle que X ⊆ dom(B). On dit que B est la sous-structure de A générée par X et on note B = ⟨X⟩A.

Les deux définitions suivantes seront importantes pour caractériser la théorie des processus supportée par Cubicle.

Définition 2.4.11. Une Σ-théorie T est localement finie ssi Σ est finie et chaque sous-ensemble fini d'un modèle de T génère une sous-structure finie.

Remarque. Si Σ est relationnelle ou quasi-relationnelle alors toute Σ-théorie est localement finie.

Définition 2.4.12. Une Σ-théorie T est close par sous-structure ssi chaque sous-structure d'un modèle de T est aussi un modèle de T.

Exemple. La théorie ayant pour modèle la structure de domaine ℕ et de signature ({int}, {0, 1}, {=, ≤}), où ces symboles sont interprétés de manière usuelle (comme dans la théorie de l'arithmétique de Presburger), est localement finie et close par sous-structure. Si on étend cette signature avec le symbole de fonction +, alors elle n'est plus localement finie mais reste close par sous-structure. Une théorie ayant pour modèle une structure finie, avec une signature (_, ∅, {=, R}) où R est interprétée comme une relation binaire qui caractériserait un anneau, est localement finie mais n'est pas close par sous-structure.

Le problème de la satisfiabilité modulo une Σ-théorie T (SMT) consiste à établir la satisfiabilité de formules closes sur une extension arbitraire de Σ (avec des constantes). Une extension de ce problème, beaucoup plus utile en pratique, est d'établir la satisfiabilité modulo la combinaison de deux (ou plusieurs) théories.

Exemples de théories. La théorie de l'égalité (aussi appelée théorie vide, ou EUF) est la théorie qui a comme modèles tous les modèles possibles pour une signature donnée. Elle n'impose aucune restriction sur l'interprétation faite de ses symboles (ses symboles sont dits non interprétés). Les fonctions non interprétées sont souvent utilisées comme technique d'abstraction pour s'affranchir d'une complexité ou de détails inutiles. La théorie de l'arithmétique est une autre théorie omniprésente en pratique. Elle est utilisée pour modéliser l'arithmétique des programmes, la manipulation de pointeurs et de la mémoire, les contraintes de temps réel, les propriétés physiques de certains systèmes, etc. Sa signature est {0, 1, …, +, −, ∗, /, ≤}, étendue à un nombre arbitraire de constantes, et ses symboles sont interprétés de manière usuelle sur les entiers et les réels. Une théorie de types énumérés est une théorie ayant une signature Σ quasi-relationnelle contenant un nombre fini de constantes (les constructeurs). L'ensemble de ses modèles consiste en une unique Σ-structure dont chaque symbole est interprété comme un des constructeurs de Σ. Dans ce qui va suivre, on verra que ces théories seront utiles pour modéliser les points de programme des processus et les messages échangés par ces processus dans les systèmes paramétrés.

2.4.3 Systèmes de transition à tableaux

Cette section introduit le formalisme des systèmes à tableaux conçu par Ghilardi et Ranise [77]. Il permet de représenter une classe de systèmes de transition paramétrés dans un fragment restreint de la logique du premier ordre multi-sortée.
Pour ceci on aura besoin des théories suivantes :
— une théorie des processus TP sur une signature ΣP localement finie dont le seul symbole de sorte est proc, et telle que la TP-satisfiabilité est décidable sur le fragment sans quantificateurs ;
— une théorie d'éléments TE sur une signature ΣE localement finie dont le seul symbole de sorte est elem, et telle que la TE-satisfiabilité est décidable sur le fragment sans quantificateurs. TE peut aussi être l'union de plusieurs théories TE = TE1 ∪ … ∪ TEk ; dans ce cas TE a plusieurs symboles de sorte elem1, …, elemk ;
— la théorie d'accès TA sur une signature ΣA, obtenue en combinant la théorie TP et la théorie TE de la manière suivante : ΣA = ΣP ∪ ΣE ∪ Q, où Q est un ensemble de symboles de fonction de type proc × … × proc → elemi (ou de constantes de type elemi).
Étant donnée une structure S, on note par S|ty la structure S dont le domaine est restreint aux éléments de type ty. Les modèles de TA sont les structures S où S|proc est un modèle de TP, où S|elem est un modèle de TE et où chaque symbole de type proc × … × proc → elem est interprété comme une fonction totale de proc × … × proc → elem.
On suppose dans la suite que les théories TE et TP ne partagent pas de symboles, i.e. ΣE ∩ ΣP = {=} (seule l'égalité apparaît dans toutes les signatures).

Définition 2.4.13. Un système (de transition) à tableaux est un triplet S = (Q, I, τ) avec Q partitionné en Q0, …, Qm où :
— Qi est un ensemble de symboles de fonction d'arité i. Chaque f ∈ Qi a comme type proc × … × proc → elemf (avec i occurrences de proc). Les fonctions d'arité 0 représentent les variables globales du système. Les fonctions d'arité non nulle représentent quant à elles les tableaux (indicés par des processus) du système.
— I est une formule qui caractérise les états initiaux du système (où les variables de Q peuvent apparaître libres).
— τ est une relation de transition.
La relation τ peut être exprimée sous la forme d'une disjonction de formules quantifiées existentiellement par zéro, une ou plusieurs variables de type proc. Chaque composante de cette disjonction est appelée une transition et est paramétrée par ses variables existentielles. Elle met en relation les variables globales et tableaux d'états avant et après exécution de la transition. Si x ∈ Q est un tableau (ou une variable globale), on notera par x′ la valeur de x après exécution de la transition et par Q′ l'ensemble des variables et tableaux après exécution de la transition. La forme générale des transitions qu'on considère est la suivante :

t(Q, Q′) = ∃ī. γ(ī, Q) ∧ ⋀_{x∈Q} ∀j̄. x′(j̄) = δx(ī, j̄, Q)

où le premier conjoint γ(ī, Q) constitue la garde et la conjonction des mises à jour constitue l'action, avec :
— γ une formule sans quantificateurs 8 appelée la garde de t ;
— δx une formule sans quantificateurs appelée la mise à jour de x.
Les variables de Q peuvent apparaître libres dans γ et les δx. Cette formule est équivalente à la variante suivante où les fonctions x′ sont écrites sous forme fonctionnelle avec un lambda-terme :

t(Q, Q′) = ∃ī. γ(ī, Q) ∧ ⋀_{x∈Q} x′ = λj̄. δx(ī, j̄, Q)

8. On autorise une certaine forme de quantification universelle dans γ en section 3.3.

Intuitivement, une transition t met en jeu un ou plusieurs processus (les variables quantifiées existentiellement ī, ses paramètres) qui peuvent modifier l'état du système (cf. Section 2.5). Ici γ représente la garde de la transition et les δx sont les mises à jour des variables et tableaux d'état.
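Pour fixer les idées, voici une esquisse purement illustrative, en OCaml (le langage d'implémentation de Cubicle), de types de données pouvant représenter un tel système à tableaux. Les noms de types et de constructeurs sont hypothétiques et ne correspondent pas nécessairement au code réel de Cubicle ; les formules y sont volontairement très simplifiées.

(* Représentation très simplifiée des formules et des systèmes à tableaux. *)
type sort = Proc | Elem of string

type term =
  | Const of string                    (* constructeur d'un type énuméré, p.ex. Idle *)
  | Glob of string                     (* variable globale, p.ex. Turn *)
  | Var of string                      (* variable de processus, p.ex. i *)
  | Access of string * string list     (* accès à un tableau, p.ex. State[i] *)
  | Ite of formula * term * term       (* terme conditionnel ite(φ, t1, t2) *)

and formula =
  | True | False
  | Eq of term * term
  | Neq of term * term
  | And of formula list
  | Or of formula list
  | Forall of string list * formula
  | Exists of string list * formula

(* Mise à jour ∀j̄. x'(j̄) = δx(ī, j̄, Q), donnée par une construction case
   dont le dernier élément est le cas par défaut. *)
type update = {
  up_arr   : string;
  up_args  : string list;              (* les indices universels j̄ *)
  up_cases : (formula * term) list;
}

(* Une transition : paramètres existentiels ī, garde γ et mises à jour δx. *)
type transition = {
  tr_name    : string;
  tr_params  : string list;
  tr_guard   : formula;
  tr_updates : update list;
}

(* Un système à tableaux S = (Q, I, τ). *)
type system = {
  globals : (string * sort) list;      (* symboles d'arité 0 de Q *)
  arrays  : (string * sort) list;      (* symboles d'arité > 0, indicés par proc *)
  init    : formula;                   (* I *)
  trans   : transition list;           (* τ *)
}

Dans cette représentation, les quantificateurs ∃ī de la garde et ∀j̄ des mises à jour restent implicites : ils sont portés respectivement par les champs tr_params et up_args.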
Un tel système S = (Q, I, τ) est bien décrit par la syntaxe concrète de Cubicle et, de manière analogue, sa sémantique est une boucle infinie qui, à chaque tour, exécute une transition choisie arbitrairement dont la garde γ est vraie, et met à jour les valeurs des variables de Q en conséquence.
Soient un système S = (Q, I, τ) et une formule Θ (dans laquelle les variables de Q peuvent apparaître libres). Le problème de la sûreté (ou de l'atteignabilité) est de déterminer s'il existe une séquence de transitions t1, …, tn dans τ telle que

I(Q0) ∧ t1(Q0, Q1) ∧ … ∧ tn(Qn−1, Qn) ∧ Θ(Qn)

est satisfiable modulo les théories mises en jeu. S'il n'existe pas de telle séquence, alors S est dit sûr par rapport à Θ. Autrement dit, ¬Θ est une propriété de sûreté ou un invariant du système.

Exemple (Mutex). On prend ici l'exemple d'un mutex simple paramétré par son nombre de processus (dans le type proc), dont un aperçu plus détaillé a été donné en section 2.2.1. Pour cet exemple, on prendra pour TP la théorie de l'égalité de signature ΣP = ({proc}, ∅, {=}), dont le symbole de sorte est proc. On considère comme théorie des éléments l'union d'une théorie des types énumérés TE de signature ΣE = ({state}, {Idle, Want, Crit}, {=}), où Idle, Want et Crit sont les constructeurs de type state, et de la théorie TP. La théorie d'accès TA est définie comme la combinaison de TP et TE. Elle a pour signature ΣA = ΣP ∪ ΣE ∪ (_, {State, Turn}, ∅), où State est un symbole de fonction de type proc → state et Turn est une constante de type proc.
Avec le formalisme décrit précédemment, le système S = (Q, I, τ) représentant le problème du mutex s'exprime de la façon suivante. L'ensemble Q contient les symboles State et Turn. Les états initiaux du système sont décrits par la formule

I ≡ ∀i. State(i) = Idle

La relation de transition τ est la disjonction treq ∨ tenter ∨ texit avec :

treq ≡ ∃i. State(i) = Idle ∧ ∀j. State′(j) = ite(i = j, Want, State(j)) ∧ Turn′ = Turn
tenter ≡ ∃i. State(i) = Want ∧ Turn = i ∧ ∀j. State′(j) = ite(i = j, Crit, State(j)) ∧ Turn′ = Turn
texit ≡ ∃i. State(i) = Crit ∧ ∀j. State′(j) = ite(i = j, Idle, State(j))

Enfin la formule caractérisant les mauvais états du système est :

Θ ≡ ∃i,j. i ≠ j ∧ State(i) = Crit ∧ State(j) = Crit

La description de ce système faite dans la syntaxe concrète de Cubicle donnée en section 2.2.1 est l'expression directe de cette représentation. Pour plus de simplicité, on notera dans la section suivante les transitions avec le sucre syntaxique (de Cubicle) transition t(ī) requires { g } { a }. Par exemple, la première transition treq sera notée par transition treq(i) requires { State[i] = Idle } { State[j] := case i = j : Want | _ : State[j] }.

2.5 Sémantique

La sémantique d'un programme décrit par un système de transition à tableaux dans Cubicle est définie pour un nombre de processus donné. On fixe n la cardinalité du domaine proc. C'est-à-dire que tous les modèles qu'on considère dans cette section interprètent le type proc vers un ensemble à n éléments.
Remarque. On peut aussi voir un Σ-modèle M comme un dictionnaire qui associe les éléments de Σ aux éléments du domaine de M.
Soient S = (Q, I, τ) un système à tableaux et TA la combinaison des théories comme définie en section 2.4.3. Dans la suite de cette section, on verra les modèles de TA comme des dictionnaires associant tous les symboles de ΣA à des éléments de leur domaine respectif.
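Cette vision des modèles comme dictionnaires peut se transcrire, à titre d'esquisse hypothétique en OCaml, par la représentation suivante : un modèle associe une valeur à chaque variable globale et une fonction finie des indices de processus vers les valeurs à chaque tableau. Les noms (value, model, read_array) sont choisis pour l'illustration ; la configuration construite correspond à l'état initial du mutex pour deux processus utilisé plus loin.

(* Un modèle fini de TA pour n processus, vu comme un dictionnaire. *)
module SMap = Map.Make (String)

type value =
  | VProc of int            (* un élément de proc : #1, #2, ... *)
  | VConstr of string       (* un constructeur de type énuméré : Idle, Want, ... *)

type model = {
  nb_procs : int;                              (* cardinalité du domaine proc *)
  globals  : value SMap.t;                     (* p.ex. Turn ↦ #2 *)
  arrays   : (int list * value) list SMap.t;   (* p.ex. State ↦ [([1], Idle); ([2], Idle)] *)
}

(* Lecture d'une case de tableau : l'interprétation I(A)(ī). *)
let read_array (m : model) (a : string) (indices : int list) : value =
  List.assoc indices (SMap.find a m.arrays)

(* Une configuration initiale possible du mutex avec deux processus. *)
let m_init : model = {
  nb_procs = 2;
  globals  = SMap.singleton "Turn" (VProc 2);
  arrays   = SMap.singleton "State" [ ([1], VConstr "Idle"); ([2], VConstr "Idle") ];
}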
La première partie de ce chapitre donne au lecteur tous les éléments nécessaires pour réaliser un interprète du langage d'entrée de Cubicle et s'assurer qu'il est correct.

2.5.1 Sémantique opérationnelle

On définit la sémantique opérationnelle du système à tableaux S pour n processus sous forme d'un triplet Kⁿ = (Sn, In, −→) où :
— Sn est l'ensemble potentiellement infini des modèles de TA où la cardinalité de l'ensemble des éléments de proc est n ;
— In = {M ∈ Sn | M |= I} est le sous-ensemble de Sn dont les modèles satisfont la formule initiale I du système à tableaux S ;
— −→ ⊆ Sn × Sn est la relation de transition d'états définie par la règle suivante : si transition t(ī) requires { g } { a } est une transition de τ, si σ : ī ↦ proc est une substitution de ses paramètres, si M |= gσ et si A = all(a, proc), alors M −→ update(Aσ, M),
où all(a, proc) est l'ensemble des actions de a instanciées avec tous les éléments de proc et σ est une substitution des paramètres ī de la transition vers les éléments de proc. L'application de la substitution de variables sur les actions s'effectue sur les termes de droite. On note Aσ l'ensemble des actions de A auxquelles on a appliqué la substitution σ :
x := t ∈ A ⟺ x := tσ ∈ Aσ
A[ī] := case c1 : t1 | … | cm : tm | _ : tm+1 ∈ A ⟺ A[ī] := case c1σ : t1σ | … | cmσ : tmσ | _ : tm+1σ ∈ Aσ
update(Aσ, M) est le modèle obtenu après application des actions de Aσ. On définit une fonction d'évaluation ⊢I telle que :
M ⊢I e : v ssi l'interprétation de e dans M est v.
L'union des dictionnaires M1 ∪ M2 est définie comme l'union des ensembles de liaisons lorsque ceux-ci sont disjoints. Lorsque certaines liaisons apparaissent à la fois dans M1 et dans M2, on garde seulement celles de M2. On note la fonction update par le symbole ⊢up dans les règles qui suivent :
— (séquence) si M ⊢up a1 : M1 et M ⊢up a2 : M2, alors M ⊢up a1 ; a2 : M1 ∪ M2 ;
— (affectation) si M ⊢I t : v, alors M ⊢up x := t : M ∪ {x ↦ v} ;
— (case, condition fausse) si M ⊭ c1 et M ⊢up A[ī] := case c2 : t2 | … | _ : tn+1 : M′, alors M ⊢up A[ī] := case c1 : t1 | c2 : t2 | … | _ : tn+1 : M′ ;
— (case, condition vraie) si M |= c1, M ⊢I t1 : v1, M ⊢I A : fA et M ⊢I ī : vī, alors M ⊢up A[ī] := case c1 : t1 | c2 : t2 | … | _ : tn+1 : M ∪ {A ↦ fA ∪ {vī ↦ v1}} ;
— (case, cas par défaut) si M ⊢I t : v, M ⊢I A : fA et M ⊢I ī : vī, alors M ⊢up A[ī] := case _ : t : M ∪ {A ↦ fA ∪ {vī ↦ v}}.
Exécuter l'action x := t revient à changer la liaison de x dans le modèle M par la valeur de t, comme le montre la deuxième règle. L'avant-dernière règle définit quant à elle l'exécution d'une mise à jour du tableau A en position ī. Si la première condition c1 de la construction case est satisfaite dans le modèle courant, l'interprétation de la fonction correspondant à A est changée pour qu'elle associe maintenant la valeur de t1 aux valeurs de ī.
Remarque. La construction case du langage de Cubicle est similaire aux constructions traditionnelles de certains langages de programmation comme le branchement conditionnel switch … case … break de C, ou le match de ML. En effet, les instructions d'un cas sont exécutées seulement si toutes les conditions précédentes sont fausses. Le dernier cas _ permet de s'assurer qu'au moins une instruction est exécutée. En utilisant la forme logique de la section 2.4.3, une mise à jour avec case s'exprime sous la forme de termes ite (if-then-else) imbriqués. Une fois les ite éliminés, l'ensemble des conditions obtenues forme une partition (i.e. elles sont mutuellement insatisfiables et leur disjonction est valide).
Ces règles montrent notamment qu'on évalue les termes des affectations dans le modèle précédent et non dans celui qu'on est en train de construire.
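Les règles ⊢up précédentes peuvent se transcrire, à titre d'esquisse, par la fonction OCaml suivante, qui reprend les types value et model (et le module SMap) de l'esquisse précédente ; le nom action et sa structure sont hypothétiques. Le point important, conforme à la remarque qui précède, est que tous les membres droits sont évalués dans l'ancien modèle avant de construire le nouveau.

(* Une action déjà instanciée par la substitution σ : une cellule cible
   (variable globale si la liste d'indices est vide, case de tableau sinon)
   et le terme droit, représenté ici par son évaluation ⊢I dans un modèle. *)
type action = {
  target  : string;
  indices : int list;
  rhs     : model -> value;
}

(* update(Aσ, M) : toutes les valeurs sont d'abord calculées dans m
   (le modèle précédent), puis les liaisons sont remplacées, ce qui rend
   l'application des actions simultanée et atomique. *)
let update (actions : action list) (m : model) : model =
  let values = List.map (fun a -> (a, a.rhs m)) actions in
  List.fold_left
    (fun m' (a, v) ->
      match a.indices with
      | [] -> { m' with globals = SMap.add a.target v m'.globals }
      | _  ->
        let cells = try SMap.find a.target m'.arrays with Not_found -> [] in
        let cells = (a.indices, v) :: List.remove_assoc a.indices cells in
        { m' with arrays = SMap.add a.target cells m'.arrays })
    m values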
Le caractère « ; » dans les actions ne représente donc pas une séquence, car les actions d'une transition sont exécutées simultanément et de manière atomique.
Un système paramétré a alors toutes les sémantiques {Kⁿ | n ∈ ℕ}. On peut aussi voir la sémantique d'un système à tableaux paramétré comme une fonction qui à chaque n associe Kⁿ 9.
9. On peut remarquer que Kⁿ est en réalité une structure de Kripke infinie pour laquelle on omet la fonction de labellisation L des états de la structure de Kripke car elle correspond exactement aux modèles, i.e. L(M) = M.

2.5.2 Atteignabilité

Ici on ne s'intéresse qu'aux propriétés de sûreté des systèmes à tableaux, ce qui revient au problème de l'atteignabilité d'un ensemble d'états. Soit Kⁿ = (Sn, In, −→) un triplet exprimant la sémantique d'un système à tableaux S = (Q, I, τ) comme défini précédemment. On note par −→* la clôture transitive de la relation −→ sur Sn. On dit qu'un état s ∈ Sn est atteignable dans Kⁿ, et on note Reachable(s, Kⁿ), ssi il existe un état s0 ∈ In tel que s0 −→* s.
Soit une propriété dangereuse caractérisée par la formule Θ. On dira qu'un système à tableaux S est non sûr par rapport à Θ ssi
∃n ∈ ℕ. ∃s ∈ Sn. s |= Θ ∧ Reachable(s, Kⁿ)
Au contraire, on dira que S est sûr par rapport à Θ ssi
∀n ∈ ℕ. ∀s ∈ Sn. s |= Θ ⟹ ¬Reachable(s, Kⁿ)

2.5.3 Un interpréteur de systèmes à tableaux

À l'aide de la sémantique précédente, on peut définir différentes analyses sur un système à tableaux. Par exemple, un interpréteur de systèmes à tableaux est un programme dont les états sont des états de Kⁿ = (Sn, In, −→), qui démarre en un état initial de In et qui « suit » une séquence de transitions de −→. Comme dans la section précédente, l'interpréteur pour un système à tableaux S = (Q, I, τ) représente l'état du programme par un dictionnaire sur Q (i.e. un modèle de TA). On suppose ici que In est non vide, c'est-à-dire qu'il existe au moins un modèle de I où proc est de cardinalité n. Dans ce cas, on note Init la fonction qui prend n en argument et qui retourne un état de In au hasard.
L'interpréteur consiste en une procédure run (algorithme 3) prenant en argument le paramètre n et la relation de transition τ. Elle maintient l'état du système dans le dictionnaire M et exécute une boucle infinie qui modifie M en fonction des transitions choisies.

Algorithme 3 : Interpréteur d'un système à tableaux
Variables : M : l'état courant du programme (un modèle de TA)
procedure run(n, τ) :
begin
  M := Init(n);
  while true do
    let L = enabled(n, τ, M) in
    if L = ∅ then deadlock();
    let a, σ = select(L) in
    M := update(all(a, proc)σ, M);

La fonction enabled(n, τ, M) renvoie l'ensemble des transitions de τ dont la garde est vraie pour M. Autrement dit,
(transition t(ī) requires { g } { a }) ∈ enabled(n, τ, M) ⟹ M |= ∃ī. g(ī)
où g est la garde de la transition t et ī en sont les paramètres. Si aucune des transitions n'est possible, alors on sort de la boucle d'exécution avec la fonction deadlock. Si M correspond à un état final du système, le programme s'arrête ; sinon on est dans une situation d'interblocage (ou deadlock). La fonction select choisit de manière aléatoire une des transitions de L et retourne les actions a correspondantes ainsi qu'une substitution σ des arguments vers les éléments du domaine de proc telle que M |= g(ī)σ.
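À titre indicatif, l'algorithme 3 pourrait se transcrire par la boucle OCaml suivante ; c'est une esquisse dans laquelle les fonctions init_model, enabled, select, instantiate et update sont supposées fournies (elles sont passées en paramètres) et ne préjugent pas de l'implémentation réelle.

exception Deadlock

(* Interpréteur : boucle infinie qui choisit une transition activée,
   instancie ses actions avec la substitution retournée par select,
   puis met à jour le modèle courant de façon atomique. *)
let run ~init_model ~enabled ~select ~instantiate ~update (n : int) tau : unit =
  let m = ref (init_model n) in
  while true do
    match enabled n tau !m with
    | [] -> raise Deadlock               (* aucune garde n'est vraie dans !m *)
    | l ->
      let (t, sigma) = select l in       (* transition et substitution choisies au hasard *)
      m := update (instantiate t sigma) !m
  done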
Ensuite la fonction update(all(a, proc)σ, M) retourne un nouveau dictionnaire où les valeurs des variables et tableaux sont mises à jour en fonction des actions de l'instance de la transition choisie, comme défini précédemment.
Remarque. On peut aussi ajouter une instruction assert(¬(M |= Θ)) à l'algorithme 3, qui lève une erreur dès qu'on passe par un mauvais état, i.e. lorsque M |= Θ.
Exemple (Mutex). Prenons l'exemple du mutex de la section précédente avec deux processus #1 et #2. Une configuration de départ possible satisfaisant la formule initiale I est le modèle M ci-dessous. En exécutant la transition treq pour le processus #1, on change la valeur de State(#1) et on obtient le nouveau modèle M′ :
M = { dom(M) = {#1, #2} ; Turn ↦ #2 ; State ↦ { #1 ↦ Idle ; #2 ↦ Idle } ; … }
—— treq[i\#1] ——→
M′ = { dom(M′) = {#1, #2} ; Turn ↦ #2 ; State ↦ { #1 ↦ Want ; #2 ↦ Idle } ; … }
Maintenant que le lecteur peut se faire une idée du langage de description et de la sémantique utilisés pour les systèmes de transition paramétrés, on montre dans la suite de ce document comment construire un model checker capable de prouver la sûreté des exemples de la section 2.2. Les exemples présentés dans ce chapitre sont des systèmes jouets, dans le sens où ils représentent des problèmes de vérification formelle de taille assez réduite. Cependant les preuves manuelles de ces systèmes paramétrés ne sont pas pour autant triviales et les outils automatiques sont souvent mis au point sur ce genre d'exemples. On s'efforcera bien sûr d'expliquer comment développer des algorithmes qui passent à l'échelle sur des problèmes plus conséquents.

3 Cadre théorique : model checking modulo théories

Sommaire
3.1 Analyse de sûreté des systèmes à tableaux
3.1.1 Atteignabilité par chaînage arrière
3.1.2 Correction
3.1.3 Effectivité
3.2 Terminaison
3.2.1 Indécidabilité de l'atteignabilité
3.2.2 Conditions pour la terminaison
3.2.3 Exemples
3.3 Gardes universelles
3.3.1 Travaux connexes
3.3.2 Calcul de pré-image approximé
3.3.3 Exemples
3.3.4 Relation avec le modèle de panne franche et la relativisation des quantificateurs
3.4 Conclusion
3.4.1 Exemples sans existence d'un bel ordre
3.4.2 Résumé
3.4.3 Discussion

Le cadre théorique sur lequel repose Cubicle est celui du model checking modulo théories proposé par Ghilardi et Ranise [77]. C'est un cadre déclaratif dans lequel les systèmes manipulent un ensemble de tableaux infinis, d'où le nom de systèmes à tableaux. Un système est décrit par des formules logiques du premier ordre et des transitions gardées (voir Section 2.4.3).
Une des forces de cette approche réside dans l'utilisation des capacités de raisonnement des solveurs SMT et de leur support interne pour de nombreuses théories. On rappelle dans ce chapitre les fondements de cette théorie ainsi que certains résultats et théorèmes associés.
Dans la suite de ce chapitre on se donne un système à tableaux S = (Q, I, τ) et une formule Θ représentant des états dangereux. Rappelons que la relation τ peut s'exprimer sous la forme d'une disjonction de transitions paramétrées de la forme :

t(Q, Q′) = ∃ī. γ(ī, Q) ∧ ⋀_{x∈Q} ∀j̄. x′(j̄) = δx(ī, j̄, Q)

où γ(ī, Q) est la garde et où la conjonction des mises à jour forme l'action, avec γ et δx des formules sans quantificateurs. Ces restrictions sont très importantes car elles permettent à la théorie exposée dans ce chapitre de caractériser des conditions suffisantes pour lesquelles l'atteignabilité (ou la sûreté) est décidable. Cubicle implémente le cadre du model checking modulo théories mais va au-delà en relâchant certaines contraintes, au prix de la complétude et de la terminaison, tout en restant correct.

3.1 Analyse de sûreté des systèmes à tableaux

Cette section présente les conditions sous lesquelles l'analyse de sûreté d'un système à tableaux peut être mise en œuvre de façon effective. En particulier, on caractérise un fragment de la logique qui est clos par les opérations de l'algorithme d'atteignabilité arrière et on montre que cette approche est correcte.

3.1.1 Atteignabilité par chaînage arrière

Plusieurs approches sont possibles pour résoudre les instances du problème de sûreté (ou d'atteignabilité). Une première approche consiste à construire l'ensemble des états atteignables (par chaînage avant) à partir des états initiaux ; une autre approche, celle qui sera adoptée dans la suite, consiste à construire l'ensemble des états qui peuvent atteindre les mauvais états du système (atteignabilité par chaînage arrière).

Définition 3.1.1. La post-image d'une formule φ(X) par la relation de transition τ est définie par
Postτ(φ)(X′) = ∃X. φ(X) ∧ τ(X, X′)
Elle représente les états atteignables à partir de φ en une étape de τ.

Définition 3.1.2. La pré-image d'une formule φ(X′) par la relation de transition τ est définie par
Preτ(φ)(X) = ∃X′. τ(X, X′) ∧ φ(X′)
De manière analogue, la pré-image d'une formule φ(X′) par une transition t est définie par
Pret(φ)(X) = ∃X′. t(X, X′) ∧ φ(X′)
et donc Preτ(φ)(X) = ⋁_{t∈τ} Pret(φ)(X). La pré-image Preτ(φ) est donc une formule qui représente les états qui peuvent atteindre φ en une étape de transition de τ. La clôture de la pré-image Pre*τ(φ) est définie par :
Pre⁰τ(φ) = φ
Preⁿτ(φ) = Preτ(Preⁿ⁻¹τ(φ))
Pre*τ(φ) = ⋁_{k∈ℕ} Preᵏτ(φ)
La clôture de Θ par Preτ caractérise alors l'ensemble des états qui peuvent atteindre Θ. Une approche générale pour résoudre le problème d'atteignabilité consiste alors à calculer cette clôture et à vérifier si elle contient un état de la formule initiale. L'algorithme d'atteignabilité par chaînage arrière présenté ci-après fait exactement ceci de manière incrémentale. Si, pendant la construction de Pre*τ(Θ), on découvre qu'un état initial (un modèle de I) est aussi un modèle d'un des Preⁿτ(Θ), alors le système n'est pas sûr vis-à-vis de Θ. Le système est sûr si, a contrario, aucune des pré-images ne s'intersecte avec I. L'algorithme peut décider une telle propriété seulement si cette clôture est aussi un point fixe. Plusieurs briques de base sont nécessaires pour construire un tel algorithme.
La première est d’être capable de calculer les pré-images successives. Pour une recherche en avant, il faut pouvoir calculer Postn (I) ce qui est souvent impossible pour les systèmes paramétrés à cause de la présence de quanticateurs universels dans la formule initiale I. En revanche, le calcul du Pre est eectif dans les systèmes à tableaux si on restreint la forme des formules exprimant les états dangereux. Dénition 3.1.3. On appelle cube une formule de la forme ∃i¯.(∆ ∧ F ), où les i¯sont des variables de la théorie TP , ∆ est une conjonction de diséquations entre les variables de i¯, et F est une conjonction de littéraux. Proposition 3.1.1. Si φ est un cube, la formule Preτ (φ) est équivalente à une disjonction de cubes. 51Chapitre 3 Cadre théorique : model checking modulo théories Démonstration. Soit φ = ∃p.(∆ ∧ F ) un cube, Preτ (φ) est la disjonction des Pret (φ) pour chaque transition de t de τ . Prenons une transition t de τ , Pret (φ)(Q) = ∃Q ′ . t(Q,Q ′ ) ∧ ∃p.(∆ ∧ F (Q ′ )) se réécrit en ∃Q ′ . ∃i¯. γ (i¯,Q) ∧ ^ x∈Q x ′ = λj¯. δx (i¯,j¯,Q) ∧ ∃p.(∆ ∧ F (Q ′ )) (3.1) Par la suite, on utilisera la notation traditionnellement employée en combinatoire  A k  pour l’ensemble des combinaisons de A de taille k. Soit j¯ = j1,. . . jk . Étant donné x ∈ Q et x ′ = λj¯. δx (i¯,j¯,Q) une des mises à jour de la transition t, on notera σx ′ la substitution x ′ (pσ )\δx (i¯,pσ ,Q) pour tout pσ ∈  p k  . Maintenant, la formule ∃p.(∆ ∧ F (Q ′ ))[σx ′] correspond en essence à la réduction de l’application d’une lambda expression apparaissant dans l’équation (3.1). La réduction de toutes les lambda expressions conduit à la formule suivante : ∃Q ′ . ∃i¯. γ (i¯,Q) ∧ ∃p.(∆ ∧ F (Q ′ )[σx ′,x ′ ∈ Q ′ ]) F (Q ′ )[σx ,x ∈ Q] n’est pas tout à fait sous forme de cube car les σx peuvent contenir des termes ite. Notons Fd1 (i¯,Q) ∨ . . . ∨ Fdh (i¯,Q) = elim_ite(F (Q ′ )[σx ,x ∈ Q]) la disjonction résultant de l’élimination des ite des mises à jour. Les Fd (i¯,Q) sont des conjonctions de littéraux. En sortant les disjonctions, on obtient : ∃Q ′ . _ Fd (i¯,Q) ∈elim_ite(F (Q′ )[σx ,x∈Q]) ∃i¯. γ (i¯,Q) ∧ ∃p.(∆ ∧ Fd (i¯,Q)) et donc : Pret (φ)(Q) = _ Fd (i¯,Q) ∈ elim_ite(F (Q′ )[σx ,x∈Q]) ∃i¯.∃p. (∆ ∧γ (i¯,Q) ∧ Fd (i¯,Q)) Dans ce qui suit on supposera que la formule caractérisant les états dangereux Θ est un cube. La clôture construite par l’algorithme sera donc une disjonction de cubes et pourra être vue comme un ensemble de cubes V. On donne une analyse d’atteignabilité par chaînage arrière classique dans l’algorithme 4. Cet algorithme maintient une le à priorité Q de cubes à traiter et la clôture partielle ou l’ensemble des cubes visités V dont la disjonction des éléments caractérise les états pouvant atteindre Θ. On démarre avec en ensemble V vide matérialisant la formule false et une le Q où seule la formule Θ est présente. La fonction BWD calcule itérativement la clôture Pre∗ τ (Θ) et si elle termine, renvoie un des deux résultats possibles : 52 Segmentation supervisée d’actions à partir de primitives haut niveau dans des flux vid´eos Adrien Chan-Hon-Tong To cite this version: Adrien Chan-Hon-Tong. Segmentation supervis´ee d’actions `a partir de primitives haut niveau dans des flux vid´eos. Signal and Image Processing. Universit´e Pierre et Marie Curie - Paris VI, 2014. French. . 
Afin de prendre ce facteur en compte, nous avons développé une stratégie fondée sur le coût comme indicateur de performance, et nommée CAWA (Cost-AWare Algorithm, Algorithme Renseigné sur le Coût) dont le fonctionnement a été testé à travers des simulations. Tel que montré en section 1 de cette introduction, il n’existe pas encore de simulateur adéquat pour les utilisateurs de plateformes de Cloud Computing. Le logiciel Simizer a donc été développé pour permettre la simulation d’applications orientées services déployées sur des infrastructures Cloud. Ce simulateur a permis de simuler et tester les stratégies de distribution développées pour la plateforme Cloudizer, et d’étudier leur comportement à large échelle. 6Organisation du présent document Le reste de ce document est organisé en deux parties principales. La première partie, composée des chapitres 2 à 4, décrit les développements des différentes plateformes logicielles évoquées dans la section précédente. Le chapitre 2 présente les particularités du projet MCube et ce qui le différencie des plateformes de traitement de données multimédia existantes. Le chapitre 3 décrit le fonctionnement de la plateforme Cloudizer et son utilisation dans le cadre du projet MCUBE, puis le chapitre 4 décrit les particularités et le fonctionnement du simulateur Simizer. La seconde partie de ce document se concentre sur les stratégies de distribution de requêtes mises au point au cours de cette thèse. L’état de l’art en matière de répartition de charge est présenté dans le chapitre 5. les chapitres suivants décrivent respectivement la stratégie WACA, (chapitre 6), qui utilise des résumés compacts pour assurer une distribution des tâches en fonction de la localisation des données dans le système, et la stratégie CAWA (chapitre 7) qui se fonde sur une estimation du coût d’exécution des tâches pour effectuer la répartition. Le chapitre 8 présentera les conclusions et perspectives de l’ensemble de ces travaux. 78Chapitre 2 MCube : Une plateforme de stockage et de traitement de données multimédia 2.1 Introduction L’objectif technique du projet MCube (Multimedia 4 Machine 2 Machine) est de développer une technologie Machine à Machine pour la capture et l’analyse de données multimédia par des réseaux de capteurs avec des problématiques de faible consommation et de transmission GPRS/EDGE. Le projet couvre la chaîne complète : l’acquisition des données, la transmission, l’analyse, le stockage, et le service d’accès par le WEB. Il s’appuie sur les infrastructures M2M actuellement offertes par les opérateurs du secteur comme les réseaux 3G/GPRS pour la communication des données. Le but de cette plateforme est de fournir des services d’analyse de données multimédia à faible coût permettant diverses applications telles que la surveillance de cultures et de sites industriels (détections d’intrusions ou d’insectes nuisibles). 2.1.1 Architecture du projet MCube L’architecture globale de la technologie MCube correspond à l’état de l’art en matière d’architecture M2M et est résumée en figure 2.1. Les "passerelles" sont des systèmes embarqués permettant de piloter des périphériques d’acquisition de données multimédia comme des appareils photos ou des microphones USB. Les passerelles assurent la transmission des 92.1. INTRODUCTION Figure 2.1 – Architecture du projet MCube données collectées vers la plateforme MCube, via une connexion 3G si un accès Ethernet n’est pas disponible. La communication s’effectue via le protocole HTTP. 
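À titre d’illustration, une passerelle pourrait transmettre une capture à la plateforme par une simple requête HTTP POST, comme dans l’esquisse Java suivante ; le point d’entrée /api/captures et le chemin du fichier sont hypothétiques.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

// Esquisse : envoi d'une capture (photo, son) vers la plateforme via HTTP POST.
public class UploadCapture {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest requete = HttpRequest.newBuilder()
                .uri(URI.create("https://mcube.isep.fr/api/captures"))   // point d'entrée supposé
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("/data/capture_001.jpg")))
                .build();
        HttpResponse<String> reponse = client.send(requete, HttpResponse.BodyHandlers.ofString());
        System.out.println("Code de retour : " + reponse.statusCode());
    }
}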
La plateforme est un serveur accessible via le web, dont le rôle consiste à : Stocker les données collectées La plateforme assure le stockage des données multimé- dia capturées et remontées par les passerelles. Ceci nécessite un large espace de stockage redondant et distribué afin de limiter les risques de pertes de données, ainsi que des méthodes efficaces pour retrouver les données stockées. Configurer les passerelles Le système offre la possibilité de communiquer avec les passerelles afin de les mettre à jour dynamiquement et de changer différents paramètres tels que le type de capture à effectuer (sons, images ou vidéos) ou la fréquence de ces captures. Cette communication se fait via une interface de programmation prenant la forme d’un service web REST[Fie00], déployée sur les serveurs de la plateforme. Traiter et analyser les données multimédia Les utilisateurs du système MCube peuvent développer leurs propres programmes d’analyse de données, et les stocker sur la plateforme pour former leur propre bibliothèque de traitements. Ces traitements doivent pouvoir être exécutés par lots, en parallèle sur un grand nombre de fichiers, mais aussi en temps réel au fur et à mesure de la collecte des données. Par conséquent, mis à part le rôle de communication avec les passerelles, la plateforme MCube devrait reposer sur une base de données multimédia. Ces systèmes stockent et 102.1. INTRODUCTION traitent des données en volumes importants (plusieurs giga/téraoctets). Les données concernées peuvent être de types différents (sons, images, vidéos,...) et sont séparées en un nombre important de petits fichiers pouvant provenir de différents périphériques d’acquisition. De plus, ces données sont généralement peu ou pas structurées. Afin d’assurer l’efficacité des traitements sur les données multimédia, il est nécessaire de les exécuter de façon parallèle. À nouveau, le résultat de ces traitements doit être stocké dans le système, ce qui permet la réutilisation des données pour des analyses supplémentaires. La somme de toutes ces données exige une capacité de stockage importante. 2.1.2 Problématiques Une contrainte primordiale du projet MCube est d’aboutir à un service suffisamment flexible pour pouvoir être personnalisé à chaque scénario d’utilisation, dans le but d’optimiser le modèle économique du client en fonction de nombreux critères. Ainsi, les traitements des données multimédia pourront être faits aussi bien côté passerelle que côté plateforme suivant les critères à optimiser. La plateforme fournira des services d’aide à la décision permettant d’optimiser au mieux la solution. Les choix d’architecture logicielle décrits en section 2.4 reflètent ces besoins. La plateforme MCube permet d’exécuter des analyses poussées sur les données reçues des passerelles. Les algorithmes utilisés pour ces analyses sont souvent soit expérimentaux, soit développés spécifiquement pour certains clients, et ne sont pas disponibles dans les librairies de traitements de données multimédias existantes telles que Image Terrier[HSD11] ou OpenCV[Bra00]. La conséquence est que ces algorithmes ne sont pas toujours directement parallélisables. Or, la collecte d’un nombre important de fichiers dans les passerelles MCube impose de pouvoir être en mesure d’exécuter les traitements sur une grande quantité de données en parallèle. 
Cet état de fait a des conséquences sur les choix d’architecture à effectuer car il est nécessaire de fournir un accès aux données à la fois au niveau fichier pour que les données soient accessibles aux différents programmes de traitements et un accès automatisé, via une interface de programmation, pour permettre aux utilisateurs d’interroger leurs données en temps réel. 112.2. COMPOSANTS DES PLATEFORMES DE STOCKAGE DE DONNÉES MULTIMÉDIA Ce chapitre expose donc dans un premier temps les composants des systèmes de stockage et de traitement de données multimédias en section 2.2, et analyse les différentes plateformes existantes de ce domaine par rapport aux besoins de la plateforme MCube en section 2.3. La section 2.4 décrit la solution adoptée pour la mise au point de l’architecture de la plateforme MCube et les problématiques qui découlent de ces choix sont discutées en section 2.6. 2.2 Composants des plateformes de stockage de données multimédia Il est possible de définir une plateforme de traitement de données multimédias comme un système logiciel distribué permettant de stocker des fichiers contenant des données photographiques, vidéos ou sonores, et de fournir les services nécessaires à l’extraction et au stockage d’informations structurées issues de ces fichiers. Les systèmes d’analyse de données multimédia possèdent un ensemble hiérarchisé de composants permettant de transformer les données brutes en informations précises [CSC09, HSD11]. Le schéma 2.2 montre l’articulation de ces différents composants, que l’on retrouvera dans la plupart des systèmes décrits dans ce chapitre. Données multimédias Extraction descripteurs Stockage descripteurs Indexation Detection Index informations Figure 2.2 – Architecture pour la fouille de données multimédia Le premier composant de ces systèmes correspond à la couche de stockage des données. Ce composant doit assurer le stockage et la disponibilité des données brutes, c’est à dire les documents audiovisuels dans leur format d’origine (RAW, MPEG, AVI, WAV), mais aussi des données intermédiaires nécessaires pour les traitements (les descripteurs) et les résultats des algorithmes d’indexation et d’analyse. Les descripteurs permettent de décrire le contenu des documents audiovisuels stockés 122.2. COMPOSANTS DES PLATEFORMES DE STOCKAGE DE DONNÉES MULTIMÉDIA dans le système. Il s’agit de représentations intermédiaires des données multimédias, comme par exemple les valeurs obtenues après l’application d’une transformée de Fourrier à un enregistrement sonore. Il existe un nombre important de représentations différentes, chacune dépendant du type d’information que l’utilisateur souhaite extraire des données collectées. Les systèmes de stockage multimédia fournissent une bibliothèque d’algorithmes divers permettant de procéder à l’extraction de plusieurs types de représentations. À partir de ces descripteurs, l’information obtenue peut être traitée de plusieurs manières : elle peut être utilisée pour construire un index des données afin de faciliter la recherche de contenu dans la banque de données multimédia (indexation), ou bien les données peuvent être traitées pour en extraire des informations précises, à travers l’utilisation d’algorithmes de détection d’objets, d’artefacts ou de mouvements (détection). Les composants génériques décrits dans cette section sont présents dans la plupart des systèmes évoqués dans la suite de ce chapitre. 
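Ces composants peuvent se représenter, de façon très schématique, par les interfaces suivantes (noms hypothétiques), qui reprennent l’enchaînement de la figure 2.2 :

import java.nio.file.Path;
import java.util.List;

// Esquisse des composants génériques de la figure 2.2 (noms d'interfaces hypothétiques).
interface DescriptorExtractor {
    // Produit un descripteur (p. ex. spectre obtenu par transformée de Fourier) à partir d'un fichier brut.
    double[] extract(Path fichierBrut);
}

interface DescriptorStore {
    void save(Path fichierBrut, double[] descripteur);   // stockage des descripteurs intermédiaires
    List<double[]> loadAll();
}

interface Indexer {
    void index(Path fichierBrut, double[] descripteur);  // construction de l'index de recherche
}

interface Detector {
    // Extraction d'informations précises (objets, artefacts, mouvements) à partir d'un descripteur.
    List<String> detect(double[] descripteur);
}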
Chaque système doit cependant relever un certain nombre de défis pour parvenir à indexer ou extraire des informations des contenus. 2.2.1 Stockage de données multimédia Les données multimédia sont dans la plupart des cas des données extrêmement riches en informations et donc très volumineuses. Les données ainsi stockées doivent aussi faire l’objet de traitements et pré-traitements, dont il faut stocker les résultats pour pouvoir les analyser à l’aide d’outils appropriés. Les volumes de stockage atteints deviennent très vite handicapant pour les systèmes de gestion de base de données traditionnels. Différentes stratégies de stockage distribué peuvent être utilisées pour résoudre ce problème. Les données multimé- dias recouvrent généralement des espaces disque importants et sont difficilement compressibles sans perte d’information. Par conséquent, la distribution des données sur plusieurs machines à la fois apparaît comme la solution simple et économique. Plusieurs stratégies coexistent dans le domaine pour parvenir à ce but : les données peuvent être stockées dans un système de fichiers distribué [SKRC10], dans une base de données distribuée [Kos05] ou encore dans une base de données distribuée non relationnelle [Cor07, CDG+06]. 132.2. COMPOSANTS DES PLATEFORMES DE STOCKAGE DE DONNÉES MULTIMÉDIA Système de fichiers distribués Un système de fichiers distribué est un système dans lequel l’espace disque de plusieurs machines est mis en commun et vu par toutes les machines du système comme une seule entité de manière transparente. Cela permet d’augmenter l’espace disponible en partageant la capacité des disques de chaque machine du système, ou d’augmenter la fiabilité du système en autorisant la réplication des données. Des exemples de systèmes de fichiers distribués sont le Network File System (NFS) [SCR+00], XTreemFS[HCK+07] ou encore le Hadoop Distributed File System [SKRC10]. Cette approche est utilisée dans de nombreux projets de stockage et de traitement de données multimédia. Par exemple, les projets RanKloud [Can11] et ImageTerrier [HSD11] reposent sur le système de fichiers distribué issu du projet Hadoop, le Hadoop Distributed File System [SKRC10]. Ce système de fichiers a la particularité de ne permettre que l’ajout et la suppression de fichiers. La modification des données n’est pas possible. Cette limitation ne pose pas de problème dans les systèmes de stockage et traitement de données multimédia, car les fichiers bruts ne font pas l’objet de modifications internes, mais sont copiés puis lus plusieurs fois de suite. Ce type de système est particulièrement adapté lorsque les données doivent être traitées en parallèle sur plusieurs machines par divers programmes, comme dans le système de traitement de données parallèle Hadoop [Whi09]. Stockage en base de données La solution la plus directe pour fournir l’accès à des données dans une plateforme de services est la mise au point d’une base de données relationnelle. D’après [Kos05] l’arrivée dans les années 90 des systèmes de bases de données orientés objets tels qu’Informix 1 ou Oracle 2 a permis l’émergence de systèmes plus efficaces pour représenter les données multimédia. Les standards MPEG-7 [MPE12b] et MPEG- 21 [Mpe12a], par exemple, ont fourni un cadre de représentation des données multimédia pouvant servir de référence pour la mise au point de schémas de base de données spéci- fiques. C’est cette approche qui a été retenue dans le projet de “base de donnée MPEG-7”, développée par M. Döller et décrite dans [Döl04]. 
En utilisant un système de gestion de bases de données objet, M. Döller a créé un nouveau schéma respectant le format de re- 1. http://www.ibm.com/software/data/informix/, consulté le 5 octobre 2013 2. http://www.oracle.com/us/products/database/overview/index.html, consulté le 5 octobre 2013 142.2. COMPOSANTS DES PLATEFORMES DE STOCKAGE DE DONNÉES MULTIMÉDIA présentation fourni par le standard MPEG-7. Afin de parvenir à utiliser le langage SQL pour rechercher les données, les méthodes d’indexation ont aussi été étendues pour pouvoir indexer des données multi-dimensionnelles. Ceci permet aux utilisateurs de bénéficier de tous les mécanismes des bases de données traditionnelles et notamment d’utiliser le langage SQL pour la manipulation et la recherche de données multimédia. Les fichiers sont stockés soit en tant qu’objets binaires (BLOBs), comme simples champs d’une table, soit dans le système de fichiers, et seul le chemin vers les données figure dans la base. La principale limite de cette approche réside dans le fait que le système soit implémenté comme une extension du système commercial de bases de données Oracle. Il est donc difficilement envisageable d’ajouter de nouvelles fonctionnalités au système, en raison de l’interface de programmation de bas niveau nécessaire à ces changements. L’autre limite de cette approche est que le système n’est pas distribué, or, le volume de données généralement associé avec les bases de données multimédia peut nécessiter un fonctionnement réparti afin de multiplier la capacité de stockage et de traitement, ce qui n’est pas le cas dans ces travaux. Bases de données distribuées : Les bases de données distribuées permettent d’agréger la capacité de stockage de plusieurs machines au sein d’un seul système de base de données. Retenue par K. Chatterjee et al. [CSC09], cette approche consiste à utiliser les capacités existantes de systèmes de bases de données distribuées, comme par exemple MySql Cluster 3, et à adapter la base de données en ajoutant un index réparti permettant de retrouver rapidement les données recherchées sur l’ensemble des machines. IrisNet [GBKYKS03] est un réseau de capteurs pair à pair sur lequel les images et données des différents capteurs sont stockées dans une base de données XML. Les nœuds sont divisés en deux catégories : les "Sensing Agents" (SA) qui sont des machines de puissance moyenne reliées aux capteurs (webcam, météo, . . . ) et les "Organizing Agents" (OA) qui sont des serveurs sur lesquels sont déployés les services du réseau de capteurs. Les capteurs eux-mêmes sont des machines de puissance moyenne, avec peu ou pas de contraintes environnementales, et un espace de stockage important. 3. http://www.mysql.com, consulté le 5 octobre 2013 152.2. COMPOSANTS DES PLATEFORMES DE STOCKAGE DE DONNÉES MULTIMÉDIA Ces nœuds appelés “Sensing Agents” sont utilisés pour déployer des “Senselets”, qui filtrent les données issues des capteurs. La base de données est distribuée sur les OA. Les utilisateurs peuvent requêter la base de données en utilisant le langage XPATH (langage de requêtage de données XML). Ces deux approches reposent sur les systèmes de base de données traditionnels fournissant de fortes garanties de cohérence des données, suivant le principe ACID (Atomicité, Cohérence, Isolation, Durabilité). Or, il a été démontré que lorsque la taille d’un système distribué augmente, fournir une garantie sur la cohérence des données du système ne permet pas d’assurer à la fois disponibilité et la performance de celui-ci. 
C’est le théorème de CAP, Consistency, Availability et Partition Tolerance (Cohérence, Disponibilité et Tolérance aux partitions) démontré par Gilbert et Lynch dans [GL02]. Les systèmes multimédias distribués ne nécessitent pas de supporter de fortes garanties de cohérence sur les données : les fichiers à traiter ne sont écrits qu’une seule fois puis lus à plusieurs reprises pour mener à bien les analyses voulues. Par conséquent, certains acteurs ont construit leur propre système de gestion de données multimédia en s’abstrayant de ces garanties, comme Facebook 4 l’a fait avec le système Haystack [Vaj09]. Les composants de ce système ne reposent pas sur une base de données relationnelle mais sur un index distribué permettant de stocker les images dans des fichiers de grandes tailles (100Go), ce qui permet de grouper les requêtes et de réduire les accès disques nécessaires à l’affichage de ces images. La particularité de ce système est que les images y sont peu recherchées, mais sont annotées par les utilisateurs pour y ajouter des informations sur les personnes présentes dans l’image et la localisation de la prise de vue. Un autre exemple de base de données distribuée utilisée pour stocker les données multimédia est celui de Youtube 5, qui utilise la base de données orientée colonnes BigTable [CDG+06, Cor07], pour stocker et indexer les aperçus de ses vidéos. Agrégation de bases de données : Le canevas AIR créé par F. Stiegmaier et al. [SDK+11] se concentre sur l’extensibilité en utilisant un intergiciel pour connecter plusieurs bases de données entre elles, et utilise des requêtes au format MPEG-7 [MPE12b]. 4. http://facebook.com, consulté le 5 octobre 2013 5. http://youtube.com, consulté le 5 octobre 2013 162.2. COMPOSANTS DES PLATEFORMES DE STOCKAGE DE DONNÉES MULTIMÉDIA Cependant ces travaux restent dirigés sur la recherche de contenus plutôt que sur la détection d’éléments précis dans les images. Les données et leurs métadonnées (descripteurs) sont réparties sur différents systèmes de gestion de bases de données (SGBD) pouvant fonctionner sur différentes machines. Lorsque le système reçoit une requête, deux cas se présentent : soit les nœuds sont homogènes et la requête est envoyée à chaque machine puis les résultats sont fusionnés et / ou concaténés, soit les nœuds sont des systèmes différents (bases de données différentes) et seule la partie la plus adaptée de la requête est transmise au noeud le plus approprié. Les intergiciels LINDO[BCMS11] et WebLab[GBB+08] fournissent des modèles de représentations de données communs à plusieurs systèmes et permettent les transferts de données entre des services de traitements adoptant ce modèle. Ces systèmes ne reposent sur aucun système de stockage particulier mais permettent d’agréger différentes sources de données et différents services de traitements hétérogènes. La problématique de cette approche est la nécessité de devoir adapter les services et les traitements au modèle exposé par les interfaces d’un tel système. Une grande variété de solutions existent au problème du stockage des données multimédia. Un critère de choix possible est le type d’utilisation qui sera fait des données : par exemple un système d’extraction des connaissances fonctionnera mieux avec un système de fichiers distribué dans lequel les données sont accessibles de manière transparente, au contraire d’une base de donnée SQL qui requiert une interface particulière pour accéder aux données stockées. 
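L’esquisse suivante illustre ce mode d’accès transparent au niveau fichier, ici avec l’API Java du système de fichiers distribué HDFS ; l’adresse du NameNode et les chemins sont purement illustratifs.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Esquisse : parcours des fichiers d'un utilisateur stockés sur HDFS.
public class ListUserFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020");   // adresse supposée du NameNode
        try (FileSystem fs = FileSystem.get(conf)) {
            // Les fichiers remontés par les passerelles se listent comme sur un système de fichiers local.
            for (FileStatus statut : fs.listStatus(new Path("/mcube/utilisateur1/photos"))) {
                System.out.println(statut.getPath() + " (" + statut.getLen() + " octets)");
            }
        }
    }
}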
Dans le cadre du projet MCube l’accès aux données doit être possible de plusieurs manières : il faut pouvoir fournir un accès permettant de traiter les données en parallèle ainsi qu’une interface permettant de requêter les données générées par les ré- sultats des analyses effectuées. Le système se rapprochant le plus de ce cas d’utilisation est donc l’utilitaire HADOOP [Whi09], qui fournit à la fois un système de haut niveau pour requêter les données et un accès de bas niveau via le système de fichiers distribué. 2.2.2 Distribution des traitements Un volume important de données à analyser se traduit par la nécessité de paralléliser le traitement des données, de manière à réduire les temps d’exécution. Cependant, tous 172.2. COMPOSANTS DES PLATEFORMES DE STOCKAGE DE DONNÉES MULTIMÉDIA les systèmes de stockage de données multimédia ne permettent pas de traitement parallèle des données. Dans le domaine des traitements de données multimédia il est possible de distinguer deux manières de traiter les données[Can11] : les traitements à la chaîne (Multiple Instruction Multiple Data, MIMD) et les traitements parallèles (Single Instruction Multiple Data, SIMD). Traitements à la chaîne : Multiple Instruction Multiple Data Le système est constitué de plusieurs unités, chacune exécutant une étape particulière d’une chaine de traitements. Les données passent d’une étape à l’autre en étant transférées d’une entité à l’autre, de sorte que plusieurs données sont à différents stades de traitement au même instant. Le projet WebLab [GBB+08] est une architecture de traitement de données multimé- dia orientée service qui permet de faciliter le développement d’applications distribuées de traitement multimédia. Cette plateforme repose sur un modèle de données commun à tous ses composants représentant une ressource multimédia. Ce modèle ne repose pas sur un standard mais permet de créer des ressources conformes au standard MPEG-7. La raison de ce choix est que le standard MPEG-7 a été créé en 2002 et ne permet pas de bénéficier des dernières avancées en matière de descripteurs de données multimédias. Cette repré- sentation générique permet d’associer à chaque ressource des “annotations” représentant le contenu extrait de la ressource (appelés “morceaux de connaissance”). Les services disponibles se divisent en services d’acquisition, services de traitements, services de diffusion. Le modèle de données commun permet à ces différents services de pouvoir être composés pour créer des flux de traitements, de l’acquisition à la diffusion du contenu multimédia. Cette solution se veut très flexible et générique mais ne propose pas de solution en ce qui concerne la répartition des traitements. Le système SAPIR (Search in Audio-visual content using Peer-to-peer Information Retrieval) [KMGS09] est un moteur de recherche multimédia implémentant la recherche par l’exemple : il permet de retrouver à partir d’un document multimédia donné les documents similaires ou correspondant au même objet. SAPIR utilise le Framework apache UIMA [UIM12] pour extraire les informations nécessaires à l’indexation des données au 182.2. COMPOSANTS DES PLATEFORMES DE STOCKAGE DE DONNÉES MULTIMÉDIA format MPEG-7. 
Les flux ou fichiers reçus sont divisés en plusieurs modalités : par exemple dans le cas d’une vidéo, un “splitter” va séparer le son des images, puis la suite d’images va aussi être divisée en plusieurs images fixes, chaque élément ainsi extrait va ensuite être traité par un composant spécifique pour le son, la vidéo et les images le tout de façon parallèle. Une fois la phase d’extraction terminée le fichier d’origine est recomposé en intégrant les métadonnées qui en ont été extraites. UIMA rend la main au système SAPIR, qui va insérer le fichier dans son index distribué en se basant sur les métadonnées que le fichier contient désormais. Le système utilise une architecture pair à pair pour stocker et répartir les données. Cependant, la dépendance du système SAPIR avec Apache UIMA, destine ce système uniquement à la recherche de contenus multimédia plutôt qu’à servir de plateforme de traitement de données multimédia générique. Traitement en parallèle : Single Instruction Multiple Data (SIMD) Ce type de système sépare les étapes du traitement dans le temps plutôt que dans l’espace : à chaque étape du traitement, toutes les machines du système exécutent le même traitement en parallèle sur l’ensemble des données. C’est le modèle suivi par le framework Hadoop [Whi09] pour traiter les données. Ce framework générique de traitement de données massives en parallèle est utilisé dans plusieurs systèmes de traitements de données multimédia. Par exemple, les travaux menés dans le cadre du projet RanKloud [Can11] utilisent et étendent le framework HADOOP [Whi09] pour répondre à des requêtes de classement, ainsi que pour permettre des jointures dans les requêtes portant sur plusieurs ensembles de données. Pour cela le système échantillonne les données et évalue statistiquement leur répartition pour optimiser certaines procédures (notamment les opérations de jointures). Un index des données locales est construit sur chaque machine du système, permettant à chaque nœud de résoudre les requêtes localement et d’envoyer les résultats à une seconde catégorie de nœuds qui assureront la finalisation des requêtes. Cependant ce système est conçu pour analyser une grande quantité de données et non pas pour répondre à des requêtes en temps réel, et se destine plutôt à des requêtes de recherche de contenus. Le projet Image Terrier [HSD11] est un système d’indexation et de recherche d’images 192.2. COMPOSANTS DES PLATEFORMES DE STOCKAGE DE DONNÉES MULTIMÉDIA et de vidéos fondé sur le système de recherche textuelle Terrier 6. Ce système utilise lui aussi la technique de distribution des traitements fournie par le canevas HADOOP afin de construire incrémentalement l’index des images. Ce système présente l’avantage d’être très ouvert, mais il est uniquement destiné à la recherche de contenus. Limites des outils existants Ces deux types de répartitions ne sont pas exclusives et les deux approches peuvent être combinées de manière à augmenter l’efficacité d’un système. Il est ainsi possible de créer des flux de travaux enchaînant différentes étapes de traitement ou les exécutant en parallèle. La principale limite des outils existants réside dans le fait que les traitements répartis font partie d’une bibliothèque logicielle utilisant un canevas donné, comme ImageTerrier [HSD11] et Hadoop. Or, dans le projet MCube, les traitements à répartir sont codés et compilés sans suivre d’API ou de canevas logiciel particulier. 
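Dans ce cas, la parallélisation doit se faire autour du programme, considéré comme une boîte noire ; l’esquisse Java suivante (chemins et nom d’exécutable hypothétiques) en montre l’idée : chaque fichier est traité par une invocation indépendante du programme compilé, les invocations étant exécutées en parallèle par un pool de tâches.

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Esquisse : exécution en parallèle d'un programme "boîte noire" sur un lot de fichiers.
public class BatchRunner {
    public static void main(String[] args) throws Exception {
        List<Path> fichiers;
        try (Stream<Path> s = Files.list(Paths.get("/data/photos"))) {
            fichiers = s.collect(Collectors.toList());
        }
        ExecutorService pool = Executors.newFixedThreadPool(8);   // par exemple, un traitement par cœur
        for (Path f : fichiers) {
            pool.submit(() -> {
                // Le programme compilé est invoqué tel quel, un fichier à la fois.
                Process p = new ProcessBuilder("/opt/traitements/detect_tomates", f.toString())
                        .redirectOutput(new File(f + ".out"))
                        .start();
                return p.waitFor();
            });
        }
        pool.shutdown();
    }
}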
Il est aussi parfois nécessaire d’exécuter les traitements à la demande au fur et à mesure de l’arrivée des données. Dans ce contexte il apparait donc nécessaire de faire appel à des plateformes de plus haut niveau permettant d’adapter les traitements à leur contexte d’exécution. 2.2.3 Plateformes génériques de traitements de données multimédias La recherche en matière de traitement des données multimédia se concentre aussi pour une large part sur la conception de nouveau algorithmes d’extraction d’informations. Le système doit fournir un catalogue de services permettant de procéder aux extractions de données et aux analyses voulues par l’utilisateur. Cette librairie doit être extensible afin de permettre aux utilisateurs d’y implanter leurs propres analyses. L’ouverture d’un système se mesure à sa disponibilité, et à sa capacité à s’interfacer avec d’autres systèmes de manière à pouvoir être utilisé dans différents contextes. Plusieurs systèmes de stockage évoqués dans ce chapitre sont décrits dans la littérature mais ne sont pas disponibles pour évaluation, du fait du peu de cas d’utilisations qu’ils permettent de résoudre. Par exemple le système Haystack développé par Facebook ne fait pas l’objet d’un développement à l’extérieur de l’entreprise. Le projet RanKloud bien que décrit dans diverses publications, n’est pas 6. http://terrier.org/, consulté le 5 octobre 2013 202.2. COMPOSANTS DES PLATEFORMES DE STOCKAGE DE DONNÉES MULTIMÉDIA publiquement disponible. De même, peu de projets sortent du simple cadre du prototype de recherche, comme par exemple le projet LINDO [BCMS11]. Le projet LINDO [BCMS11] a pour objectif de développer une infrastructure géné- rique de traitement de données multimédia permettant de distribuer les traitements et les données, en assurant l’interopérabilité de différents systèmes de traitement de données multimédias. Il a permis la mise au point d’une infrastructure d’indexation générique et distribuée. Pour cela, un modèle de données représentatif a été créé pour les données multimédia et pour les algorithmes d’indexation de ces données. Les données sont donc réparties sur un système de fichiers distribué et le système sélectionne l’algorithme d’indexation le plus approprié en fonction des requêtes les plus féquemment envoyées par l’utilisateur. Ce système a été appliqué à la vidéosurveillance, pour détecter des événements dans les flux vidéo. C’est une plateforme multi-utilisateurs, qui peut être utilisée dans plusieurs domaines à la fois, et permet de représenter les données au format MPEG-7 ou MPEG-21. Cependant les informations ne sont extraites qu’à la demande explicite des utilisateurs (à travers une requête) et traversent toutes un serveur central qui contrôle les machines indexant les contenus. Ce système risque donc de limiter l’élasticité de la solution. L’élasticité d’un système est sa capacité à s’adapter à l’apparition ou la disparition dynamique de nouvelles ressources. Cette notion est particulièrement importante dans le contexte des plateformes de Cloud Computing, où les ressources peuvent être allouées et disparaître rapidement. Le projet WebLab est un projet libre 7, il est construit sur le même principe d’interface et de modèle d’échange de données que le projet LINDO. Le projet WebLab ne modélise pas les traitements, mais seulement le format des données à échanger entre différentes plateformes de traitements hétérogènes. 
Ce système repose sur le bus de service Petals 8 pour ses communications, or ce composant semble présenter des problèmes de performance, comme le montrent les résultats de différents bancs d’essai successifs 9. Ces deux systèmes servent donc d’intergiciels pour la communication entre différentes plateformes de traitements et permettent l’agrégation de différents sites ou grilles spécialisés dans certaines étapes de traitement. 7. http://weblab-project.org/, consulté le 5 octobre 2013 8. http://petals.ow2.org/, consulté le 5 octobre 2013 9. http://esbperformance.org/display/comparison/ESB+Performance+Testing+-+Round+6, consulté le 5 octobre 2013 Le propos du projet MCube est au contraire de concentrer les données des différents utilisateurs afin de mutualiser la plateforme de traitement et de permettre des économies d’échelle en matière de stockage et de capacité de calcul. Ce projet se situe à un niveau intermédiaire entre les systèmes de traitements de données de bas niveau (ImageTerrier, Hadoop, ...) et les plateformes de fédération que sont les projets LINDO et WebLab. Il s’agit donc d’un système multi-tenant, car il sert plusieurs clients en mutualisant ses ressources.
2.3 Analyse
Les systèmes de stockage et de traitements multimédias décrits ici possèdent diverses architectures. Les architectures fondées sur les bases de données traditionnelles laissent peu à peu place à des systèmes basés sur les technologies issues des plateformes Web à large échelle telles que le canevas Hadoop [Whi09]. Bien que l’adoption de ces solutions se démocratise, il n’existe pas pour le moment d’alternative permettant de tirer parti des différents services de ces plateformes sans nécessiter de développement supplémentaire pour s’adapter à l’environnement cible. De plus, hormis les initiatives telles que LINDO, WebLab ou ImageTerrier, peu de projets dépassent le stade du prototype de recherche. Pour une large part, ces projets servent de plateformes d’évaluation des algorithmes d’extraction de connaissances et d’indexation, ou sont orientés vers la recherche de contenus. Le tableau 2.1 résume les propriétés disponibles pour chacun des systèmes listés dans les sections précédentes, par rapport aux défis décrits dans la section 2.1.2 de ce chapitre. L’examen des différentes plateformes de stockage et de traitement de données multimédias montre qu’il n’existe pas, actuellement, de projet remplissant le cahier des charges du projet MCube. Il existe des plateformes de traitements de données génériques, qui ne sont pas spécialisées dans les données multimédia, comme le canevas Hadoop, et qui peuvent donc servir de base à la plateforme MCube.
Table 2.1 – Propriétés des systèmes de traitements de données multimédias
Colonnes : Système ; Stockage (Données / Descripteurs / Informations) ; Traitements (Descripteurs / Indexation / Informations) ; Mutualisation ; Ouverture ; Élasticité.
Mpeg-7 DB : oui / oui / non ; oui / oui / non ; oui ; non ; non
AIR : distribué / distribué / non ; oui / distribué / non ; oui ; non ; limitée
Lindo : distribué / distribué / non ; distribué / distribué / oui ; oui ; limitée ; limitée
Weblab : distribué / distribué / non ; non / non / non ; oui ; limitée ; limitée
SAPIR : distribué / distribué / non ; oui / oui / non ; oui ; limité ; oui
IrisNet : distribué / distribué / distribués ; oui / distribué / oui ; oui ; oui ; oui
[CSC09] : distribué / distribué / non ; oui / distribué / non ; oui ; non ; oui
RanKloud : distribué / distribué / non ; distribué / distribué / non ; oui ; non ; oui
Haystack : distribué / distribué / distribués ; distribué / distribué / oui ; non ; non ; oui
ImageTerrier : distribué / distribué / non ; distribué / distribué / non ; non ; oui ; oui
2.4 Architecture de la plateforme MCube
2.4.1 Architecture matérielle
Passerelles MCube. Les passerelles MCube sont des appareils mis au point par la société Webdyn 10. Il s’agit d’un système embarqué reposant sur le système Linux et un processeur ARM. La présence d’un système d’exploitation complet permet de connecter divers périphériques d’acquisition de données multimédias comme des appareils photo ou des micros. Le système de la passerelle exécute un planificateur de tâches pouvant être configuré à distance en recevant des commandes depuis les serveurs MCube. Le rôle des passerelles est d’assurer l’exécution à intervalles réguliers (configurés par l’utilisateur) d’un programme appelé "librairie passerelle", actuellement déployé sur le système. Les passerelles sont équipées d’un port Ethernet et d’un modem 3G permettant de se connecter à internet pour dialoguer avec le serveur MCube.
Serveurs MCube. Les serveurs MCube sont trois machines Dell équipées chacune de deux processeurs Intel Xeon embarquant 8 cœurs chacun. L’espace de stockage important de ces machines (8 téraoctets par serveur) permet d’anticiper la quantité de données à stocker dans le cadre du projet MCube, dont les collectes annuelles peuvent représenter dans certains cas plusieurs centaines de gigaoctets par mois. 10. http://www.webdyn.com, consulté le 5 octobre 2013
2.4.2 Architecture logicielle
L’architecture logicielle du projet MCube a été conçue de manière à permettre aux utilisateurs de développer eux-mêmes les applications déployées sur les passerelles et la plateforme. Pour cela, une interface de programmation a été mise au point par le constructeur de la passerelle, la société WebDyn 11, permettant de séparer les tâches d’acquisition des données de leur planification. Les composants développés sur cette interface sont appelés "librairies passerelles". Les librairies de passerelles sont des librairies dynamiques implantant les fonctions de l’interface de programmation MCube, ce qui permet de les déployer à la demande depuis la plateforme Web du projet. Ces librairies sont développées pour remplir deux fonctions principales :
– L’acquisition des données : c’est la fonction essentielle de ces librairies ; elle consiste à piloter les différents périphériques de capture de données multimédia connectés à la passerelle pour acquérir des données brutes : photos, sons, vidéos.
– Filtrage des données : Dans certains cas, les données capturées peuvent être volumineuses.
Par exemple, un cas d’utilisation possible du service MCube consiste à détecter la présence d’insectes nuisibles à partir d’enregistrements sonores. Dans ce cas précis les données enregistrées ne sont envoyées au serveur que lorsqu’un signal suspect est détecté dans l’enregistrement. Un traitement plus précis exécuté à la demande sur le serveur permet ensuite de confirmer ou d’infirmer la détection. La communication entre les passerelles et les serveurs repose sur un protocole HTTP/ REST qui permet aux passerelles : – d’envoyer des données collectées à la plateforme MCube via le serveur Web, – de recevoir une nouvelle librairie, – de recevoir des commandes à transmettre à la librairie déployée sur la passerelle. 2.4.3 Architecture de la plateforme MCube La plateforme MCube a un rôle de gestion des équipements et d’agrégation d’information. Elle assure la communication avec les passerelles déployées, en permettant aux utilisateurs de les configurer à l’aide d’une interface web. La plateforme fournit aux utilisa- 11. http://www.webdyn.com/, consulté le 5 octobre 2013 242.4. ARCHITECTURE DE LA PLATEFORME MCUBE teurs l’accès aux librairies et programmes d’analyse de données qu’ils ont développés. Les données transférées par les passerelles sont stockées sur les serveurs de la plateforme, et une interface web permet aux utilisateurs de lancer ou de programmer les analyses qu’ils souhaitent exécuter sur les données téléchargées depuis les passerelles. L’architecture finale de la plateforme MCube repose sur trois composants logiciels : Demandes de traitements mcube.isep.fr Cloudizer Utilisateurs Passerelles Envoi de données Consultation des données Configuration HDFS HDFS HDFS Noeuds de stockage / traitement Stockage / consultation données passerelles et traitements Requêtes de traitements Service Web MCube Figure 2.3 – Architecture de la plateforme MCube L’interface Web C’est l’interface utilisateur du système. Elle permet aux agriculteurs de gérer leurs passerelles et d’accéder à leurs données afin de les récupérer dans le format qu’ils souhaitent ou de lancer / planifier les traitements à effectuer. De plus l’interface permet aux utilisateurs de charger de nouveaux programmes de traitements de données. Le canevas Hadoop Le système Hadoop permet de stocker les données de manière répartie via le système de fichiers distribué HDFS [SKRC10]. Il permet d’avoir un système de fichiers redondant pouvant accueillir un grand nombre de machines. De plus, les données ainsi stockées peuvent être accédées à l’aide de langages de plus haut niveau tel que le langage de requêtage HQL (Hive Query Language) [TSJ+09] 252.4. ARCHITECTURE DE LA PLATEFORME MCUBE Le canevas Cloudizer Le framework Cloudizer est un canevas logiciel développé durant cette thèse qui permet de distribuer des requêtes de services web à un ensemble de machines. Ce système est utilisé pour traiter les fichiers envoyés par les passerelles au fur et à mesure de leur arrivée dans le système. La figure 2.3 montre comment s’articulent les différents composants de la plateforme. Les données de configuration, les informations utilisateurs, et les coordonnées des passerelles sont stockées dans une base de données SQL traditionnelle (MySql 12). Les données remontées par les passerelles, ainsi que les librairies et les programmes d’analyses sont stockés dans le système HDFS [SKRC10]. Chaque utilisateur possède un répertoire dédié sur le système de fichiers distribué. 
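À titre d’illustration, les résultats de traitements stockés dans HDFS peuvent alors être interrogés depuis un programme Java via le pilote JDBC de Hive ; dans l’esquisse suivante, l’adresse du serveur, la table et les colonnes sont purement hypothétiques, et le pilote hive-jdbc est supposé présent sur le chemin de classes.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Esquisse : interrogation en HQL des résultats de traitements via HiveServer2.
public class RequeteHql {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://serveur-mcube:10000/default");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT date_capture, taille_estimee FROM resultats_tomates WHERE taille_estimee > 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " : " + rs.getString(2) + " cm");
            }
        }
    }
}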
Ce répertoire contient les fichiers reçus des passerelles et classés par nom de la librairie d’origine des données. Les résultats des traitements effectués sur ces données sont stockés dans un répertoire dédié et classés par noms de traitements et date d’exécution. Ceci permet aux utilisateurs de pouvoir retrouver leurs données de manière intuitive. Les utilisateurs peuvent choisir deux modes de traitement des données : un mode évé- nementiel et un mode planifié. Lorsque le mode événementiel est sélectionné, l’utilisateur définit le type d’événement déclenchant le traitement voulu : ce peut par exemple être l’arrivée d’un nouveau fichier ou le dépassement d’un certain volume de stockage. Le traitement est alors exécuté sur les données désignées par l’utilisateur. Le mode planifié correspond à une exécution du traitement régulière et à heure fixe. Le déclenchement des traitements est géré par la plateforme Cloudizer décrite dans le chapitre suivant. 2.4.4 Description des algorithmes de traitement de données multimédia Les utilisateurs de la plateforme peuvent charger de nouveaux algorithmes de traitement de données multimédia. Pour que le système puisse utiliser cet algorithme, il est nécessaire d’en fournir une description afin de le stocker dans la base de données. Le terme "algorithme" désigne ici par abus de langage l’implantation concrète d’un algorithme donné, compilée sous la forme d’un programme exécutable. Un modèle type de ces algorithmes a donc été conçu, il est représenté en figure 2.4. Ce schéma est inspiré des travaux sur 12. www.mysql.com, consulté le 5 octobre 2013 262.4. ARCHITECTURE DE LA PLATEFORME MCUBE l’adaptation de composants de [AGP+08, BCMS11] et [GBB+08]. Figure 2.4 – Modèle de description d’algorithme Un programme et l’ensemble de ses fichiers sont décrits par la classe "Algorithm". Cette classe contient une liste qui permet de spécifier les fichiers exécutables et de configurations nécessaires à l’exécution du programme. Un programme est généralement exécuté avec des paramètres fournis sur l’entrée standard, la liste des paramètres (la classe "Parameter") décrit les paramètres à fournir à l’exécutable ainsi que leurs positions et préfixes respectifs si besoin. La classe "OutputFormat" permet de décrire le type de donnée en sortie de l’exécution du programme. Par exemple, la sélection du format ”TextOuput” signifie que la sortie du programme sur l’entrée standard correspond à des données textes tabulaires, séparées par un caractère spécifique. D’autres formats sont disponibles et peuvent être ajoutés à la librairie. Exemple de modèle d’algorithme : Calcul de tailles de tomates Le listing 2.1 présente un exemple de modèle d’algorithme pour un programme développé dans le cadre du projet MCube. Dans ce cas d’utilisation, un dispositif expérimental a été mis au point par les membres de l’équipe de traitement du signal de l’ISEP, afin de prendre en photo un 272.4. ARCHITECTURE DE LA PLATEFORME MCUBE plant de tomates selon deux angles différents. Grâce à ce décalage et un calibrage précis du dispositif, il est possible de donner une estimation de la taille des tomates détectées dans les photos, si leurs positions sont connues. Cet algorithme prend en entrée deux listes de points correspondants respectivement aux contours droit et gauche d’une tomate du plant photographié, et fournit une estimation de la taille du fruit. Listing 2.1 – Exemple d’instance du modèle d’algorithme tomate_measurement 1. 
0 (champs de l’instance : auteur : Ujjwal Verma ; description : « Détection de visages, méthode de Viola-Jones, librairie OpenCV » ; format de sortie : txt ; exécutable : tomate_measurement.jar ; paramètres : leftImageContours, rightImageContours). Cet algorithme est implémenté en Java sous la forme d’une archive JAR. Cependant le système le voit comme une boîte noire et ne connaît donc que les paramètres à lui fournir en entrée et le type de données en sortie. Cette information sur les paramètres à fournir permet de commander l’exécution du programme à la demande, via un service Web utilisant le canevas Cloudizer décrit au chapitre 3.
2.5 Développement du projet
La conception détaillée de la plateforme Web MCube et de son modèle de données a été faite par l’auteur de ce document. Le développement de l’interface Web permettant la gestion des passerelles et le stockage des librairies a été réalisé par trois stagiaires de niveau Master 1 et 2. Le mode de développement s’est fondé sur des itérations courtes (une à deux semaines), en donnant des objectifs de fonctionnalités précises à développer durant ce laps de temps, via une feuille de route établie à l’avance. Chaque nouvelle fonctionnalité était testée manuellement puis déployée sur le serveur de production du projet 13. Cette approche de développement rapide d’applications a été facilitée par l’utilisation du canevas Grails comme base du projet. L’interface web de la plateforme représente un volume de 4000 lignes de code en langage Groovy. Mon rôle au niveau du développement de ce système s’est donc limité à la conception de la base de données, puis à la supervision et aux tests fonctionnels des développements réalisés, ainsi qu’à la correction des quelques bogues trouvés lors des phases de tests. Le code doit encore faire l’objet d’une intégration avec le projet Cloudizer pour assurer l’ensemble des fonctionnalités attendues. Il sera à terme disponible sous forme de programme libre, sous réserve d’acceptation par les parties prenantes du projet. 13. http://mcube.isep.fr, consulté le 5 octobre 2013
2.6 Discussion
Le projet MCube est une architecture logicielle de collecte et de traitement de données multimédias destinée à fournir des services avancés de détection et de surveillance aux agriculteurs. La mise au point de ce système nécessite de faire interagir différents composants logiciels hétérogènes. La problématique de la distribution et de l’exécution en parallèle de traitements de nature spécifique a des conséquences sur le choix de la plateforme utilisée pour cette répartition et sur son efficacité. L’architecture finale s’appuie donc sur trois services : un serveur d’applications destiné à la communication avec les utilisateurs du système et les services extérieurs, un système de fichiers distribué permettant de stocker différents fichiers tout en offrant différents modes d’accès aux données stockées en son sein, et un système de gestion de services et de distribution de requêtes qui assurera la répartition de la charge lors du traitement des données. Ce service, appelé Cloudizer et développé au cours de cette thèse, permet à l’utilisateur de choisir différentes stratégies de répartition de charge et de comparer leurs performances respectives pour un même service. La mise au point de stratégies de répartition adéquates est donc nécessaire. Le chapitre suivant décrit l’architecture technique du projet Cloudizer et son intégration au projet MCube.
30Chapitre 3 Le framework Cloudizer : Distribution de services REST 3.1 Introduction Cloudizer est une plateforme logicielle ouverte permettant de déployer, répartir et superviser des services web REST. Cette plate-forme est destinée à faciliter le déploiement et la répartition d’applications existantes sur les plateformes d’informatique dans les nuages de type "infrastructure à la demande". Cloudizer a été développé avec deux objectifs : premièrement faciliter le développement et le déploiement de stratégies de répartition de charge adaptées aux applications, en fournissant une interface de programmation suffisamment expressive et claire pour le programmeur. Deuxièmement la plateforme doit assurer la disponibilité de plusieurs services différents en permettant de choisir la stratégie de répartition de charge la plus appropriée pour chacun de ces services. Pour assurer cette répartition, le problème de conservation de l’état du service se pose. Lorsqu’un service web maintient des informations sur chacun des clients qui lui sont connectés par exemple, il est nécessaire de toujours renvoyer les requêtes de ce client vers la machine qui possède les informations le concernant, ce qui réduit considérablement les possibilités de répartition de la charge. Ce constat a été établi par Roy T. Fielding dans sa thèse publiée en 2000 [Fie00], où il propose d’organiser les services client-serveur autour du principe "Representational State Transfer" (REST). Selon ce principe, les séquences de requêtes et de réponses des systèmes clients serveurs sont considérées comme une suite 313.2. ARCHITECTURE DU CANEVAS CLOUDIZER d’échanges d’objets appelés "ressources". A chaque échange, une représentation particulière de la ressource considérée est transmise. Le client transmet au serveur les informations né- cessaires pour retrouver la ressource associée à chaque requête, de sorte que le serveur n’ait pas à maintenir d’information sur la séquence d’échange en cours. Le serveur ne maintient donc pas d’état entre les requêtes, ce qui simplifie sa distribution car il n’y a pas d’état à synchroniser entre les machines. Le système Cloudizer permet de gérer et répartir l’exécution de services web sur un ensemble de machines de deux manières : soit en exécutant les requêtes au fil de l’eau, soit en les exécutant en parallèle, par lots de requêtes. Ce comportement est approprié dans le contexte de la plateforme MCube, qui a besoin de ces deux modes de fonctionnement pour gérer les traitements des données multimédia. La première partie de ce chapitre décrit donc l’architecture et le fonctionnement gé- néral de la plateforme Cloudizer 3.2. La seconde partie de ce chapitre est consacrée aux mécanismes permettant d’intégrer Cloudizer dans la plateforme MCube 3.3. La conclusion de ce chapitre (section 3.5) discutera des limites de cette approche et les développements nécessaires pour l’améliorer. 3.2 Architecture du canevas Cloudizer Cloudizer tire partie de l’architecture multi-tiers de la plupart des applications Web pour permettre la répartition de charge. Comme le montre la figure 3.1, le code du service à distribuer est répliqué sur toutes les machines du système. La gestion des données est déléguée à une base de données répartie, et la gestion des fichiers d’application est déléguée à un système de fichiers distribué. Ce découplage permet à la plateforme de n’assurer que la répartition des traitements. Les composants de cette plateforme communiquent via des services REST. 
Ces composants sont au nombre de trois : le répartiteur, le contrôleur et la librairie déployée sur les machines du système, appelées "nœuds". La répartition est assurée par le répartiteur, qui filtre les requêtes. Lorsqu’une requête d’exécution destinée à un service déployé arrive, elle est envoyée vers les nœuds Cloudizer selon la stratégie de répartition sélectionnée par l’utilisateur et appliquée par le répartiteur. Dans tous les autres cas, la requête est transmise à l’application déployée sur le serveur maître. Le but de ce système est de parvenir à répartir le service de manière transparente. Pour cela, le code de l’application est répliqué sur tous les nœuds, mais les fichiers ou les bases de données sont répartis via l’utilisation d’un système de fichiers distribué de type “HDFS” [SKRC10] ou d’une base de données utilisant la fragmentation horizontale.

Figure 3.1 – Architecture de la plateforme Cloudizer : le répartiteur dirige les appels vers le serveur maître (contrôleur Cloudizer, application, serveur HTTP) ou vers les nœuds Cloudizer (librairie Cloudizer, service web répliqué, serveur HTTP), chacun s’appuyant sur le système de fichiers distribué et la base de données fragmentée.

3.2.1 Les nœuds

Les nœuds de la plateforme Cloudizer sont des machines sur lesquelles la librairie Cloudizer est déployée. Cette librairie fournit les routines nécessaires à l’enregistrement et au déploiement de nouveaux services sur la machine où elle s’exécute. Le seul paramètre de configuration nécessaire au fonctionnement du programme est l’adresse du Contrôleur. Quand un nœud (nœud Cloudizer) s’enregistre dans le système, il envoie à une fréquence pré-configurée des "battements de cœur" au contrôleur afin que celui-ci puisse connaître les nœuds actifs. Les nœuds enregistrent également la liste des services web actifs sur la machine. Chaque nœud est donc responsable de la supervision des services de la machine sur laquelle il est déployé et reporte son statut au contrôleur. Ce statut contient les informations décrivant la machine telles que le nombre de cœurs, la capacité de stockage, la taille de la mémoire et le taux d’utilisation du processeur. Les nœuds dialoguent avec le contrôleur via une interface de programmation REST. Une seconde interface de programmation est utilisée entre les nœuds pour procéder à des appels de méthodes à distance. Les souches d’appel sont générées dynamiquement par le nœud Cloudizer à partir de l’inspection, par réflexion, de l’interface des méthodes cibles. Les paramètres effectifs sont automatiquement encodés au format URL (Uniform Resource Locator). Les données de résultat de ces invocations sont, quant à elles, converties au format XML et décodées lors de la réception. Les machines peuvent donc servir de mandataires pour appeler des services distribués.

3.2.2 Le répartiteur

Le rôle de ce composant est d’appliquer la politique de répartition de charge spécifiée par l’utilisateur pour transférer les requêtes aux services déployés dans le système. Différentes politiques de répartition de charge peuvent être appliquées en fonction de l’application. Une interface Java permet l’implémentation de nouvelles stratégies de répartition.
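À titre d’illustration, la hiérarchie décrite par la figure 3.2 peut s’esquisser ainsi (esquisse minimale : les types Node et Request ainsi que les signatures exactes sont des hypothèses simplifiées et ne reprennent pas le code réel du répartiteur) :

import java.util.List;

// Types simplifiés, supposés pour l'exemple.
class Node { final String adresse; Node(String adresse) { this.adresse = adresse; } }
class Request { final String parametres; Request(String parametres) { this.parametres = parametres; } }

// Classe abstraite décrite par la figure 3.2 : loadBalance choisit une machine
// cible à partir des nœuds disponibles et de la requête en cours.
abstract class Policy {
    abstract Node loadBalance(List<Node> noeudsDisponibles, Request requete);
}

// Exemple de stratégie utilisateur : un simple tourniquet (round-robin).
class RoundRobinPolicy extends Policy {
    private int indice = 0;

    @Override
    Node loadBalance(List<Node> noeudsDisponibles, Request requete) {
        // Les nœuds sont servis à tour de rôle, indépendamment du contenu de la requête.
        Node cible = noeudsDisponibles.get(indice % noeudsDisponibles.size());
        indice++;
        return cible;
    }
}

Dans une implémentation réelle, le compteur devrait naturellement être protégé contre les accès concurrents.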
L’utilisateur précise ensuite le nom de la classe implémentant la politique de répartition voulue dans la configuration du répartiteur pour l’appliquer au service qu’il souhaite répartir. La figure 3.2 montre le modèle utilisé pour implanter ces stratégies. La classe abstraite Policy fournit la méthode virtuelle loadBalance qui renvoie une machine cible après avoir pris en paramètre la liste des machines disponibles et les paramètres de la requête en cours. Lorsqu’une requête concernant un service déployé sur la plateforme arrive au niveau du répartiteur, la servlet CloudFrontend est appelée et exécute la méthode loadBalance définie pour le service cible.

Figure 3.2 – Modèle des stratégies de répartition de charge

Les stratégies sont chargées lors de l’initialisation du répartiteur, en fonction de la configuration définie par l’utilisateur. Lorsque le répartiteur reçoit une nouvelle requête, il vérifie que la requête vise un service pour lequel des machines sont disponibles en interrogeant le contrôleur. Si aucune machine n’est disponible, une erreur est renvoyée. Si des machines sont disponibles pour le service demandé, le répartiteur peut procéder au transfert de la requête de deux manières différentes : soit à la volée, c’est-à-dire que chaque requête reçue est immédiatement transférée à la machine implantant le service, soit en parallèle : la requête et ses paramètres sont découpés de manière à former un ensemble de plusieurs requêtes distinctes, qui sont envoyées en même temps et de manière asynchrone à un ensemble de machines pour être exécutées en parallèle.

Exécution requête par requête
Pour chaque requête reçue, le répartiteur applique la stratégie de répartition sélectionnée par l’utilisateur pour choisir une machine cible, puis transfère directement la requête à la machine sélectionnée.

Exécution des requêtes en parallèle
Un service peut être appelé depuis plusieurs machines ou clients à la fois. En outre, certains services peuvent être appelés en parallèle sur plusieurs machines en cas de besoin, ce qui permet de réduire le temps d’exécution total. Le résultat sera donc l’agrégation de plusieurs résultats produits sur différentes machines. Pour cela, deux modes d’appel sont possibles : un mode d’appel parallèle simple, où chaque nœud disponible reçoit une part égale de requêtes à exécuter pour le service appelé, et un mode avec équilibrage de charge, où les requêtes d’exécution à agréger sont réparties en suivant une politique de distribution choisie par l’utilisateur.

3.2.3 Déploiement de services

Cloudizer permet de superviser et de déployer des services web. Lorsqu’un nœud Cloudizer est déployé, le service est configuré via un fichier de propriétés décrivant les différentes informations nécessaires à l’identification du service. Un service est identifié par un nom et un numéro de port. Lors du lancement du nœud Cloudizer, la librairie enregistre la machine et le service correspondant auprès du Contrôleur, qui met à jour le statut du service dans sa base de données et fait apparaître cette instance comme disponible.

3.3 Intégration au projet MCube

La plupart des algorithmes de traitement d’images ou de sons destinés à la plateforme MCube sont écrits sans considération pour le traitement de gros volumes de données.
Or, dans le contexte de ce projet, les programmes de traitement des fichiers multimédias doivent s’exécuter sur un nombre important de fichiers à la fois, qui plus est sur une architecture de stockage distribuée. Afin d’adapter de manière transparente les algorithmes utilisés à ce contexte, le projet Cloudizer a été utilisé pour déployer et exécuter les programmes à la demande, en fonction des besoins. Ainsi, un binaire exécutable peut être converti en service web à l’aide d’une description XML du programme suivant le schéma représenté en figure 2.4. Il est plus efficace d’envoyer le programme exécutable vers les données plutôt que de lui faire lire les données depuis le réseau ; ce principe est au centre des plateformes de traitement de données massives comme le système Hadoop [Whi09]. Un fichier de description XML comme celui figurant dans le listing 2.1 doit être créé pour représenter un programme et le déployer dans le système. La librairie Cloudizer contient un utilitaire permettant de créer une arborescence où seront stockés les binaires du programme et tous les dossiers et dépendances nécessaires. Deux autres dossiers sont créés par défaut : un dossier INPUT et un dossier OUTPUT. Le dossier INPUT est destiné à stocker les fichiers en entrée du programme, et le dossier OUTPUT à stocker les fichiers en sortie, à la suite d’une exécution. Une fois l’arborescence créée, il est possible d’y exécuter un serveur HTTP, qui sera identifié par le nom du programme précisé dans le fichier XML. Ce serveur répond aux requêtes HTTP GET, POST, PUT et DELETE avec la sémantique détaillée en tableau 3.1.

Table 3.1 – Protocole HTTP : exécution de services de traitement de données
– GET (nom du service, paramètres tels que décrits dans le fichier description.xml) : renvoie le contenu d’un fichier de résultat correspondant aux paramètres donnés ; renvoie une erreur 404 si le fichier n’existe pas.
– POST (nom du service, paramètres) : exécute le programme représenté par le service ; renvoie une erreur 500 en cas de problème à l’exécution.
– PUT (nom du service, fichier à transférer) : copie le fichier donné en paramètre dans le répertoire INPUT du service.
– DELETE (nom du service, nom de fichier) : supprime le fichier indiqué en paramètre des répertoires du service.

Sécurité des exécutions
Les exécutables convertis de cette manière en services web peuvent être exposés à des attaques par des utilisateurs malveillants. Limiter le risque d’attaques est primordial, dans la mesure où les programmes déployés sur la plateforme sont des boîtes noires dont le fonctionnement est inconnu. Le serveur utilise donc l’utilitaire "Jail" [KW00], qui permet d’exécuter des binaires dans un environnement limité au strict minimum, ce qui réduit les risques d’opérations illégales ou accidentelles sur des fichiers de configuration ou système. De plus, le fichier de description précise rigoureusement les types de données attendus pour les paramètres du programme, ce qui permet au serveur de refuser les requêtes qui ne respectent pas la signature attendue de l’appel. La dernière contre-mesure est de limiter autant que possible l’exposition de ces services en leur interdisant l’accès au réseau, qui n’est donc obtenu que par le serveur généré par Cloudizer.
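À titre d’illustration, l’esquisse ci-dessous montre comment un serveur HTTP embarqué peut aiguiller les quatre méthodes selon la sémantique du tableau 3.1 (esquisse fondée sur le serveur com.sun.net.httpserver du JDK ; le serveur réellement généré par Cloudizer, la lecture des paramètres et les actions effectives ne sont pas reproduits ici, et les codes et corps de réponse sont des exemples fictifs) :

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

class ServeurDeServiceEsquisse {
    public static void main(String[] args) throws IOException {
        HttpServer serveur = HttpServer.create(new InetSocketAddress(8080), 0);
        // Le contexte correspond au nom du programme décrit dans le fichier XML.
        serveur.createContext("/nom-du-service", (HttpExchange echange) -> {
            int code;
            String corps;
            switch (echange.getRequestMethod()) {
                case "GET":    // lire un fichier de résultat ; 404 s'il n'existe pas
                    code = 404; corps = "resultat introuvable"; break;
                case "POST":   // lancer l'exécution du programme ; 500 en cas d'échec
                    code = 200; corps = "execution lancee"; break;
                case "PUT":    // copier le fichier transmis dans le répertoire INPUT
                    code = 200; corps = "fichier copie dans INPUT"; break;
                case "DELETE": // supprimer le fichier indiqué
                    code = 200; corps = "fichier supprime"; break;
                default:
                    code = 405; corps = "methode non supportee"; break;
            }
            byte[] octets = corps.getBytes(StandardCharsets.UTF_8);
            echange.sendResponseHeaders(code, octets.length);
            try (OutputStream os = echange.getResponseBody()) { os.write(octets); }
        });
        serveur.start();
    }
}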
Exécution asynchrone
L’exécution du service en mode asynchrone permet d’alléger la charge de la machine appelant le service en raccourcissant le temps de la connexion entre le client et le serveur, via l’utilisation d’une fonction de rappel distribuée. Ceci est particulièrement utile lorsque le service demandé a une latence élevée et met donc un temps significatif à s’exécuter : maintenir une connexion ouverte pendant plusieurs secondes sans que des données ne circulent n’est pas utile. La contrepartie est que la machine appelante doit rester à l’écoute du serveur, qui lui enverra le résultat d’une requête GET sur les mêmes paramètres une fois le service achevé. Ce fonctionnement est similaire à celui d’une fonction de rappel distribuée. Ce rôle de client appelant peut être assuré par le répartiteur ou le contrôleur en cas de besoin. Ce mécanisme est particulièrement utile en conjonction avec l’invocation parallèle décrite en section 3.2.2, car cela permet d’alléger la charge de l’entité appelante. La gestion des erreurs et des résultats retournés fonctionne de la même manière que pour un appel synchrone.

3.4 Considérations sur l’implémentation

Au cours de cette thèse, l’effort principal de développement s’est concentré sur la réalisation du répartiteur du système Cloudizer. Les premières versions de ce système de répartition étaient implantées directement dans l’interface web du système et souffraient des problèmes de performance du langage Groovy. Le principal facteur d’amélioration de la performance de ce système a donc été de déplacer les classes assurant la répartition des requêtes dans une librairie séparée, permettant de découpler l’interface Web du répartiteur. Le répartiteur en lui-même est un projet de taille modeste, représentant environ 400 lignes de code Java, et 964 lignes de code Groovy pour l’interface Web. Les politiques de répartition implémentées sur le répartiteur représentent quant à elles environ 2000 lignes de code. Cet aspect quantitatif ne permet pas de mesurer avec certitude l’effort total réalisé, car l’ensemble de cette base de code a subi de multiples changements au cours du temps, principalement concernant l’optimisation ou la correction de bogues détectés lors des simulations. Parmi les optimisations réalisées, le passage en pur Java a été le plus efficace en termes de gains de performance brute. De même, le passage à un modèle de traitement des requêtes événementiel, plutôt que fondé sur des fils d’exécution, a permis un fort gain dans la capacité de traitement du répartiteur. Le générateur de services Web représente environ 1800 lignes de code Java entre l’implémentation du serveur, l’intégration avec Cloudizer et un module d’intégration avec le système Hadoop permettant d’utiliser ce système pour répartir les traitements. Ces développements spécifiques et de petite taille n’ont pas fait l’objet d’une méthode de développement particulière. Un soin particulier a été apporté à la modularité de ces projets, en permettant une extension facile de leurs fonctionnalités respectives : ajout de nouvelles politiques de distribution pour le répartiteur ou de nouveaux environnements d’exécution pour le générateur de services.

3.5 Discussion

L’architecture de Cloudizer permet donc de déployer et d’exécuter un grand nombre d’applications différentes en déléguant la gestion des données à d’autres systèmes. Les services pouvant tirer parti de Cloudizer vont de l’application Web standard à la Web Archive.
À cela s’ajoute la possibilité d’utiliser un binaire comme service web ce qui peut être particulièrement utile dans le cadre d’applications héritées. Cette capacité a été développée pour permettre une intégration simple de ce projet dans la plateforme MCube afin d’assurer la répartition des requêtes de services de traitement de données. Ce service est suffisamment générique pour assurer à la fois la distribution des services de traitements de données de la plateforme MCube mais aussi de la partie Web fournissant l’interface utilisateur et la communication avec les passerelles. Le fait de pouvoir sélectionner la stratégie de répartition la plus appropriée pour un service donné dans la plateforme Cloudizer permet d’évaluer et de comparer la performance des différentes stratégies. Cependant le comportement des stratégies mises au point ainsi que leur tests en environnement réels sont longs et fastidieux. Les nouveaux environnements de déploiements tels que les plateformes de Cloud Computing permettent d’utiliser de grands nombres de machines, il est donc nécessaire d’utiliser des outils de simulations appropriés afin de faciliter le développement de ces stratégies. 393.5. DISCUSSION 40Chapitre 4 Simizer : un simulateur d’application en environnement cloud 4.1 Introduction La conception du simulateur Simizer a commencé comme un simple programme de test pour les stratégies de répartition de charge en amont de leur déploiement dans le projet Cloudizer. Ses fonctionnalités ont été étendues par la suite, dans le but de fournir des résultats de plus en plus réalistes et de permettre la réalisation de simulations plus diverses, comme des simulations de protocoles de synchronisation de processus dans les systèmes distribués. Le développement des grilles de calcul, et plus récemment du cloud computing, a donné naissance à de nombreux simulateurs de plateformes de traitement à large échelle. Les simulateurs de grilles, conçus pour simuler de grands nombres de machines partagées par plusieurs utilisateurs, ont été naturellement adaptés pour prendre en compte la virtualisation et simuler les plateformes de calcul à la demande : les "clouds". Par exemple le Grid Sim toolkit [MB02] a servi de base au développement du simulateur CloudSim [RNCB11], et l’utilitaire SimGrid [CLQ08] intègre aujourd’hui de nombreuses options permettant de simuler le fonctionnement des centres de virtualisation. Simizer a donc été conçu de manière à combler un vide dans les solutions de simulation existantes se rapportant au Cloud Computing. 414.1. INTRODUCTION 4.1.1 Simulateurs de Cloud existants Les principaux représentants des simulateurs de clouds sont les projets SimGrid[CLQ08] et CloudSim[RNCB11]. Ces projets permettent de simuler entièrement une infrastructure de type cloud en écrivant un programme utilisant les objets mis à disposition par l’interface de programmation fournie par la librairie. Ces deux utilitaires sont principalement destinés à la recherche en matière de conception et d’évaluation de l’architecture sous-jacente des plateformes de services informatiques à la demande. I ls permettent d’obtenir un modèle du comportement du centre de données dans son ensemble. Par exemple, des travaux menés avec CloudSim par Beloglazov et Buyya [BB12] ont permis d’analyser l’influence de diffé- rentes stratégies d’allocation de ressources sur la qualité de service rendue aux utilisateurs et la consommation électrique d’un centre de données. 
L’utilitaire SimGrid [BLM+12] a été conçu pour simuler de manière transparente le comportement d’applications de calculs distribués massifs, avant leur déploiement effectif sur une grille. Cependant, l’application de ce logiciel au domaine du Cloud Computing se limite à la simulation du fonctionnement interne des infrastructures. Le simulateur GreenCloud [KBAK10] est destiné à modéliser la consommation électrique des centres de données. Il se fonde sur le simulateur de protocoles réseaux NS-2[MFF], ce qui rend son modèle de simulation de réseau extrêmement fiable. Le projet iCanCloud[NVPC+12] utilise un modèle mathématique pour simuler les coûts engendrés par l’utilisation d’une plateforme de Cloud Computing comme le service EC2 d’Amazon 1. Cependant, ces simulateurs sont destinés aux fournisseurs de services Cloud, et permettent de modéliser facilement des prototypes de plateformes ou les coûts engendrés par les infrastructures en fonction d’un modèle d’utilisation du service. Il n’existe pas à ce jour de simulateurs d’infrastructure cloud permettant de simuler le comportement d’une application déployée dans ce type d’environnement [SL13]. Bien que des efforts aient été récemment faits dans cette direction, comme le simulateur CDOSim[FFH12] ce logiciel n’est pas publiquement disponible. Il n’existe donc pas à ce jour de simulateur permettant d’établir finement l’influence de protocoles de niveau applicatif utilisés par un système. Ce besoin existe car il a été montré à plusieurs reprises que les plateformes d’infrastructures à 1. http://aws.amazon.com, consulté le 5 octobre 2013 424.2. ARCHITECTURE DE SIMIZER la demande souffrent par leur nature partagée, d’une dépendance entre le niveau d’utilisation global de la plateforme et la performance des applications. Pouvoir évaluer à l’avance le comportement d’une application dans ce type de contexte d’exécution présente donc un avantage certain. La conception de Simizer vise donc à remplir ce besoin en permettant l’évaluation de la performance théorique d’applications de type services web déployées sur des plateformes de cloud computing. L’architecture du simulateur Simizer et le fonctionnement de ses différents composants sont décrits en section 4.2, puis l’utilisation pratique du simulateur est détaillée en section 4.3. La section 4.4 fournit à travers deux expérimentations une validation du fonctionnement du simulateur par rapport à la librairie CloudSim, et la section 4.5 discutera des futurs développements de l’utilitaire. 4.2 Architecture de Simizer Simizer est un simulateur à événements discrets écrit en JAVA. Il est fondé sur une architecture à trois couches : une couche de bas niveau fournissant les classes de gestion des événements, une couche Architecture utilisant les classes de gestion d’événements pour simuler le comportement des machines virtuelles et des réseaux, et une couche de niveau Application, qui permet d’implémenter les protocoles à tester dans le simulateur. Couche Evénements - Traitement des événements Couche Architecture - Simulation réseau - Simulation du matériel (machines virtuelles) Couche Application - Haut niveau de simulation - Simulation des protocoles applicatifs Figure 4.1 – Architecture de Simizer 434.2. ARCHITECTURE DE SIMIZER Table 4.1 – Lois de probabilité disponibles dans Simizer loi paramètres Exponentielle espérance (!) Gaussienne moyenne (µ) Poisson espérance (!) 
Uniforme densité Zipf biais (s) 4.2.1 Couche Evénements La couche de traitement des événements est la couche de plus bas niveau du simulateur. Elle fournit une boucle de traitement d’événements permettant de garantir l’ordre d’exécution des événements et d’optimiser le traitement de ces objets. Elle fournit aussi un ensemble de classes utilitaires permettant la génération de nombre aléatoires selon différentes lois de distribution. Génération de nombres aléatoires La génération des événements produisant l’exécution de la simulation est régie par les lois de distribution aléatoires décrites dans le tableau 4.1. Le générateur de nombre aléatoire utilisé est celui de la Java Virtual Machine. Grâce à cette classe il est possible de générer trois types de valeurs aléatoires : – des entiers distribués uniformément sur les valeurs comprises entre 0 et un nombre donné en paramètre, – des flottants double précision : distribués uniformément sur les valeurs comprises entre 0 et un nombre donné en paramètre, – des flottants double précision distribués selon une loi Gaussienne d’espérance 0 et d’écart type 1. Les distributions utilisées sont donc générées à partir de ces trois types de valeurs. Deux contraintes principales sont posées : les valeurs obtenues pour la simulation doivent être entières, et l’intervalle des valeurs possibles est limité entre 0 et un paramètre spécifié. Les lois Gaussiennes, Uniformes et Zipfiennes sont utilisées principalement pour décrire les distributions de requêtes utilisateurs. La distribution Zipfienne décrit correctement divers phénomènes utilisés dans les simulations informatique [BCC+99, Fei02] tels que la distribution des tailles des fichiers d’un système, ou la distribution des pages les plus fréquemment 444.2. ARCHITECTURE DE SIMIZER accédées sur un site Web. Il est donc intéressant d’étudier le comportement de différentes stratégies de répartition par rapport à des requêtes distribuées selon cette loi. Plus récemment il a été observé que les effets dus à la mise en cache des données les plus fréquemment accédées, en particulier dans les systèmes web, avait des conséquence visibles sur la distribution des requêtes observées, ces dernières ne suivant plus strictement une loi de Zipf [DCGV02, GTC+07]. Il est donc nécessaire de pouvoir tester le comportement des politiques de répartition par rapport à d’autres distributions possibles comme loi Gaussienne ou Uniforme. Les lois de Poisson et Exponentielles sont utilisées pour décrire le comportement des utilisateurs. La loi de Poisson est utilisée pour générer les arrivées de nouveaux clients du système simulé et la loi Exponentielle est utilisée pour déterminer la durée d’utilisation du système par chaque utilisateur, ainsi que l’intervalle de temps séparant les réponses et les requêtes d’un utilisateur [Fei02]. Traitement des événements Le diagramme 4.2 montre les classes composant l’architecture du moteur d’événements utilisé par Simizer. Une simulation est constituée de plusieurs entités appelées "Producteurs d’événements" (EventProducer) qui produisent chacun un ensemble d’événements particuliers (classe ConcreteEvent). Un événement est caractérisé par une donnée, une cible et un horodatage, qui indiquent respectivement à quel instant et à quel objet de la simulation cet événement doit être signalé. Les producteurs d’évé- nements servent à implanter les divers modèles de systèmes nécessaires à la simulation. 
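Les notions introduites ici (événement caractérisé par une donnée, une cible et un horodatage, et production d’événements par des producteurs) peuvent se résumer par l’esquisse suivante, où les événements planifiés sont conservés dans une file ordonnée par horodatage, comme détaillé dans la suite du texte (esquisse volontairement simplifiée : les signatures réelles des classes de Simizer ne sont pas reprises) :

import java.util.Comparator;
import java.util.PriorityQueue;

class MoteurEvenementsEsquisse {
    // Un événement : une donnée, une cible et un horodatage.
    static class Evenement {
        final long horodatage;
        final Object cible;
        final Object donnee;
        Evenement(long horodatage, Object cible, Object donnee) {
            this.horodatage = horodatage;
            this.cible = cible;
            this.donnee = donnee;
        }
    }

    // File d'attente ordonnée : l'événement dont l'horodatage est le plus petit
    // est toujours retiré en premier, ce qui évite les conflits de causalité.
    static final PriorityQueue<Evenement> canal =
            new PriorityQueue<>(Comparator.comparingLong((Evenement e) -> e.horodatage));

    // Un producteur planifie un événement en choisissant son horodatage.
    static void planifier(long horodatage, Object cible, Object donnee) {
        canal.add(new Evenement(horodatage, cible, donnee));
    }

    public static void main(String[] args) {
        planifier(20, "noeud-1", "requete A");
        planifier(5, "noeud-2", "requete B");
        // Les événements sont signalés dans l'ordre croissant des horodatages.
        while (!canal.isEmpty()) {
            Evenement e = canal.poll();
            System.out.println(e.horodatage + " -> " + e.cible + " : " + e.donnee);
        }
    }
}

Le retrait ordonné correspond, dans Simizer, au rôle du Répartiteur d’événements décrit ci-après.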
Chaque Producteur crée des événements et les planifie en choisissant l’horodatage selon les règles du système simulé. C’est la production de nouveaux événements par les Producteurs d’événements qui fait avancer la simulation. Les événements ainsi produits sont stockés dans une file d’attente ordonnée : le Canal (Channel). À intervalles réguliers au cours de la simulation, le Répartiteur d’événements (EventDispatcher) interroge le Canal pour savoir si il y a des événements en attente. Le cas échéant, le Répartiteur retire de la file l’événement ayant la datation la moins avancée, et le signale à la cible correspondante. Les événements sont signalés l’un après l’autre dans l’ordre croissant des horodatages. Ce choix permet d’éviter les conflits dits de causalité qui peuvent se produire lorsqu’un événement 454.2. ARCHITECTURE DE SIMIZER qui n’est pas encore "arrivé" est exécuté avant ou en même temps qu’un événement qui le précède, ce qui peut provoquer un résultat incohérent voire un blocage de la simulation. Les événements sont spécialisés en fonction du type d’action à réaliser. Ces actions sont Figure 4.2 – Entités de gestion des événements implantées dans la couche supérieure du simulateur : la couche d’architecture. 4.2.2 Couche architecture La couche d’architecture de Simizer représente les différentes entités faisant l’objet de la simulation. En l’occurrence, Simizer est conçu pour simuler des systèmes informatiques distribués. Les entités simulées représentées dans le diagramme de classes 4.3 peuvent être définies ainsi : Le Réseau (Network) permet de simuler les délais et erreurs inhérentes au fonctionnement des réseaux modernes comme internet ou les réseaux locaux. Les Nœuds (Nodes) correspondent aux machines composant le système simulé. Ils communiquent via les Réseaux (Network). Il existe trois catégories de Nœuds : les Clients (ClientNode) repré- sentent les machines clientes de services disponibles sur les Serveurs (ServerNode). Lorsque les serveurs et les clients sont sur des réseaux différents, un Répartiteur (LBNode) sert d’intermédiaire entre les clients et les serveurs. Le comportement des clients est géré par la classe ClientManager, qui ordonne la création de nouveaux clients selon une loi de pro- 46 Vers une capitalisation des connaissances orient´ee utilisateur : extraction et structuration automatiques de l’information issue de sources ouvertes Laurie Serrano To cite this version: Laurie Serrano. Vers une capitalisation des connaissances orient´ee utilisateur : extraction et structuration automatiques de l’information issue de sources ouvertes . Computer Science. Universt´e de Caen, 2014. French. HAL Id: tel-01082975 https://hal.archives-ouvertes.fr/tel-01082975 Submitted on 14 Nov 2014 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. 
L’archive ouverte pluridisciplinaire HAL, est destin´ee au d´epˆot et `a la diffusion de documents scientifiques de niveau recherche, publi´es ou non, ´emanant des ´etablissements d’enseignement et de recherche fran¸cais ou ´etrangers, des laboratoires publics ou priv´es.Universit´e de Caen Basse-Normandie Ecole doctorale SIMEM ´ Th`ese de doctorat pr´esent´ee et soutenue le : 24/01/2014 par Laurie Serrano pour obtenir le Doctorat de l’Universit´e de Caen Basse-Normandie Sp´ecialit´e : Informatique et applications Vers une capitalisation des connaissances orient´ee utilisateur Extraction et structuration automatiques de l’information issue de sources ouvertes Directrice de th`ese : Maroua Bouzid Jury Laurence Cholvy Directrice de recherche DTIM, ONERA (Rapporteur) Thierry Poibeau Directeur de recherche LaTTiCe, ENS (Rapporteur) Fatiha Sa¨ıs Maˆıtre de conf´erences LRI, Univ. Paris 11 (Examinatrice) Ga¨el Dias Professeur des universit´es GREYC, Univ. de Caen (Examinateur) Stephan Brunessaux Senior expert Cassidian, EADS (Co-directeur de th`ese) Thierry Charnois Professeur des universit´es LIPN, Univ. Paris 13 (Co-encadrant de th`ese) Maroua Bouzid Professeur des universit´es GREYC, Univ. de Caen (Directrice de th`ese)Mis en page avec la classe thloria.Résumé Face à l’augmentation vertigineuse des informations disponibles librement (notamment sur le Web), repérer efficacement celles qui présentent un intérêt s’avère une tâche longue et complexe. Les analystes du renseignement d’origine sources ouvertes sont particulièrement concernés par ce phénomène. En effet, ceux-ci recueillent manuellement une grande partie des informations d’intérêt afin de créer des fiches de connaissance résumant le savoir acquis à propos d’une entité. Dans ce contexte, cette thèse a pour objectif de faciliter et réduire le travail des acteurs du renseignement et de la veille. Nos recherches s’articulent autour de trois axes : la modélisation de l’information, l’extraction d’information et la capitalisation des connaissances. Nous avons réalisé un état de l’art de ces différentes problématiques afin d’élaborer un système global de capitalisation des connaissances. Notre première contribution est une ontologie dédiée à la représentation des connaissances spécifiques au renseignement et pour laquelle nous avons défini et modélisé la notion d’événement dans ce domaine. Par ailleurs, nous avons élaboré et évalué un système d’extraction d’événements fondé sur deux approches actuelles en extraction d’information : une première méthode symbolique et une seconde basée sur la découverte de motifs séquentiels fréquents. Enfin, nous avons proposé un processus d’agrégation sémantique des événements afin d’améliorer la qualité des fiches d’événements obtenues et d’assurer le passage du texte à la connaissance. Celui-ci est fondé sur une similarité multidimensionnelle entre événements, exprimée par une échelle qualitative définie selon les besoins des utilisateurs. Mots-clés: Gestion des connaissances, exploration de données, représentation des connaissances, renseignement d’origine sources ouvertes, ontologies (informatique), Web sémantique. Abstract Due to the considerable increase of freely available data (especially on the Web), the discovery of relevant information from textual content is a critical challenge. Open Source Intelligence (OSINT) specialists are particularly concerned by this phenomenon as they try to mine large amounts of heterogeneous information to acquire actionable intelligence. 
This collection process is still largely done by hand in order to build knowledge sheets summarizing all the knowledge acquired about a specific entity. Given this context, the main goal of this thesis work is to reduce and facilitate the daily work of intelligence analysts. For this sake, our researches revolve around three main axis : knowledge modeling, text mining and knowledge gathering. We explored the literature related to these different domains to develop a global knowledge gathering system. Our first contribution is the building of a domain ontology dedicated to knowledge representation for OSINT purposes and that comprises a specific definition and modeling of the event concept for this domain. Secondly, we have developed and evaluated an event recognition system which is based on two different extraction approaches : the first one is based on hand-crafted rules and the second one on a frequent pattern learning technique. As our third contribution, we proposed a semantic aggregation process as a necessary post-processing step to enhance the quality of the events extracted and to convert extraction results into actionable knowledge. This is achieved by means of multiple similarity measures between events, expressed according a qualitative scale which has been designed following our final users’ needs. Keywords: Knowledge management, data mining, knowledge representation (information theory), open source intelligence, ontologies (information retrieval), Semantic Web.Remerciements Cette thèse ayant été réalisée dans le cadre d’une convention industrielle, je tiens tout d’abord à remercier les personnes de mon entreprise et de mon laboratoire qui en sont à l’origine : notamment Stephan Brunessaux et Bruno Grilheres pour m’avoir recrutée et encadrée en stage puis proposé cette collaboration, mais aussi mes encadrants académiques, Maroua Bouzid et Thierry Charnois. Je vous suis à tous extrêmement reconnaissante pour le temps que vous m’avez consacré du début à la fin de cette thèse ainsi que pour toutes nos discussions enrichissantes qui ont permis à mes travaux d’avancer. Je tiens par ailleurs à remercier l’ensemble des membres du jury : Laurence Cholvy et Thierry Poibeau pour avoir accepté de rapporter mon travail ainsi que Fatiha Saïs et Gaël Dias pour leur participation au jury de soutenance. Mes remerciements les plus sincères vont également aux deux équipes au sein desquelles j’ai été intégrée pendant ces trois ans. Je remercie tous les membres de l’équipe MAD pour leur accueil et leurs conseils notamment lors des groupes de travail. Mais aussi et bien sûr tous les membres de l’équipe IPCC : vous m’avez accueillie les bras ouverts et je garderai une place pour vous tous dans ma petite tête, je ne pouvais pas rêver mieux comme équipe pour une première expérience professionnelle. Je te remercie encore Stephan de m’avoir recrutée, Bruno pour ton encadrement justement dosé qui m’a guidé tout en favorisant mon autonomie, Khaled mon cher collègue de bureau pour tes conseils mais aussi pour nos rires même dans les moments de rush. Yann, Arnaud, Emilien, Amandine pour votre aide précieuse quand je venais vous embêter avec mes questions et tous les autres bien sûr qui m’ont spontanément apporté leur aide et soutien. Fred (je ne t’oublie pas non non), notre "chef de centre" farceur et toujours là pour entretenir cette bonne humeur qui caractérise notre équipe, je te remercie pour toutes nos rigolades. 
Toi aussi, Dafni, ma grecque préférée, je te dois énormément, professionnellement et personnellement, tu as su me comprendre et me conseiller à tout moment et je ne l’oublierai pas. Véro, ma tata, mon amie, tu as été là pour moi depuis le tout premier jour quand tu es venue me chercher à la gare et que nous avons tout de suite sympathisé. Je te remercie pour tout ce que tu as fait pour moi et tout ce que tu continues d’apporter à cette équipe avec ton grand cœur. Je pense aussi à mes amis qui ont été présents pendant cette aventure, tout proche ou à distance, ponctuellement ou en continu, dans les moments difficiles ou pour le plaisir, mais tous toujours là. Milie, Manon, Laure, Lucile, Rosario, Chacha, Marlou, je vous remercie mes chers Talistes/Taliens pour votre soutien, votre écoute, votre folie, nos retrouvailles, nos fiestas et bien d’autres moments passés avec vous. Un gros merci également à mes amis et colocs rouennais, Romain, Etienne, mon Aldricou, Nico, Juline, Manuella, Camille et bien d’autres, vous avez su me changer les idées durant nos traditionnelles soirées à l’Oka, à la coloc, nos escapades au ski, à Brighton et ailleurs. Je suis fière de vous avoir rencontrés et je ne vous oublierai pas. Une pensée pour toi aussi ma Lady, toi qui m’a soutenue dans l’un des moments les plus difficiles de cette thèse. Je vous remercie Julien, Xavier, Mariya et Laura, mes chers amis de Caen, avec vous j’ai pu partager mes petits soucis de thésarde et passer de très agréables soirées caennaises. Last but not least, vous mes Trouyiens adorés, mes amis de toujours, éparpillés en France et ailleurs, Bolou, Maelion, Solenou, Loicou, Tildou, Emilie et Quitterie, sans même vous en rendre compte, vous m’avez aidée pendant cette thèse oui oui, car cette thèse n’est qu’une petite partie d’une aventure bien plus enrichissante... Enfin, je veux remercier mes parents, los de qui cau, qui, drets sus la tèrra, m’ont transmis des valeurs chères à mes yeux et m’ont poussée à aller au bout de ce que j’entreprends et de ce que j’aime. iii Copyright c 2013 - CASSIDIAN - All rights reservediv Copyright c 2013 - CASSIDIAN - All rights reservedTable des matières Résumé i Abstract i Remerciements iii Table des figures x Liste des tableaux xi Introduction 1 1 Contexte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.1 Renseignement d’Origine Sources Ouvertes . . . . . . . . . . . . . . . . . . 3 1.2 Media Mining & la plateforme WebLab . . . . . . . . . . . . . . . . . . . . 5 2 Objectifs et axes de recherche . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 3 Contributions de la thèse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 4 Organisation du mémoire . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 I État de l’art 13 1 Représentation des connaissances 17 1.1 Données, informations et connaissances . . . . . . . . . . . . . . . . . . . . . . . . 18 1.2 L’information sémantique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 1.2.1 Le Web sémantique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 1.2.2 Les ontologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 1.2.3 Les langages de représentation . . . . . . . . . . . . . . . . . . . . . . . . . 22 1.2.4 Inférence et bases de connaissances . . . . . . . . . . . . . . . . . . . . . . 23 1.2.5 Les éditeurs d’ontologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 1.3 Modélisation des événements . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . 25 1.3.1 Qu’est-ce qu’un événement ? . . . . . . . . . . . . . . . . . . . . . . . . . 26 v Copyright c 2013 - CASSIDIAN - All rights reservedTable des matières 1.3.1.1 Les événements en extraction d’information . . . . . . . . . . . . . . 27 1.3.1.2 Les ontologies orientées "événement" . . . . . . . . . . . . . . . . . . 28 1.3.2 Modélisation du temps et de l’espace . . . . . . . . . . . . . . . . . . . . . 33 1.3.2.1 Représentation du temps . . . . . . . . . . . . . . . . . . . . . . . . . 33 1.3.2.2 Représentation de l’espace . . . . . . . . . . . . . . . . . . . . . . . . 34 1.3.3 Spécifications dédiées au ROSO . . . . . . . . . . . . . . . . . . . . . . . . 35 1.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 2 Extraction automatique d’information 39 2.1 Définition et objectifs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 2.2 Approches d’extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 2.2.1 Extraction d’entités nommées et résolution de coréférence . . . . . . . . . . 43 2.2.2 Extraction de relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 2.2.3 Extraction d’événements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 2.3 Plateformes et logiciels pour l’EI . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 2.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 2.5 Évaluation des systèmes d’EI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 2.5.1 Campagnes et projets d’évaluation . . . . . . . . . . . . . . . . . . . . . . . 54 2.5.2 Performances, atouts et faiblesses des méthodes existantes . . . . . . . . . . 56 2.6 Problèmes ouverts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 2.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 3 Capitalisation des connaissances 61 3.1 Fusion de données . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 3.1.1 Réconciliation de données . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 3.1.2 Web de données . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 3.1.3 Similarité entre données . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 3.2 Capitalisation appliquée aux événements . . . . . . . . . . . . . . . . . . . . . . . 66 3.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 II Contributions de la thèse 69 4 Modélisation des connaissances du domaine 73 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 4.2 Notre modèle d’événement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 4.2.1 La dimension conceptuelle . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 4.2.2 La dimension temporelle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 vi Copyright c 2013 - CASSIDIAN - All rights reserved4.2.3 La dimension spatiale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 4.2.4 La dimension agentive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 4.3 WOOKIE : une ontologie dédiée au ROSO . . . . . . . . . . . . . . . . . . . . . . 78 4.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81 5 Extraction automatique des événements 83 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
84 5.2 La plateforme GATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 5.3 Extraction d’entités nommées . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 5.3.1 Composition de la chaine d’extraction . . . . . . . . . . . . . . . . . . . . . 87 5.3.2 Développement du module de règles linguistiques . . . . . . . . . . . . . . 88 5.4 Extraction d’événements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 5.4.1 Approche symbolique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 5.4.2 Apprentissage de patrons linguistiques . . . . . . . . . . . . . . . . . . . . 97 5.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 6 Agrégation sémantique des événements 101 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 6.2 Normalisation des entités . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 6.3 Similarité sémantique entre événements . . . . . . . . . . . . . . . . . . . . . . . . 105 6.3.1 Similarité conceptuelle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 6.3.2 Similarité temporelle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 6.3.3 Similarité spatiale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 6.3.4 Similarité agentive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109 6.4 Processus d’agrégation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 6.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 7 Expérimentations et résultats 115 7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 7.2 Évaluation du système d’extraction . . . . . . . . . . . . . . . . . . . . . . . . . . 116 7.2.1 Protocole d’évaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 7.2.2 Analyse des résultats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 7.2.3 Bilan de l’évaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 7.3 Premières expérimentions sur l’agrégation sémantique . . . . . . . . . . . . . . . . 122 7.3.1 Implémentation d’un prototype . . . . . . . . . . . . . . . . . . . . . . . . 122 7.3.2 Jeu de données . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 7.3.3 Exemples d’observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 7.3.4 Bilan de l’expérimentation . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 vii Copyright c 2013 - CASSIDIAN - All rights reservedTable des matières 7.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 Conclusion et perspectives 131 1 Synthèse des contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 1.1 État de l’art . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 1.2 Un modèle de connaissances pour le ROSO . . . . . . . . . . . . . . . . . . 133 1.3 Une approche mixte pour l’extraction automatique des événements . . . . . 134 1.4 Un processus d’agrégation sémantique des événements . . . . . . . . . . . . 134 1.5 Évaluation du travail de recherche . . . . . . . . . . . . . . . . . . . . . . . 135 2 Perspectives de recherche . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
136 Annexes 139 A WOOKIE : taxonomie des concepts 141 B WOOKIE : événements spécifiques au ROSO 143 C WOOKIE : relations entre concepts 145 D WOOKIE : attributs des concepts 147 E GATE : exemple de chaine de traitement 149 F Gazetteer pour la détection de personnes en français 151 G L’ontologie-type pizza.owl 153 H Extrait de l’ontologie pizza.owl au format OWL 155 I Exemple de document WebLab contenant des événements 159 J Exemple de règle d’inférence au formalisme Jena 163 K Extrait d’un document du corpus d’apprentissage 165 L Extrait d’un document du corpus de test 167 M Source s12 : dépêche de presse à l’origine des événements Event1 et Event2 169 N Source s3 : dépêche de presse à l’origine de l’événement Event3 171 Bibliographie 173 viii Copyright c 2013 - CASSIDIAN - All rights reservedTable des figures 1 Architecture de la plateforme WebLab . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 2 Système de capitalisation des connaissances proposé . . . . . . . . . . . . . . . . . . . 9 1.1 Linking Open Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 1.2 L’environnement Protégé . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 1.3 L’ontologie Event : modélisation des événements . . . . . . . . . . . . . . . . . . . . . 29 1.4 LODE : modélisation des événements . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 1.5 LODE : alignements entre propriétés . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 1.6 SEM : modélisation des événements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 1.7 DUL : modélisation des événements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 1.8 CIDOC CRM : taxonomie des classes . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 1.9 CIDOC CRM : modélisation des événements . . . . . . . . . . . . . . . . . . . . . . . 33 1.10 Algèbre temporel d’Allen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 1.11 Les relations topologiques RCC-8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 4.1 Le pentagramme du renseignement . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 5.1 Exemple de règle d’extraction exprimée dans le formalisme JAPE . . . . . . . . . . . . 85 5.2 Règle d’extraction de dates en français . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 5.3 Règle d’extraction de dates en anglais . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 5.4 Extrait du gazetteer org_key.lst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 5.5 Règle d’extraction d’organisations en anglais . . . . . . . . . . . . . . . . . . . . . . . 91 5.6 Extrait du gazetteer person_pre.lst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92 5.7 Règle d’extraction de personnes en français . . . . . . . . . . . . . . . . . . . . . . . . 92 5.8 Extrait du gazetteer loc_key.lst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 5.9 Règle d’extraction de lieux en français . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 5.10 Gazetteer bombings.lst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 5.11 Exemple d’analyse syntaxique en dépendance . . . . . . . . . . . . . . . . . . . . . . . 96 5.12 Extraction des événements : différentes étapes . . . . . . . . . . . . . . . . . . . . . . 96 5.13 Extraction des événements : chaine de traitement GATE pour l’anglais . . . . . . . . . . 97 5.14 Extraction des événements : exemple d’annotation GATE . . . . . . . . . . . . . . 
. . 97 5.15 Visualisation et sélection des motifs avec l’outil Camelis . . . . . . . . . . . . . . . . . 100 6.1 Désambiguïsation des entités spatiales : exemple de triplets RDF/XML produits . . . . . 104 7.1 Exemples de motifs séquentiels fréquents sélectionnés . . . . . . . . . . . . . . . . . . 118 7.2 Nombre de motifs retournés en fonction des paramètres choisis . . . . . . . . . . . . . . 119 7.3 Un exemple d’événement issu de la base GTD . . . . . . . . . . . . . . . . . . . . . . . 125 ix Copyright c 2013 - CASSIDIAN - All rights reservedTABLE DES FIGURES 7.4 Similarités entre événements : extrait représenté en RDF/XML . . . . . . . . . . . . . . 126 7.5 Visualisation des 3 événements extraits sur une carte géographique . . . . . . . . . . . 128 x Copyright c 2013 - CASSIDIAN - All rights reservedListe des tableaux 5.1 Classes argumentales pour l’attribution des rôles sémantiques . . . . . . . . . . . . . . 96 6.1 Normalisation des dates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 7.1 Chaines d’extraction d’événements : variantes évaluées . . . . . . . . . . . . . . . . . . 119 7.2 Extraction d’événements : précision, rappel et F-mesure . . . . . . . . . . . . . . . . . . 120 7.3 Extraction d’événements : apport de l’analyse syntaxique . . . . . . . . . . . . . . . . . 121 7.4 Extraction d’événements : influence de la REN . . . . . . . . . . . . . . . . . . . . . . 121 7.5 Alignement des types d’événement entre le modèle GTD et l’ontologie WOOKIE . . . . 124 7.6 Événements extraits et leurs dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . 125 7.7 Exemple de 3 événements agrégés automatiquement . . . . . . . . . . . . . . . . . . . 127 7.8 Fiches d’événements de référence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 xi Copyright c 2013 - CASSIDIAN - All rights reservedLISTE DES TABLEAUX xii Copyright c 2013 - CASSIDIAN - All rights reservedIntroduction Sommaire 1 Contexte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.1 Renseignement d’Origine Sources Ouvertes . . . . . . . . . . . . . . . . 3 1.2 Media Mining & la plateforme WebLab . . . . . . . . . . . . . . . . . . . 5 2 Objectifs et axes de recherche . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 3 Contributions de la thèse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 4 Organisation du mémoire . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 1 Copyright c 2013 - CASSIDIAN - All rights reservedIntroduction Le savoir occupe aujourd’hui et depuis toujours une place centrale dans notre société, il est au cœur de toute activité humaine. Épanouissement intellectuel pour certains et capital pour d’autres, la connaissance est considérée comme une richesse pouvant servir un large panel d’objectifs. Que ce soit à des fins personnelles ou professionnelles, la principale visée de l’acquisition du savoir quel qu’il soit est, sans aucun doute, de mieux appréhender et de comprendre notre environnement. Dans des situations variées et en constante mutation, la capacité à observer et à analyser ce qui nous entoure (objets, personnes, situations, relations, faits, etc.) est un préalable fondamental à tout processus de prise de décision. L’essor d’Internet et des nouvelles technologies de l’information a récemment déstabilisé les principaux mécanismes traditionnels de gestion de la connaissance. Passé d’un ensemble restreint et structuré de silos à la Toile, le savoir est de plus en plus accessible à tous et partout. 
Ce changement provient principalement de la démocratisation des moyens de communication et de publication de l’information. Deux problématiques principales émergent alors : – Comment faire face à cette nouvelle masse disponible qui constitue une mine d’or mais peut également s’avérer néfaste à notre acquisition de connaissance ? – Quels moyens mettre en place pour extraire un savoir homogène à partir de contenus de plus en plus diversifiés sur le fond et sur la forme ? Celles-ci occupent une place centrale dans divers domaines pour lesquels l’acquisition du savoir est stratégique : le Renseignement d’Origine Sources Ouvertes (ROSO) est l’un de ces domaines. 2 Copyright c 2013 - CASSIDIAN - All rights reserved1. Contexte 1 Contexte En guise d’introduction de nos travaux, nous proposons tout d’abord un rapide tour d’horizon du contexte de cette étude, réalisée dans un cadre à la fois académique et applicatif. Nous parlerons, dans un premier temps, du Renseignement d’Origine Sources Ouvertes constituant le fil directeur de nos recherches en termes de besoins opérationnels. Puis, nous introduirons un champ de recherche visant à répondre à ces besoins et dans lequel se situe plus précisément notre sujet de recherche, à savoir la fouille de documents multimédia, plus communément désignée par le terme de Media Mining. 1.1 Renseignement d’Origine Sources Ouvertes Le Renseignement d’Origine Sources Ouvertes (dit ROSO) désigne toute activité de recueil et d’analyse de l’information disponible publiquement et légalement (presse écrite, blogs, sites internet, radio, télévision, etc.) [Best and Cumming, 2007]. Initialement définie dans le domaine de la défense, cette activité est aujourd’hui menée plus largement à des fins stratégiques et économiques sous le nom de veille à partir de sources ouvertes. En effet, les Sources Ouvertes (SO) sont très prolifiques et peuvent, par exemple, fournir les données nécessaires pour analyser la situation d’un pays : caractéristiques géopolitiques et sociales, diffé- rents acteurs économiques, politiques, militaires, terroristes ou criminels, etc. Lors d’une crise, l’analyse systématique des médias nationaux et internationaux peut permettre, par exemple, de produire automatiquement une synthèse qui facilitera les prises de décision. Des processus de veille peuvent également être mis en œuvre pour effectuer une recherche pro-active d’informations liées à l’environnement, aux opportunités ou aux menaces qui constituent des signaux faibles et qui doivent faire l’objet d’une écoute anticipative. Les objectifs premiers du ROSO sont les suivants : – Savoir : s’informer sur les intentions d’un acteur intérieur ou extérieur, comprendre cet acteur et la situation ; – Prévoir : anticiper les évolutions, prévenir les menaces, influencer les situations. Le cycle du renseignement a été défini pour répondre à ces deux objectifs et est constitué de 5 étapes : 1. Orientation : il s’agit de définir le besoin en renseignement et spécifier les indicateurs permettant de valider la réussite de l’action de renseignement, 2. Planification : elle consiste à trouver les sources et les gisements d’information d’intérêt et à définir le besoin de veille, 3. Recherche : elle vise à réaliser l’acquisition et le pré-traitement des données à partir des gisements précédemment définis, 4. Exploitation : il s’agit d’analyser le contenu des données, de les filtrer pour en faire émerger de la connaissance, 5. 
Diffusion : elle consiste à faire la synthèse et à remonter l’information utile vers le décideur. Le ROSO est aujourd’hui devenu un processus complexe à mettre en œuvre. En effet, avec la croissance du Web 2.0 et la multiplication des gisements d’information, les spécialistes du renseignement 3 Copyright c 2013 - CASSIDIAN - All rights reservedIntroduction et les veilleurs se trouvent confrontés, d’une part, à une masse de données toujours plus importante et, d’autre part, à une diversité croissante des formats et structures. Face à ce nouveau phénomène, des systèmes d’information plus performants sont désormais nécessaires pour accéder et traiter cette masse d’information. Ces systèmes sont notamment indispensables pour dépasser les limites des moteurs de recherche "grand public" et proposer une collecte ciblée et précise de l’information à la fois dans le Web public, mais aussi dans le Web profond. Ils doivent également être capables de détecter les informations pouvant constituer des signes précoces de changement ou de menace. Dès lors, le développement et l’utilisation de tels outils deviennent des tâches de plus en plus complexes. L’essor des nouveaux moyens de communication favorise l’émergence des sources d’information mettant à disposition des informations aux caractéristiques particulières. La prise en compte de celles-ci est primordiale pour obtenir un système adapté à la fois aux besoins des analystes du renseignement mais également aux informations que ceux-ci sont amenés à traiter. Ainsi, les principales problématiques rencontrées lors du traitement des informations issues de sources ouvertes sont les suivantes : – L’hétérogénéité des formats et structures : les informations disponibles sont proposées dans des formats variés (pages HTML, flux RSS, réseaux sociaux, wiki, blogs, forums, etc.) et ne sont pas toujours structurées. Le traitement de ces ressources implique l’utilisation d’un ensemble varié d’outils dont l’interopérabilité n’est pas assurée. Pour atteindre ses objectifs, le veilleur doit se former à leur utilisation combinée, ce qui augmente encore la complexité de son travail. – Le multilinguisme : avec la démocratisation de l’accès à Internet, le nombre de langues employées sur la Toile a fortement augmenté ces dernières années, ce qui pose des problèmes d’intercompréhension. Les outils de traduction automatique deviennent donc clés afin de donner un accès à ces contenus dans des langues non maitrisées par l’analyste. – La quantité d’information à traiter : la quantité et le volume des informations mises à disposition aujourd’hui en sources ouvertes, notamment avec la croissance des contenus audio et vidéo en ligne, sont tels qu’il devient impossible de collecter manuellement toutes ces informations. En effet, la collecte de ces grands volumes nécessite des connexions réseaux rapides, du temps, ainsi qu’un espace de stockage important, dont on ne dispose généralement pas. De plus, face à cette quantité d’informations disponibles, le veilleur se trouve submergé et il devient impossible pour lui de traiter efficacement ces nouvelles données, et de discerner clairement les informations pertinentes pour sa tâche. – La qualité et l’interprétation des informations : les informations disponibles en sources ouvertes peuvent être peu fiables, contradictoires, dispersées, et il s’avère souvent difficile, voire impossible de savoir quel crédit leur accorder au premier abord. 
Il convient alors de recouper et d'analyser ces informations pour leur donner un sens et une valeur, ce qui reste une étape manuelle et donc coûteuse, à la fois en termes de temps et de moyens. Comment rassembler et sélectionner efficacement, dans la masse d'informations, les plus pertinentes, qui seront ensuite interprétées et auxquelles on tâchera de donner un sens et d'évaluer une crédibilité ?
Aujourd'hui, les plateformes de veille tentent de répondre à ces premières problématiques, mais du fait de l'évolution rapide des technologies, des formats et des structures d'information, ces systèmes ne sont pas toujours cohérents et évolutifs. S'il existe de nombreux outils publics et accessibles sur Internet (moteurs de recherche généralistes ou verticaux), une veille qui repose uniquement sur ceux-ci se trouve rapidement limitée. Pour répondre à des besoins d'information précis et garder une réactivité importante face à de nouveaux types d'information, la sélection des techniques les plus pertinentes reste indispensable. De plus, la prise en main de ces outils s'avère coûteuse en temps pour les analystes, c'est pourquoi l'efficacité d'un travail de veille va dépendre de l'intégration de ces divers outils au sein d'une seule et même plateforme, mais aussi et surtout des performances de chacun des composants de traitement de l'information.

1.2 Media Mining & la plateforme WebLab

La coordination d'un ensemble de techniques de traitement de l'information pour les besoins que nous venons d'évoquer est notamment l'objet de recherche de la fouille de documents multimédia ou Media Mining. En effet, l'exploitation d'un tel volume d'informations requiert l'automatisation de tout ou partie des traitements d'analyse, d'interprétation et de compréhension des contenus. Il s'agit donc de rechercher, de collecter, d'extraire, de classer, de transformer ou, plus généralement, de traiter l'information issue des documents disponibles et, enfin, de la diffuser de façon sélective en alertant les utilisateurs concernés. Cet ensemble d'analyses est généralement implémenté sous forme d'une même chaîne de traitement. Ceci permet de diminuer la quantité d'outils que l'analyste doit maîtriser et utiliser durant sa veille et, par là même, de faciliter le passage de l'information entre les différents services d'analyse. Ces chaînes de traitement apportent une valeur ajoutée dans la recherche d'informations à partir de sources ouvertes mais également dans le traitement de l'ensemble des informations numériques. Dans le cadre du ROSO, elles implémentent des technologies principalement issues des domaines de l'Intelligence Artificielle (IA) et de la gestion des connaissances (KM pour Knowledge Management). Concernant le traitement des données textuelles, par exemple, différentes approches complémentaires peuvent être utilisées. Ainsi, des techniques statistiques permettent d'analyser les contenus d'un grand nombre de documents pour déterminer automatiquement les sujets abordés en fonction des termes les plus discriminants. Par ailleurs, des techniques probabilistes peuvent également être utilisées avec succès pour identifier la langue d'un document. Des techniques d'analyse linguistique à base de grammaires permettent de réaliser d'autres types de traitement en Extraction d'information (EI) tels que la recherche d'expressions régulières, d'amorces de phrases, de noms de personnes, de dates d'événements, etc.
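À titre d'illustration, l'esquisse Python ci-dessous (purement indicative et sans lien avec les outils réellement intégrés dans les chaînes de traitement évoquées) montre comment une simple expression régulière permet de repérer des dates dans un texte brut, à la manière des traitements d'EI mentionnés ci-dessus :

import re

# Esquisse illustrative : repérage de dates au format JJ/MM/AAAA dans un texte brut.
DATE_PATTERN = re.compile(r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b")

def extraire_dates(texte):
    """Retourne les dates trouvées dans le texte, sous leur forme de surface."""
    return [m.group(0) for m in DATE_PATTERN.finditer(texte)]

exemple = "Une première réunion s'est tenue le 14/02/2012, une seconde le 03/05/2012."
print(extraire_dates(exemple))  # ['14/02/2012', '03/05/2012']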
Une analyse sémantique permet, quant à elle, de traduire les chaînes de caractères ou les données techniques contenues dans les documents multimédia en concepts de haut niveau. Par ailleurs, des ontologies de domaine peuvent être définies et utilisées pour annoter et rechercher les documents selon un modèle commun bien défini. La même variété de technologies et d'approches se retrouve dans le domaine du multimédia. La transcription de la parole, par exemple, permet de réaliser, sur les documents audio ou vidéo, des fonctions similaires à celles appliquées aux documents texte. Une fois transcrit, le document peut être indexé puis retrouvé de façon rapide et efficace. De plus, des outils de traduction peuvent également être intégrés dans une chaîne de traitement. Dans le domaine de l'image, la détection ou la reconnaissance de visages et la recherche par similarité peuvent permettre de retrouver automatiquement une information d'intérêt. La combinaison des techniques de fouille de textes et de fouille de documents audio ou vidéo ouvre la voie à des traitements de plus en plus puissants. Côté logiciels, il existe un grand nombre de solutions et de briques technologiques mises à disposition par des éditeurs commerciaux ou en open-source et par la communauté scientifique. Le choix de la meilleure brique est souvent très difficile car les critères sont nombreux, variés et évolutifs. Pour composer des offres "sur mesure" bien adaptées au besoin, des plateformes dites d'intégration permettent d'assembler et de faire inter-opérer les outils sélectionnés. Le choix d'une plateforme devient alors un enjeu essentiel pour produire un système de traitement des informations structurées et non structurées. Seule une plateforme basée sur des standards largement répandus peut permettre d'assurer l'interopérabilité des outils retenus. WebLab est une de ces plateformes d'intégration : elle est développée et maintenue par la société Cassidian et constitue le socle fonctionnel et technique au sein duquel nos recherches se sont déroulées [Giroux et al., 2008]. La plateforme WebLab vise à faciliter l'intégration de composants logiciels, plus particulièrement dédiés au traitement de documents multimédia et d'informations non-structurées (texte, image, audio et vidéo), au sein d'applications dédiées à diverses activités de veille telles que le ROSO mais aussi la veille économique et stratégique. Différents composants spécifiques viennent régulièrement enrichir cette plateforme pour lui offrir des fonctionnalités de collecte de données sur des sources ouvertes (Internet, TV ou radio, presse écrite, etc.) ou dans des entrepôts privés, de traitement automatique des contenus (extraction d'information, analyse sémantique, classification, transcription de la parole, traduction, segmentation, reconnaissance d'écriture, etc.), de capitalisation (stockage, indexation, enregistrement dans des bases de connaissance, etc.) et d'exploitation des connaissances (recherche avancée, visualisation et synthèse graphique, aide à la décision, etc.). Les objectifs scientifiques et technologiques de la plateforme sont multiples :
– Il s'agit de définir un modèle de référence, basé sur les standards du Web Sémantique (XML, RDF, RDFS, OWL, SPARQL, etc.), permettant à des composants logiciels hétérogènes d'échanger efficacement des données brutes, des méta-données associées ou des informations élaborées de façon automatique.
– Il s'agit également de proposer des interfaces génériques de services afin de normaliser les interactions entre les composants et de simplifier la construction de chaînes de traitement au sein desquelles ils sont mis en œuvre conjointement.
– Il s'agit enfin de proposer et de mettre à disposition un ensemble de briques logicielles réutilisables et composables pour construire rapidement des applications adaptées à un besoin particulier. Ces briques prennent la forme de services qui couvrent un large spectre de fonctionnalités et de composants d'IHM (Interface Homme-Machine) qui permettent de piloter les services et d'en exploiter les résultats côté utilisateur.
Enfin, l'IHM est constituée par assemblage de composants qui s'exécutent au sein d'un portail Web personnalisable par l'utilisateur final. Pour des besoins nouveaux ou spécifiques, les services et composants d'IHM peuvent être créés de toutes pièces ou développés en intégrant des composants du commerce et/ou open-source. L'architecture de cette plateforme est résumée par la figure 1.
FIGURE 1 – Architecture de la plateforme WebLab

2 Objectifs et axes de recherche

Étant donné le contexte que nous venons de décrire, cette thèse a pour objectif de faciliter et de réduire le travail des analystes dans le cadre du ROSO et de la veille plus généralement. Face à l'augmentation constante des informations disponibles pour tous librement et légalement, notamment sur le Web, et face à l'hétérogénéité des contenus, il s'agit de proposer un système global de capitalisation des connaissances permettant aux acteurs du ROSO d'exploiter cette masse d'informations. Nos recherches s'articulent autour de trois axes principaux, correspondant à trois des nombreuses problématiques de l'Intelligence Artificielle :
– Modélisation de l'information
– Extraction d'information
– Capitalisation des connaissances
Nos travaux au sein de ce premier axe visent à définir l'étendue et la nature des informations d'intérêt pour le domaine du ROSO, c'est-à-dire mettre en place un modèle de connaissances. Plus concrètement, il s'agira de recenser, définir et formaliser sémantiquement l'ensemble des concepts utilisés par les experts de ce domaine et leurs relations. Ce modèle servira de socle de référence à notre processus global de capitalisation des connaissances : pour exprimer l'ensemble des informations de façon unifiée mais aussi assurer une communication entre les différents services de traitement de l'information développés. Nous explorerons pour cela les travaux existants et notamment la représentation sous forme d'ontologie de domaine qui est, à l'heure actuelle, le mode de représentation le plus utilisé dans ce but. Le second axe a pour objectif de proposer une approche nouvelle d'extraction d'information à partir de textes en langage naturel. Celle-ci devra permettre de repérer automatiquement l'ensemble des entités et événements d'intérêt pour le ROSO définis au sein du premier axe de recherche. Pour ce faire, nous nous intéresserons notamment à la combinaison de différentes techniques actuelles (linguistiques, statistiques ou hybrides) afin d'améliorer la qualité des résultats obtenus.
Le dernier axe de recherche vise à définir un processus de transformation des informations extraites en réelles connaissances, c'est-à-dire les normaliser, les structurer, les relier, considérer les problématiques de continuité (redondance/contradiction, temps/espace), etc. Ces traitements doivent aboutir à la création de fiches de connaissances destinées aux analystes, résumant l'ensemble du savoir acquis automatiquement au sujet d'une entité d'intérêt. Celles-ci seront stockées et gérées au sein d'une base de connaissances pour permettre leur mise à jour lors du traitement de nouveaux documents mais également des mécanismes de raisonnement/inférence afin d'en déduire de nouvelles connaissances. Une place importante sera réservée à l'articulation de ces trois axes de recherche au sein d'un processus global de capitalisation des connaissances que nous souhaitons maintenir le plus générique et flexible possible. La figure 2 présente de façon synthétique la problématique et les objectifs de nos recherches.
FIGURE 2 – Système de capitalisation des connaissances proposé

3 Contributions de la thèse

Nos travaux de recherche ont donné lieu à plusieurs contributions selon les objectifs que nous venons de définir ainsi qu'à un certain nombre de publications que nous listons ci-dessous. Nous avons tout d'abord réalisé un état de l'art des différents axes de recherche abordés par le sujet. Suite à cela, nous avons mis en place une ontologie de domaine nommée WOOKIE (Weblab Ontology for Open sources Knowledge and Intelligence Exploitation) dédiée à la représentation des connaissances spécifiques au ROSO et à la veille de façon plus générale. Nous avons notamment défini, et intégré au sein de WOOKIE, un modèle de représentation de l'événement en prenant pour base les conclusions de l'état de l'art. Un événement y est défini comme une entité complexe à quatre dimensions : une dimension conceptuelle (le type de l'événement), une dimension temporelle (la date de l'événement), une dimension spatiale (le lieu de l'événement) et une dimension agentive (les participants de l'événement). Dans un second temps, nous avons élaboré et évalué un système d'extraction d'événements dit "mixte". En effet, les travaux explorés dans le domaine de l'extraction d'information ayant mis en évidence un certain nombre de limites des techniques existantes (symboliques et statistiques), nous nous sommes orientés vers une approche combinant deux techniques actuelles. La première méthode proposée consiste en des règles d'extraction élaborées manuellement couplées avec une analyse syntaxique en dépendance. La seconde est basée sur un apprentissage dit "symbolique" de patrons linguistiques par extraction de motifs séquentiels fréquents. Nous avons implémenté ces deux extracteurs en prenant pour base l'ontologie de domaine WOOKIE ainsi que notre représentation des événements et en assurant une possible intégration au sein de la plateforme WebLab. Une première évaluation de ces deux extracteurs a été mise en œuvre et publiée (voir les publications ci-dessous). Chacune des deux méthodes a obtenu des résultats satisfaisants et comparables à l'état de l'art. Cette évaluation a également montré qu'un processus d'agrégation adapté permettrait d'exploiter au mieux les points forts de ces deux approches et ainsi d'améliorer significativement la qualité de l'extraction d'événements. Ce processus d'agrégation constitue notre troisième contribution. Pour cela, nous avons exploré notamment les travaux existants en fusion de données et plus particulièrement en fusion d'informations textuelles.
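Pour fixer les idées avant d'aborder ce processus d'agrégation, l'esquisse Python ci-dessous (hypothétique et indépendante de l'implémentation réelle fondée sur l'ontologie WOOKIE) illustre la représentation d'événement à quatre dimensions décrite plus haut :

from dataclasses import dataclass, field
from typing import Optional

# Esquisse hypothétique de la représentation d'événement à quatre dimensions
# (conceptuelle, temporelle, spatiale, agentive) décrite dans le texte.
@dataclass
class Evenement:
    type_evenement: str                  # dimension conceptuelle (ex. "Reunion")
    date: Optional[str] = None           # dimension temporelle (ex. "2012-02-14")
    lieu: Optional[str] = None           # dimension spatiale (ex. "Paris")
    participants: list = field(default_factory=list)  # dimension agentive

evt = Evenement(type_evenement="Reunion", date="2012-02-14",
                lieu="Paris", participants=["Pierre"])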
Nous avons choisi d'élaborer un processus d'agrégation sémantique multi-niveaux : une similarité entre événements est estimée au niveau de chaque dimension, puis nous définissons un processus d'agrégation global basé sur ces similarités intermédiaires pour aider l'utilisateur à déterminer si deux événements extraits réfèrent ou non à un même événement dans la réalité. Les similarités sont exprimées selon une échelle qualitative définie en prenant en compte les besoins des utilisateurs finaux. Enfin, nous avons implémenté un prototype d'évaluation permettant l'agrégation des événements suivant ce processus. Nos travaux de recherche ont donné lieu à plusieurs publications dans des conférences nationales et internationales dans les domaines abordés par cette thèse :
– Serrano, L., Grilheres, B., Bouzid, M., and Charnois, T. (2011). Extraction de connaissances pour le renseignement en sources ouvertes. In Atelier Sources Ouvertes et Services (SOS 2011), en conjonction avec la conférence internationale francophone (EGC 2011), Brest, France.
– Serrano, L., Charnois, T., Brunessaux, S., Grilheres, B., and Bouzid, M. (2012b). Combinaison d'approches pour l'extraction automatique d'événements (Automatic events extraction by combining multiple approaches) [in French]. In Actes de la conférence conjointe JEP-TALN-RECITAL 2012, volume 2 : TALN, Grenoble, France. ATALA/AFCP.
– Serrano, L., Bouzid, M., Charnois, T., and Grilheres, B. (2012a). Vers un système de capitalisation des connaissances : extraction d'événements par combinaison de plusieurs approches. In Atelier des Sources Ouvertes au Web de Données (SOS-DLWD'2012), en conjonction avec la conférence internationale francophone (EGC 2012), Bordeaux, France.
– Caron, C., Guillaumont, J., Saval, A., and Serrano, L. (2012). Weblab : une plateforme collaborative dédiée à la capitalisation de connaissances. In Extraction et gestion des connaissances (EGC'2012), Bordeaux, France.
– Serrano, L., Bouzid, M., Charnois, T., Brunessaux, S., and Grilheres, B. (2013b). Extraction et agrégation automatique d'événements pour la veille en sources ouvertes : du texte à la connaissance. In Ingénierie des Connaissances 2013 (IC 2013), Lille, France.
– Serrano, L., Bouzid, M., Charnois, T., Brunessaux, S., and Grilheres, B. (2013a). Events extraction and aggregation for open source intelligence: from text to knowledge. In IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2013), Washington DC, USA.

4 Organisation du mémoire

Ce mémoire est organisé en deux parties reflétant l'ensemble du travail de recherche accompli durant cette thèse :
– la première partie, État de l'art, est divisée en trois chapitres et présente l'étude de l'état de l'art réalisée ;
– la seconde partie, Contributions de la thèse, composée de quatre chapitres, expose l'ensemble des contributions réalisées.
Le premier chapitre, intitulé Représentation des connaissances, propose un tour d'horizon, centré sur notre problématique, du domaine de la représentation et de la modélisation des connaissances. Nous commençons par rappeler succinctement les concepts de base dans ce cadre avant d'aborder la thématique de l'information sémantique avec notamment les travaux autour du Web sémantique et des ontologies.
Enfin, nous centrons notre présentation sur l'objet central de cette thèse – les événements – afin de rappeler comment ce concept et ses propriétés ont été définis et modélisés jusqu'à nos jours au sein des différents axes de recherche mentionnés précédemment. Le second chapitre, Extraction automatique d'information, réalise un état de l'art autour de la problématique principale de nos travaux, à savoir l'extraction automatique d'information. Dans celui-ci, nous recensons notamment les différentes recherches menées récemment autour des trois grands types d'objets de l'EI que sont les entités nommées, les relations et enfin les événements. Puis, nous nous focalisons sur des aspects plus applicatifs à travers la présentation de quelques plateformes/logiciels pour l'EI et un certain nombre de cas d'application dans ce domaine. Nous clôturons ce chapitre en abordant les problématiques d'évaluation et les performances des méthodes proposées en EI. Le chapitre Capitalisation des connaissances termine cette partie d'état de l'art par une présentation d'ensemble de la problématique nouvelle que constitue la capitalisation des connaissances. Celle-ci aborde tout d'abord les travaux liés à notre sujet de recherche dans des domaines tels que la fusion et la réconciliation de données mais également le mouvement nouveau autour du Web de données. Enfin, nous présentons un ensemble d'approches existantes visant à appliquer les techniques de capitalisation au traitement des événements. En seconde partie, le quatrième chapitre, intitulé Modélisation des connaissances du domaine, détaille la première contribution proposée durant nos travaux de thèse : une modélisation des événements ainsi qu'une ontologie de domaine nommée WOOKIE. Celles-ci ont été élaborées en fonction des conclusions de notre état de l'art et de façon adaptée à notre problématique et notre cadre de recherche, à savoir l'extraction automatique des événements dans le cadre du ROSO. Le chapitre Extraction automatique des événements constitue le cœur de nos recherches et présente notre seconde contribution, c'est-à-dire l'ensemble de nos réalisations autour du second axe de nos recherches. Nous y détaillons, tout d'abord, la méthode que nous avons élaborée pour l'extraction automatique des entités nommées pour les langues anglaise et française. Puis, est explicitée et exemplifiée notre contribution centrale, à savoir la conception et la réalisation d'une approche pour l'extraction automatique des événements fondée sur deux méthodes issues de l'état de l'art que nous avons améliorées et adaptées à notre problématique et à notre cadre applicatif. Le chapitre suivant, Agrégation sémantique des événements, présente les recherches que nous avons menées dans le cadre du troisième axe "Capitalisation des connaissances". Tout d'abord, nous nous sommes intéressés à la réconciliation de diverses méthodes et systèmes pour proposer une méthodologie générique d'agrégation sémantique des événements issus des outils d'extraction. Ce chapitre en présente les fondements, autour d'une approche permettant d'estimer la similarité sémantique entre événements puis de les agréger pour faciliter le travail des analystes du ROSO. Notre mémoire se poursuit avec le dernier chapitre, nommé Expérimentations et résultats, dans lequel sont exposées deux expérimentations réalisées dans le cadre de cette thèse dans le but d'estimer les apports et limites des contributions présentées.
La première évaluation concerne notre approche pour l'extraction automatique des événements : pour ce faire, nous avons employé un corpus de test issu d'une campagne d'évaluation en EI ainsi que des métriques classiques dans cette discipline. La seconde évaluation est qualitative et montre l'apport de notre méthode d'agrégation sémantique des événements au travers d'exemples réels issus de nos travaux. Pour chacune de ces expérimentations, nous en exposons également les limites ainsi que les perspectives envisagées. Nous conclurons ce mémoire de recherche en rappelant l'ensemble des contributions réalisées, puis nous exposerons les différentes perspectives ouvertes par nos travaux.

Première partie
État de l'art

Introduction

Cette première partie a pour objectif de réaliser un tour d'horizon de l'existant dans les principaux domaines abordés par cette thèse. Le premier chapitre de cet état de l'art (chapitre 1) sera centré sur les concepts et approches actuels en représentation des connaissances. L'accent sera mis ici sur les technologies du Web sémantique et la modélisation des événements. Le chapitre suivant (chapitre 2) explorera les principales recherches et réalisations dans le domaine de l'extraction d'information et plus particulièrement les travaux concernant l'une de nos principales problématiques, à savoir l'extraction automatique des événements. Pour finir, nous aborderons, au travers du chapitre 3, un ensemble de travaux menés autour de la capitalisation des connaissances, notamment en fusion de données et résolution de coréférence entre événements. Pour conclure chacun de ces trois chapitres, nous dresserons un bilan des forces et faiblesses des approches explorées et nous introduirons chacune de nos contributions en réponse à cet état de l'art.

Chapitre 1
Représentation des connaissances

Sommaire
1.1 Données, informations et connaissances
1.2 L'information sémantique
1.2.1 Le Web sémantique
1.2.2 Les ontologies
1.2.3 Les langages de représentation
1.2.4 Inférence et bases de connaissances
1.2.5 Les éditeurs d'ontologies
1.3 Modélisation des événements
1.3.1 Qu'est-ce qu'un événement ?
1.3.1.1 Les événements en extraction d'information
1.3.1.2 Les ontologies orientées "événement"
1.3.2 Modélisation du temps et de l'espace
1.3.2.1 Représentation du temps
1.3.2.2 Représentation de l'espace
1.3.3 Spécifications dédiées au ROSO
1.4 Conclusions

Dans ce premier chapitre, nous réalisons un tour d'horizon, centré sur notre problématique de recherche, des domaines de la représentation et de la modélisation des connaissances. Nous commençons par rappeler succinctement les concepts de base de donnée, d'information et de connaissance, avant d'aborder la thématique de l'information sémantique avec notamment les travaux autour du Web sémantique et des ontologies. Enfin, nous centrons notre présentation sur l'objet central de cette thèse – les événements – afin de déterminer comment ce concept et ses propriétés sont définis et modélisés par les différents travaux actuels.

1.1 Données, informations et connaissances

La distinction entre les termes "donnée", "information" et "connaissance" est couramment abordée dans la littérature liée à la gestion des connaissances [Balmisse, 2002] [Crié, 2003] [Paquet, 2008]. Celle-ci est nécessaire afin, d'une part, de mieux comprendre les différentes problématiques soulevées par le traitement de l'information (au sens large) et, d'autre part, de pointer à quel niveau d'analyse se situent les différentes technologies et outils existants. Une donnée est généralement définie comme un élément brut non traité et disponible hors de tout contexte. Une fois collectées et traitées, par le cerveau humain ou par une machine, ces données deviennent des informations. Une information est le résultat de la contextualisation d'un ensemble de données afin d'en saisir les liens et de leur donner un sens. L'information est statique et périssable, sa valeur diminue dans le temps (car dépendante de son contexte qui est amené à varier). À l'inverse, la connaissance est le résultat d'un processus dynamique visant à assimiler/comprendre les principes sous-jacents à l'ensemble des informations obtenues. Cette compréhension permet de prévoir l'évolution future d'une situation et d'entreprendre des actions en conséquence. Sous réserve de cette interprétation profonde, une plus grande quantité d'information mène à une meilleure connaissance d'un sujet donné. Prenons par exemple la donnée brute "11/20" : sans indication de contexte, cette suite de symboles peut véhiculer diverses informations et ne peut être exploitée en l'état. Toutefois, si cela fait référence à la note obtenue en mathématiques par Pierre, cette donnée contextualisée prend du sens et devient une information. Par ailleurs, si cette première information est associée au fait que la moyenne de la classe s'élève à 13/20, chacun peut faire le lien entre ces deux informations et savoir (obtenir la connaissance) que Pierre a des difficultés dans cette matière. Ainsi, cette connaissance acquise, les parents de Pierre pourront, par exemple, prendre la décision de l'inscrire à du soutien scolaire. La distinction entre ces trois concepts correspond à la dimension hiérarchique de la connaissance proposée par [Charlot and Lancini, 2002]. Les auteurs définissent quatre autres dimensions dont la dimension épistémologique introduisant deux grandes formes de connaissance : celle dite "explicite" qui peut être codifiée et partagée, d'une part, et celle dite "tacite", d'autre part, difficilement exprimable dans un langage commun et donc peu transmissible. Cette définition suggérée par [Polanyi, 1966] est largement partagée par les cogniticiens.
Depuis les débuts du Web, ce passage des données aux connaissances s'est placé au centre des préoccupations de divers secteurs d'activité (industriel, gouvernemental, académique, etc.). En effet, avec l'explosion de la quantité d'information mise en ligne, il est devenu primordial, en particulier pour les organisations, de pouvoir exploiter cette masse afin d'en obtenir des connaissances. Nous sommes ainsi passés de l'époque des bases de données à celle des bases de connaissances.

1.2 L'information sémantique

Le développement du Web et des NTIC (Nouvelles Technologies de l'Information et de la Communication) a mis en avant un nouvel enjeu : le partage des connaissances. D'un Web de documents (Web 1.0) voué à la publication/visualisation statique des informations grâce au langage HTML, nous avons évolué vers un Web plus collaboratif et dynamique. Ce Web 2.0, introduit au début des années 2000, a constitué une première évolution dans ce sens en remettant l'homme au centre de la toile grâce aux blogs, réseaux sociaux, wikis et autres moyens lui permettant de créer, publier et partager ses propres connaissances. Dès les débuts du Web 2.0, Tim Berners-Lee évoquait déjà la prochaine "version" du World Wide Web : le Web sémantique [Berners-Lee et al., 2001]. Les sections suivantes présentent ce Web 3.0 et les différentes technologies associées.

1.2.1 Le Web sémantique

Le Web sémantique désigne un projet d'évolution de notre Web actuel (Web 2.0) initié par le W3C (World Wide Web Consortium, http://www.w3.org/). Cette initiative est aussi connue sous les noms de Web de données, Linked Data, Linked Open Data ou encore Linking Open Data. L'objectif principal de ce mouvement est de faire en sorte que la quantité de données exposée sur le Web (qui ne cesse de croître) soit disponible dans un format standard, accessible et manipulable de manière unifiée par toutes les applications du Web. Le Web sémantique de Tim Berners-Lee est voué à donner du sens à l'ensemble des données mises en ligne afin de faciliter la communication entre les hommes et les machines et permettre ainsi à ces dernières de réaliser des tâches qui incombaient jusqu'alors aux utilisateurs. Par ailleurs, le nom Web de données met en avant la nécessité de créer des liens entre données pour passer d'un Web organisé en silos de données déconnectés (ayant leurs propres formats, protocoles, applications, etc.) à un espace unifié sous la forme d'une collection de silos inter-connectés. La figure 1.1 présente l'état actuel du LOD (Linking Open Data cloud diagram, de Richard Cyganiak et Anja Jentzsch, http://lod-cloud.net/) sous la forme d'un graphe où les nœuds correspondent à des bases de connaissances respectant les principes du Web sémantique et les arcs représentent les liens existants. Parmi les plus renommés des silos de données, la base sémantique DBPedia reprend sous forme structurée une grande partie du contenu de Wikipedia et incorpore des liens vers d'autres bases de connaissances telles que Geonames, par exemple. Ces relations (sous la forme de triplets RDF – Resource Description Framework, http://www.w3.org/RDF/ –, voir la section 1.2.3) permettent aux applications Web d'exploiter la connaissance supplémentaire (et potentiellement plus précise) issue d'autres bases de connaissances et de fournir ainsi de meilleurs services à leurs utilisateurs.
FIGURE 1.1 – Linking Open Data (diagramme du nuage Linked Open Data, septembre 2011)
Pour mettre en œuvre les principes du Web sémantique, le W3C recommande un ensemble de technologies du Web sémantique (RDF, OWL, SKOS, SPARQL, etc.) fournissant un environnement dans lequel les applications du Web peuvent partager une modélisation commune, interroger "sémantiquement" le LOD (Linked Open Data), inférer de nouvelles connaissances à partir de l'existant, etc.
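À titre d'illustration, l'esquisse Python ci-dessous (utilisant la bibliothèque rdflib, à titre purement indicatif ; les URIs sont données en exemple) construit un triplet RDF reliant une ressource DBPedia à une ressource Geonames, à la manière des liens entre silos de données évoqués plus haut :

from rdflib import Graph, URIRef, Namespace

# Esquisse indicative : un triplet RDF reliant deux silos de données (URIs d'exemple).
OWL = Namespace("http://www.w3.org/2002/07/owl#")

g = Graph()
g.add((
    URIRef("http://dbpedia.org/resource/Paris"),    # sujet
    OWL.sameAs,                                     # prédicat
    URIRef("http://sws.geonames.org/2988507/"),     # objet
))
print(g.serialize(format="turtle"))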
Pour résumer, nous pouvons dire qu’une ontologie définit sémantiquement et de façon non-ambigüe un ensemble de concepts et propriétés issus d’un consensus. Un exemple commun dans la communauté d’ingénierie des connaissances est l’ontologie des pizza.owl servant couramment de base aux tutoriels dédiés aux ontologies. Cet exemple-type très complet contient un ensemble de classes, propriétés, attributs et instances du domaine ainsi que des fonctionnalités plus avancées de contraintes, cardinalités et axiomes. L’annexe G présente une vue d’ensemble de cette ontologie grâce au logiciel Protégé (présenté en section 1.2.5). 6. traduction de l’anglais "An ontology is an explicit specification of a conceptualization" 7. traduction de l’anglais "An ontology defines the basic terms and relations comprising the vocabulary of a topic area as well as the rules for combining terms and relations to define extensions to the vocabulary" 21 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 1. Représentation des connaissances Par ailleurs, on distingue différents types d’ontologie selon la portée de la modélisation [Guarino, 1998] : les ontologies dites "générales" ou "de haut niveau", les ontologies de domaine, de tâche ou d’application. Les premières visent à décrire les objets du monde communs à plusieurs domaines tels que le temps et l’espace alors que les trois autres types modélisent des concepts spécifiques. La création d’ontologies de haut niveau est parfois vue comme une utopie et la majorité des travaux de la littérature se fondent sur des ontologies de domaine. Plusieurs aspects peuvent rendre la construction d’une ontologie délicate et coûteuse en temps. En effet, ce travail de modélisation comporte une part de subjectivité car plusieurs visions d’un même domaine sont possibles et le temps pour arriver à un consensus peut s’en trouver allongé. Ce phénomène s’amplifie avec la complexité du domaine à modéliser mais également avec la taille de la communauté concernée. Afin de faciliter la tâche d’élaboration d’une ontologie des travaux tels que [Noy and Mcguinness, 2001], [Mizoguchi, 2003a] ou encore [Mizoguchi, 2003b] suggèrent un ensemble de bonnes pratiques. 1.2.3 Les langages de représentation Pour permettre aux ordinateurs d’exploiter cette modélisation, des langages informatiques de spéci- fication d’ontologie ont été créés dans le cadre du Web Sémantique. Les premiers langages ont été développés par la DARPA au début des années 2000, il s’agit de DAML 8 , OIL 9 [Fensel et al., 2001] ou DAML+OIL [Horrocks, 2002]. Ceux-ci sont les ancêtres des langages RDFS et OWL 10 devenus les recommandations actuelles du W3C. La majorité d’entre eux est fondée sur des formalismes inspirés du modèle des assertions en logique de premier ordre. RDF est le formalisme recommandé par le W3C mais d’autres existent tels que Common Logic 11 [Bachmair and Ganzinger, 2001], DOGMA 12 [Jarrar and Meersman, 2009], KIF 13 [Genesereth, 1991], F-Logic 14 [Kifer et al., 1995], etc. RDF est un formalisme pour la représentation de faits sur le Web fondé sur la notion de triplet. Tel les assertions en logique des prédicats, un triplet RDF est composé de trois éléments : un sujet, un prédicat et un objet. Le sujet et le prédicat sont des ressources et, dans le cas du Web sémantique, il s’agit de tout objet pouvant être identifié (une page Web, une personne, une propriété, etc.). Une ressource Web est représentée par un URI 15 qui peut être, par commodité, raccourci grâce à un espace de nom (namespace). 
L’objet du triplet, quant à lui, peut être soit une ressource (le triplet exprime une relation entre objets) soit une valeur (le triplet exprime un attribut d’un objet). L’ensemble des valeurs possibles en RDF est emprunté du format XML. Bien que le W3C recommande l’utilisation du formalisme RDF/XML, d’autres types de sérialisation existent telles que les formats N3 (Notation3), N-triples, Turtle, JSON, etc. 8. DARPA Agent Markup Language 9. Ontology Inference Layer 10. Ontology Web Language, ❤tt♣✿✴✴✇✇✇✳✇✸✳♦r❣✴❚❘✴♦✇❧✲❢❡❛t✉r❡s✴ 11. ❤tt♣✿✴✴✐s♦✲❝♦♠♠♦♥❧♦❣✐❝✳♦r❣✴ 12. Developing Ontology-Grounded Methods and Applications, ❤tt♣✿✴✴✇✇✇✳st❛r❧❛❜✳✈✉❜✳❛❝✳❜❡✴r❡s❡❛r❝❤✴❞♦❣♠❛✳ ❤t♠ 13. Knowledge Interchange Format, ❤tt♣✿✴✴✇✇✇✳❦s❧✳st❛♥❢♦r❞✳❡❞✉✴❦♥♦✇❧❡❞❣❡✲s❤❛r✐♥❣✴❦✐❢✴ 14. Frame Logic 15. Uniform Resource Identifier 22 Copyright c 2013 - CASSIDIAN - All rights reserved1.2. L’information sémantique Le langage RDFS est une première extension de RDF visant à structurer les ressources pour la spécification des ontologies. Les propriétés les plus utilisées du formalisme RDFS sont les suivantes : rdfs :Class, rdfs :subClassOf, rdfs :domain, rdfs :range et rdfs :label. Les deux premières servent à défi- nir la taxonomie de l’ontologie, les deux suivantes permettant de formaliser la notion de sujet et d’objet et enfin, la propriété label sert à nommer les éléments de l’ontologie (classes et propriétés). Enfin, le langage largement privilégié aujourd’hui pour la modélisation des ontologies est OWL. Dé- veloppé depuis 2002 par un groupe de travail du W3C, celui-ci constitue une seconde extension plus expressive du standard RDF/XML. OWL permet notamment grâce à des constructeurs d’exprimer des contraintes supplémentaires sur les classes et propriétés définies : disjonction, union et intersection de classes, propriétés symétriques, inverses, etc. Une majeure partie de ces constructeurs est directement issue des logiques de description permettant des mécanismes d’inférence sur les connaissances de l’ontologie. Précisons également que le langage OWL se décline en plusieurs sous-langages selon leurs niveaux d’expressivité : OWL-Lite, OWL-DL et OWL-Full. Nous présentons en annexe H un extrait de l’ontologie pizza.owl exprimée au format OWL. 1.2.4 Inférence et bases de connaissances Nous pouvons définir l’inférence comme le fait de déduire de nouvelles connaissances par une analyse de l’existant. Dans le cas des ontologies, il s’agit concrètement de découvrir de nouveaux triplets à partir des triplets connus en exploitant la structure de l’ontologie et les contraintes logiques spécifiées. Autrement dit, l’inférence permet de faire apparaître automatiquement des faits implicites que l’œil humain ne peut détecter car ils sont masqués par la complexité de la modélisation. Ce genre de raisonnement peut être effectué au niveau de l’ontologie en elle-même (terminological box ou TBox) ou au niveau des instances de l’ontologie (assertional box ou ABox). Il faut noter que plus le langage de représentation utilisé est expressif, plus l’inférence sera complexe voire impossible à mettre en œuvre. Dans le cas du langage OWL, la complexité de l’inférence est croissante lorsque l’on passe du sous-langage OWL-Lite à OWL-DL et enfin à OWL-Full. Ces mécanismes d’inférence sont implémentés dans des raisonneurs tels que Pellet 16, Fact++ 17 ou encore Hermitt 18. 
Lorsque la complexité de la modélisation est trop élevée pour ces raisonneurs, il est possible de définir des règles d'inférence manuellement grâce à des langages tels que SWRL (Semantic Web Rule Language, http://www.w3.org/Submission/SWRL/) ou Jena rules (http://jena.apache.org/documentation/inference/). Ces mécanismes de déduction constituent une réelle plus-value des ontologies car ils permettent, d'une part, de vérifier la qualité des bases de connaissances construites et, d'autre part, d'y ajouter de nouvelles connaissances de façon entièrement automatique. Les systèmes de gestion de bases de données classiques (MySQL, Oracle Database, IBM DB2, etc.) ont vu se créer leurs équivalents sémantiques, nommés triplestores, permettant le stockage et le requêtage de triplets RDF. Les grands fournisseurs de SGBD propriétaires adaptent aujourd'hui leurs solutions au Web sémantique et des triplestores open-source sont également disponibles comme Sesame (openRDF, http://www.openrdf.org/), Virtuoso (OpenLink, http://virtuoso.openlinksw.com/) ou encore Fuseki (Apache Jena, http://jena.apache.org/). La récupération de ces données sémantiques est réalisée majoritairement grâce au langage SPARQL (SPARQL Protocol and RDF Query Language, http://www.w3.org/TR/sparql11-overview/), l'équivalent sémantique du langage SQL (Structured Query Language), dont l'usage est recommandé par le W3C pour l'interrogation de bases de connaissances sémantiques. La requête ci-dessous, par exemple, permet de récupérer l'ensemble des entités de type foaf:Person dont le nom de famille commence par la lettre S.

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?f ?l
WHERE {
  ?p rdf:type foaf:Person .
  ?p foaf:firstName ?f .
  ?p foaf:lastName ?l .
  FILTER (regex(?l, "^S"))
}

1.2.5 Les éditeurs d'ontologies

Bien que l'intérêt pour les ontologies sur le Web soit relativement récent, de nombreux outils ont été développés dans le but de modéliser et de manipuler des ontologies. Le principal atout de ces logiciels est la possibilité de gérer une ontologie dans l'un des formats cités précédemment sans avoir à modifier manuellement le code sous-jacent. Nous présentons ici quelques logiciels distribués librement et gratuitement. Tout d'abord, SWOOP (http://code.google.com/p/swoop/) est un éditeur simplifié d'ontologies open-source, développé par l'université du Maryland. Implémenté en Java, cet outil supporte les formats RDF (différentes sérialisations) et OWL. Parallèlement à ses fonctions d'édition, SWOOP permet d'effectuer des raisonnements et propose un service de recherche des ontologies existantes. D'autre part, OntoWiki (http://ontowiki.net/Projects/OntoWiki) est une application Web conçue comme un wiki permettant de gérer une ontologie de manière simple et collaborative. Cet outil est développé par le groupe de recherche AKSW (Agile Knowledge engineering and Semantic Web) de l'université de Leipzig, également connu pour son projet DBPedia. Cet outil supporte plusieurs formats tels que RDF/XML, Notation3, Turtle ou encore Talis (JSON). L'accent est également mis sur l'aspect Linked Data et l'intégration de ressources externes. Des extensions sont disponibles pour, par exemple, attacher des pages de wiki à des ressources de l'ontologie, visualiser des informations statistiques grâce à CubeViz ou encore intégrer des fonds de carte géographiques.
L’outil TopBraid Composer 29 est fourni par la société anglo-saxonne TopQuadrant et s’insrcit dans la suite d’outils professionnels TopBraid Suite. Plusieurs versions de TopBraid Composer sont disponibles dont une édition gratuite Free Edition (FE). Cette application est implémentée en Java grâce à la plateforme Eclipse et exploite les fonctionnalités de la librairie Apache Jena. Elle supporte l’import et 24. SPARQL Protocol and RDF Query Language, ❤tt♣✿✴✴✇✇✇✳✇✸✳♦r❣✴❚❘✴s♣❛rq❧✶✶✲♦✈❡r✈✐❡✇✴ 25. Structured Query Language 26. ❤tt♣✿✴✴❝♦❞❡✳❣♦♦❣❧❡✳❝♦♠✴♣✴s✇♦♦♣✴ 27. ❤tt♣✿✴✴♦♥t♦✇✐❦✐✳♥❡t✴Pr♦❥❡❝ts✴❖♥t♦❲✐❦✐ 28. Agile Knowledge engineering and Semantic Web 29. ❤tt♣✿✴✴✇✇✇✳t♦♣q✉❛❞r❛♥t✳❝♦♠✴♣r♦❞✉❝ts✴❚❇❴❈♦♠♣♦s❡r✳❤t♠❧ 24 Copyright c 2013 - CASSIDIAN - All rights reserved1.3. Modélisation des événements l’export de fichiers en langage RDF(S), OWL et OWL2 et son système de built-ins ouvre de nombreuses autres fonctionnalités telles que l’utilisation de divers moteurs d’inférence, le requêtage en SPARQL, le développement de règles d’inférence en SWRL, les vérification d’intégrité et gestion des exceptions. L’application Apollo 30 est développée par le Knowledge Media Institute (KMI). Cet outil implémentée en Java fournit les fonctions basiques de création d’ontologie et présente une bonne flexibilité grâce à son système de plug-ins. Toutefois, Apollo ne permet pas de mécanismes d’inférence et est fondé sur son propre métalangage ce qui ne facilite pas l’interopérabilité avec d’autres outils. NeOn Toolkit 31 est un autre environnement open-source très complet d’ingénierie des ontologies issu du projet européen NeOn. Cet outil est particulièrement adapté à la gestion d’ontologies multi-modulaires ou multilingues, à la fusion d’ontologies et à leur intégration dans des applications sémantiques plus larges. L’accent y est mis sur la contextualisation et la mise en réseau de modélisations hétérogènes et sur les aspects collaboratifs de la gestion d’ontologies. Fondé sur l’environnement de développement Eclipse, NeOn Toolkit propose l’intégration de divers plugins pour la modélisation visuelle, l’apprentissage et l’alignement d’ontologie, la définition de règles d’inférence, etc. Son architecture ouverte et modulaire est compatible avec les architectures orientées services (SOA) [Haase et al., 2008]. Terminons par l’éditeur d’ontologies le plus renommé et le plus utilisé dans la communauté d’ingé- nierie des connaissances : l’environnement Protégé 32. Créé par les chercheurs de l’université de Stanford, Protégé est développé en Java, gratuit et open-source. Il s’agit d’une plateforme d’aide à la création, la visualisation et la manipulation d’ontologies dans divers formats de représentation (RDF, RDFS, OWL, etc.). Ce logiciel peut également être utilisé en combinaison avec divers moteurs d’inférence (tels que RacerPro ou Fact) afin d’effectuer des raisonnements et d’obtenir de nouvelles assertions. De plus, de par la flexibilité de son architecture, Protégé est facilement configurable et extensible par les plugins dé- veloppés au sein d’autres projets. La figure 1.2 présente une vue de l’ontologie-exemple pizza.owl dans l’environnement Protégé. Enfin, les créateurs de cet outil mettent l’accent sur l’aspect collaboratif dans la modélisation d’ontologies en proposant Collaborative Protégé et WebProtégé. Le premier est une extension intégrée à Protégé permettant à plusieurs utilisateurs d’éditer la même ontologie et de commenter les modifications effectuées par chacun. 
Un système de vote rend également possible la concertation sur tel ou tel changement. WebProtégé est une application Web légère et open-source reprenant les principes de Collaborative Protégé dans le contexte du Web. Elle permet une édition d'ontologies collaborative et à distance. Pour une description plus détaillée et une comparaison plus poussée de ces éditeurs, nous renvoyons le lecteur aux présentations faites par [Alatrish, 2012] et [Charlet et al., 2004].
FIGURE 1.2 – L'environnement Protégé

1.3 Modélisation des événements

Le concept d'événement étant au centre de nos travaux, les sections suivantes visent à définir plus précisément cette notion complexe. Nous nous intéressons aux différentes visions de l'événement dans la littérature et plus particulièrement dans les domaines de l'extraction d'information et du Web sémantique. Nous résumons les grands courants de représentation des événements dans le but de les extraire automatiquement. Puis, nous réalisons un état des lieux des ontologies existantes centrées sur la modélisation des événements. Enfin, un focus est proposé sur les questions de représentation du temps et de l'espace qui sont des aspects importants dans le traitement des événements.

1.3.1 Qu'est-ce qu'un événement ?

Considéré comme une entité aux propriétés bien spécifiques, l'événement a initialement été étudié par des philosophes [Davidson, 1967] puis par des linguistes [Van De Velde, 2006] [Desclés, 1990] (voir [Casati and Varzi, 1997] pour une bibliographie exhaustive). [Higginbotham et al., 2000] compare quelques définitions de l'événement et met notamment l'accent sur deux courants traditionnels opposés : les événements comme concepts universaux (généraux et répétables), d'une part, et particuliers (spécifiques et uniques), d'autre part. Dans le premier courant, on peut citer [Montague, 1969] pour lequel les événements sont des propriétés attribuées à des instants (ou des intervalles) dans le temps, ou encore [Chisholm, 1970] qui les définit comme des situations ("states of affairs") pouvant décrire le même intervalle de temps de façon différente. Par ailleurs, [Quine, 1960] et [Kim, 1973] appartiennent au second courant : les événements sont définis comme des objets particuliers ("individuals"). Le premier considère que les événements ont le même statut que les objets physiques et portent le contenu (parfois hétérogène) d'une portion de l'espace-temps. Il n'y a donc qu'un seul et même événement possible par région spatio-temporelle mais ce même événement peut donner lieu à différentes descriptions linguistiques. [Kim, 1973] se place à l'opposé de cette vision en permettant à un nombre indéfini d'événements d'occuper un seul et même moment. L'événement est ici un objet concret (ou une collection d'objets) exemplifiant une propriété (ou un ensemble de propriétés) à un moment donné. Enfin, une position intermédiaire est celle de [Davidson, 1969] qui considère les événements selon leur place dans un réseau de causalité : des événements sont identiques s'ils partagent les mêmes causes et les mêmes effets. Cependant, ce même auteur reviendra plus tard vers la thèse de Quine.
Suite à ces réflexions sur la nature des événements (qui est encore débattue à l'heure actuelle), d'autres recherches ont vu le jour en linguistique, analyse du discours et philosophie du langage. En effet, les événements étant avant tout des objets sociaux, leur étude doit se faire en étroite corrélation avec la manière dont ils sont relatés et exprimés par l'homme. Parmi ces travaux, nous pouvons citer [Krieg-Planque, 2009]. Celle-ci donne une définition simple de l'événement mais qui nous paraît très juste : "un événement est une occurrence perçue comme signifiante dans un certain cadre". Ici, le terme "occurrence" met l'accent sur la notion de temporalité qui est reconnue comme partie intégrante de ce concept par la quasi-totalité des travaux. Le "cadre" selon Krieg-Planque réfère à "un système d'attentes donné" qui "détermine le fait que l'occurrence acquiert (ou non) [...] sa remarquabilité [...] et, par conséquent, est promue (ou non) au rang d'événement.". C'est ici qu'est fait le lien avec la sociologie et l'histoire : tout événement prend place dans un milieu social qui détermine l'obtention de son statut "remarquable". Enfin, [Neveu and Quéré, 1996] s'attachent à décrire plus précisément l'apparition des événements dits "modernes", façonnés et relayés par les médias actuels. Ils soulignent que l'interprétation d'un événement est étroitement liée au contenu sémantique des termes utilisés pour nommer cet événement. Pour plus de clarté, nous appellerons ces termes des "noms d'événement". Ces "noms d'événement" transposent en langage naturel la "propriété sémantique" des événements mentionnée par [Saval, 2011]. Cette description de l'événement est également au centre d'un phénomène plus large, que [Ricœur, 1983] nomme "mise en intrigue" : celui-ci vise à organiser, selon le cadre mentionné plus haut, un ensemble d'éléments circonstants ou participants de l'événement.

1.3.1.1 Les événements en extraction d'information

On distingue actuellement deux grandes visions de l'événement dans la communauté de l'extraction d'information [Ahn, 2006]. D'une part, l'approche TimeML provient de recherches menées en 2002 dans le cadre du programme AQUAINT (Advanced Question Answering for Intelligence, http://www-nlpir.nist.gov/projects/aquaint/) fondé par l'ARPA (ancien nom de la DARPA, Defense Advanced Research Projects Agency). D'autre part, le modèle ACE a été défini dans la tâche Event Detection and Recognition (VDR) des campagnes d'évaluation du même nom à partir de 2005. L'approche TimeML [Pustejovsky et al., 2003] définit les événements de la façon suivante : "situations that happen or occur [...] predicates describing states or circumstances in which something obtains or holds true". Ceux-ci peuvent être ponctuels ou duratifs et sont organisés en sept types : Occurrence, State, Reporting, I-Action, I-State, Aspectual, Perception. L'événement est considéré conjointement à trois autres types d'entité : TIMEX, SIGNAL et LINK. Les objets TIMEX correspondent aux entités temporelles simples telles que les expressions de dates et heures, durée, fréquence, etc. (annotées par des étiquettes TIMEX3, voir http://timeml.org/site/timebank/documentation-1.2.html). Les entités SIGNAL correspondent à des mots fonctionnels exprimant des relations temporelles.
Il peut s’agir de prépositions de temps (pendant, après, etc.), de connecteurs temporels (quand, lorsque, etc.), de mots subordonnants (si, etc.), d’indicateurs de polarité (négation) ou encore de quantifieurs (dix fois, souvent, etc.). Enfin, les annotations LINK font le lien entre les différentes entités temporelles (EVENT, TIMEX et SIGNAL). Ces liens sont de trois types : TLINK (liens temporels entre événements ou entre un événement et une autre entité de type TIMEX), SLINK (liens de subordination entre deux événements ou entre un événement et une entité SIGNAL), ALINK (liens aspectuels entre un événement aspectuel et son argument). Dans le modèle ACE [NIST, 2005] un événement est vu comme une structure complexe impliquant plusieurs arguments pouvant également être complexes. L’aspect temporel correspond ici à un type d’argument mais n’est pas au centre de la modélisation. Ce modèle définit un ensemble de types et sous-types d’événement (8 types comme Life, Movement, Business, etc. et 33 sous-types comme Marry, DeclareBankruptcy, Convict, Attack, etc.) et associe à chaque événement un ensemble d’arguments autorisés pouvant être des entités ou des valeurs et auxquels est associé un rôle (parmi 35 rôles dont Time, Place, Agent, Instrument, etc. [LDC, 2005]). Par ailleurs, les événements tels que définis dans la campagne ACE possèdent des attributs tels que le temps, la modalité, la polarité ou encore la généricité. Enfin, comme pour les autres types d’entité, on distingue la notion de mention d’événement, d’une part, qui correspond à une portion de texte constituée d’une ancre d’événement et de ses arguments et, d’autre part, l’événement en lui-même, c’est-à-dire un ensemble de mentions d’événement qui réfère au même objet du monde réel. Ces deux modèles sont différents sur plusieurs aspects, la divergence principale étant que la première approche vise à annoter tous les évènements d’un texte, alors que la seconde a pour cibles uniquement les événements d’intérêt pour une application donnée. Le modèle TimeML, considérant un événement comme tout terme temporellement ancré, est généralement choisi dans des projets où cet aspect est central (pour la construction de chronologies par exemple). L’approche ACE est la plus utilisée car elle s’applique aux besoins de divers domaines mais implique toutefois un processus d’annotation plus complexe à mettre en place (la représentation de l’événement étant elle-même plus complexe). 1.3.1.2 Les ontologies orientées "événement" Plusieurs ontologies disponibles sur le Web proposent une modélisation spécifique au concept d’évé- nement. Nous présentons ci-après les caractéristiques principales de cinq d’entre elles : – The Event Ontology (EO) 37 – Linking Open Descriptions of Events (LODE) 38 – Simple Event Model (SEM) 39 – DOLCE-UltraLite (DUL) 40 – CIDOC Conceptual Reference Model (CIDOC CRM) 41 The Event Ontology a été développée par les chercheurs Yves Raimond et Samer Abdallah du Centre for Digital Music à Londres [Raimond et al., 2007] et sa dernière version date d’octobre 2007. Cette 37. ❤tt♣✿✴✴♠♦t♦♦❧s✳s♦✉r❝❡❢♦r❣❡✳♥❡t✴❡✈❡♥t✴❡✈❡♥t✳❤t♠❧ 38. ❤tt♣✿✴✴❧✐♥❦❡❞❡✈❡♥ts✳♦r❣✴♦♥t♦❧♦❣② 39. ❤tt♣✿✴✴s❡♠❛♥t✐❝✇❡❜✳❝s✳✈✉✳♥❧✴✷✵✵✾✴✶✶✴s❡♠✴ 40. ❤tt♣✿✴✴♦♥t♦❧♦❣②❞❡s✐❣♥♣❛tt❡r♥s✳♦r❣✴♦♥t✴❞✉❧✴❉❯▲✳♦✇❧ 41. ❝✐❞♦❝✲❝r♠✳♦r❣✴ 28 Copyright c 2013 - CASSIDIAN - All rights reserved1.3. Modélisation des événements ontologie est centrée sur la notion d’événement telle que [Allen and Ferguson, 1994] la définissent : "[...] events are primarily linguistic or cognitive in nature. 
That is, the world does not really contain events. Rather, events are the way by which agents classify certain useful and relevant patterns of change.". Cette ontologie des événements a été développée dans le cadre du projet Music Ontology et est donc particulièrement adaptée à la représentation des événements dans ce domaine (concerts, représentations, etc.). Comme le montre la figure 1.3, cette modélisation se veut générique et s'appuie sur des ontologies largement utilisées telles que FOAF, ainsi que sur des standards de représentation spatio-temporelle recommandés par le W3C 42. Toutefois, la classe Event n'est pas sous-typée et les propriétés factor et product paraissent vaguement définies (pas de co-domaine). FIGURE 1.3 – L'ontologie Event : modélisation des événements L'ontologie LODE [Troncy et al., 2010] a été créée dans le cadre du projet européen EventMedia 43 [Fialho et al., 2010]. Ce projet vise à concevoir un environnement Web permettant aux internautes d'explorer, de sélectionner et de partager des événements. L'ontologie est au centre de cette initiative car elle constitue le socle commun pour représenter et stocker des événements provenant de sources hétérogènes. Les concepteurs de cette ontologie la présentent comme un modèle minimal dont l'objectif premier est, dans la mouvance du LOD (Linked Open Data), de créer des liens entre les ontologies existantes afin de représenter les aspects faisant consensus dans la communauté de représentation des événements. Pour ce faire, un certain nombre d'alignements entre ontologies est implémenté dans LODE tels que des équivalences de classes et propriétés avec les ontologies DUL, EO, CIDOC CRM, etc. Cette interopérabilité permet notamment d'obtenir des connaissances supplémentaires lors des mécanismes d'inférence. Cette ontologie n'est donc pas réellement une ontologie des événements mais plutôt un outil d'alignement des modélisations existantes. Enfin, les concepteurs ont exclu pour le moment tout travail sur la sous-catégorisation des événements et la définition de relations entre événements (inclusion, causalité, etc.). Les figures 1.4 et 1.5 schématisent respectivement les relations entre concepts et les relations entre propriétés dans LODE. 42. http://www.w3.org/2006/time# et http://www.w3.org/2003/01/geo/wgs84_pos# 43. http://eventmedia.cwi.nl/ FIGURE 1.4 – LODE : modélisation des événements FIGURE 1.5 – LODE : alignements entre propriétés L'ontologie SEM a été créée par le groupe Web & Media de l'université d'Amsterdam [van Hage et al., 2011]. Elle décrit les événements dans le but de répondre à la question "who did what with what to whom, where and when ?" mais aussi dans une perspective d'interopérabilité entre différents domaines. Une particularité de SEM est l'accent mis sur la représentation des rôles associés aux acteurs impliqués dans un événement. SEM permet de modéliser la nature du rôle, des informations temporelles sur sa validité mais également la source l'ayant attribué à l'acteur. Dans le même objectif que LODE, SEM fournit de nombreux liens (sous la forme de propriétés SKOS 44) avec une dizaine d'ontologies existantes : ontologies dédiées aux événements (EO, LODE, etc.), standards W3C (Time, WGS84), ontologies de haut niveau reconnues (SUMO, OpenCyc, etc.), etc.
Toutefois, les événements dans cette ontologie ne sont pas sous-typés et, mise à part la propriété hasSubEvent, aucune autre relation entre événements n'est modélisée. 44. Simple Knowledge Organization System, http://www.w3.org/2004/02/skos/ La figure 1.6, fournie par les développeurs de l'ontologie, schématise les concepts et relations principaux dans SEM. FIGURE 1.6 – SEM : modélisation des événements DUL (DOLCE+DnS-UltraLite) est une ontologie développée par le laboratoire italien STLab 45 à la suite de DOLCE 46, une ontologie de haut niveau créée dans le cadre du projet WonderWeb 47. Un événement y est défini comme suit : "Any physical, social, or mental process, event, or state.". Plus concrètement, la classe Event est spécifiée en deux sous-classes Action et Process, la première se différenciant de la seconde par la propriété executesTask indiquant l'objectif de l'action. Comme le montre la figure 1.7, cette ontologie de haut niveau ne comporte pas de liens vers d'autres modélisations et notamment la représentation spatio-temporelle des événements y reste peu définie. FIGURE 1.7 – DUL : modélisation des événements Pour finir, le modèle CIDOC CRM est un standard international ISO pour le partage d'information dans le domaine du patrimoine culturel. Celui-ci est développé depuis 1994 et distribué au format OWL par les chercheurs de l'Université d'Erlangen-Nuremberg en Allemagne 48. L'événement est une sous-classe de TemporalEntity et y est défini comme un changement d'état : "changes of states in cultural, social or physical systems, regardless of scale, brought about by a series or group of coherent physical, cultural, technological or legal phenomena.". La figure 1.8 présente une vue globale de l'organisation des différents concepts de cette ontologie avec un focus sur la classe Event. Les différentes propriétés de l'événement sont schématisées dans la figure 1.9. Les remarques faites sur DUL (absence de liens vers d'autres ontologies et représentation spatio-temporelle peu définie) s'appliquent également à cette ontologie. Par ailleurs, la distinction entre les concepts Event, Period et Condition State n'est pas clairement définie. Enfin, bien que CIDOC CRM se veuille générique et définisse de nombreuses sous-classes de l'événement, cette taxonomie ne paraît pas s'appliquer à tous les domaines. 45. Semantic Technology laboratory 46. Descriptive Ontology for Linguistic and Cognitive Engineering 47. http://wonderweb.semanticweb.org/ 48. http://erlangen-crm.org FIGURE 1.8 – CIDOC CRM : taxonomie des classes FIGURE 1.9 – CIDOC CRM : modélisation des événements 1.3.2 Modélisation du temps et de l'espace Comme l'ont montré les sections précédentes, la définition des événements est indissociable des notions de temps et d'espace. Bien que ces concepts soient intuitivement compréhensibles par tous, leur modélisation en vue de traitements automatisés n'est pas chose aisée. En effet, le temps et l'espace peuvent revêtir différentes dimensions sociologiques et physiques donnant lieu à diverses représentations. Nous proposons ci-après un résumé des grands courants théoriques de représentation temporelle et spatiale ainsi que quelques exemples d'ontologies associées.
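Avant de détailler ces représentations du temps et de l'espace, l'esquisse Python ci-dessous illustre concrètement la description d'un événement, daté et localisé, à l'aide d'une ontologie comme LODE et du vocabulaire WGS84 évoqués plus haut. Il s'agit d'un simple croquis avec la bibliothèque rdflib : les URIs de propriétés employés (atTime, atPlace, involvedAgent) sont donnés à titre indicatif et doivent être vérifiés dans l'ontologie citée, de même que les individus créés sous l'espace de noms fictif example.org.

```python
from rdflib import Graph, Literal, Namespace, RDF, XSD

# Espaces de noms (URIs indicatifs, à vérifier dans les ontologies citées)
LODE = Namespace("http://linkedevents.org/ontology/")
GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")
EX = Namespace("http://example.org/")

g = Graph()
g.bind("lode", LODE)
g.bind("geo", GEO)

evt = EX["evenement/manifestation_2012_04_15"]
lieu = EX["lieu/kaboul"]

# Un événement décrit par son type, sa date, son lieu et un participant
g.add((evt, RDF.type, LODE.Event))
g.add((evt, LODE.atTime, Literal("2012-04-15", datatype=XSD.date)))
g.add((evt, LODE.atPlace, lieu))
g.add((evt, LODE.involvedAgent, EX["agent/organisation_X"]))

# Le lieu est positionné grâce au vocabulaire WGS84 du W3C
g.add((lieu, GEO.lat, Literal("34.53", datatype=XSD.float)))
g.add((lieu, GEO.long, Literal("69.17", datatype=XSD.float)))

print(g.serialize(format="turtle"))
```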
1.3.2.1 Représentation du temps On distingue classiquement deux types de représentation temporelle : d'une part, une vue sous forme de points/instants et, d'autre part, un découpage du continuum temporel en intervalles/périodes. Les travaux les plus connus sont ceux de [McDermott, 1982] pour la représentation en points et ceux de [Allen, 1983] pour les intervalles. Le premier considère l'espace temps comme une succession de points et définit les relations suivantes entre deux points ti et tj : (ti < tj) ∨ (ti > tj) ∨ (ti = tj). Allen définit l'unité temporelle de base comme étant un intervalle de temps et propose une algèbre composée de 13 relations (voir la figure 1.10). FIGURE 1.10 – Algèbre temporelle d'Allen Ces deux modèles temporels sont les plus utilisés mais d'autres types ont été proposés pour, par exemple, hybrider ces deux approches. De nombreuses recherches sont menées également dans diverses disciplines (philosophie, informatique théorique, bases de données, etc.) sur d'autres problématiques liées au temps telles que les logiques et les contraintes temporelles, le raisonnement sur le temps, etc. [Allen, 1991] [Hayes, 1995] Parmi ces travaux, [Ladkin, 1987] propose un système de représentation du temps faisant le lien entre les nombreux modèles théoriques développés et les besoins applicatifs de traitement du temps. Il s'agit du Time Unit System (TUS), une approche hiérarchique et granulaire qui représente toute expression temporelle en un groupe de granules (c'est-à-dire des unités temporelles indivisibles). Un granule (ou unité de temps) est une séquence finie d'entiers organisés selon une hiérarchie linéaire : année, mois, jour, heure, etc. De plus, ce formalisme introduit la notion de BTU (Basic Time Unit) qui correspond au niveau de granularité choisi en fonction de la précision nécessitée par une application (e.g. les jours, les secondes, etc.). Par exemple, si le BTU est fixé à heure, chaque unité temporelle sera exprimée comme une séquence d'entiers i telle que : i = [année, mois, jour, heure]. De plus, TUS définit la fonction maxj([a1, a2, ..., aj−1]) donnant la valeur maximale possible à la position j pour qu'une séquence temporelle soit valide en tant que date. Cet opérateur est nécessaire car, selon notre actuel système calendaire, le granule jour dépend des granules mois et année. Depuis les débuts du Web sémantique, les chercheurs se sont intéressés à l'application des modèles théoriques existants pour la construction d'ontologies temporelles. L'ontologie OWL Time 49, issue d'un groupe de travail du W3C, est la plus connue et la plus utilisée actuellement. Celle-ci définit un concept de base TemporalEntity, spécifié en instants et intervalles. On y trouve également les relations temporelles issues de l'algèbre d'Allen ainsi qu'un découpage calendaire du temps (année, mois, jour, heure, minute et seconde). 49. http://www.w3.org/TR/owl-time/
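Pour fixer les idées sur l'algèbre d'intervalles décrite ci-dessus, en voici une esquisse minimale en Python (purement illustrative) : à partir de deux intervalles donnés par leurs bornes, elle renvoie la relation d'Allen correspondante, les treize relations s'obtenant à partir des sept cas directs et de leurs inverses.

```python
def relation_allen(i, j):
    """Relation d'Allen entre deux intervalles i = (debut, fin) et j = (debut, fin)."""
    (a1, a2), (b1, b2) = i, j
    assert a1 < a2 and b1 < b2, "intervalles non dégénérés attendus"
    if (a1, a2) == (b1, b2):
        return "equals"
    if a2 < b1:
        return "before"        # i précède j
    if a2 == b1:
        return "meets"         # i touche j
    if a1 < b1 < a2 < b2:
        return "overlaps"      # i chevauche j
    if a1 == b1 and a2 < b2:
        return "starts"        # i démarre j
    if b1 < a1 and a2 < b2:
        return "during"        # i est inclus dans j
    if b1 < a1 and a2 == b2:
        return "finishes"      # i termine j
    # Sinon, la relation est l'inverse de celle obtenue en échangeant i et j
    return relation_allen(j, i) + "-i"

print(relation_allen((1, 3), (2, 5)))  # overlaps
print(relation_allen((2, 3), (1, 5)))  # during
```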
1.3.2.2 Représentation de l'espace Du côté de la représentation spatiale, les premières approches proviennent des mathématiques : d'une part, Euclide définit plusieurs catégories d'objets géométriques de base (points, lignes, surfaces, etc.) ainsi que leurs propriétés et relations ; d'autre part, Descartes considère le point comme élément fondamental et propose un modèle numérique de représentation géographique en associant à chaque point un ensemble de valeurs pour chacune de ses dimensions (coordonnées cartésiennes). Ces deux modélisations constituent la vision classique de l'espace en termes de points. Plus récemment, se développe une vision alternative où l'unité de base de modélisation spatiale est la région. Introduite par [Whitehead, 1920], celle-ci se fonde sur une perception plus commune de l'espace et est également désignée sous le nom de "géométrie du monde sensible". Cette représentation dite aussi qualitative (en contraste avec les premières approches quantitatives) permet de décrire toute entité spatiale selon les trois concepts suivants : intérieur, frontière et extérieur. Ce principe est à l'origine de l'analyse topologique visant à décrire les relations spatiales selon cette notion de limite. Cette proposition de Whitehead a donné lieu à l'élaboration de diverses théories de l'espace fondées sur les régions dont [Tarski, 1956]. Ce mode de représentation a été celui adopté par la communauté de l'Intelligence Artificielle car plus proche de la vision humaine et donc plus adapté aux applications. Le modèle RCC-8 de [Randell et al., 1992] est le formalisme de premier ordre le plus utilisé pour modéliser et raisonner sur des entités spatiales. Il s'agit d'une transposition de l'algèbre d'Allen au problème de la représentation spatiale. Ce modèle comprend huit types de relation entre deux régions spatiales schématisés par la figure 1.11. FIGURE 1.11 – Les relations topologiques RCC-8 Du côté du Web sémantique, [Minard, 2008] propose un état de l'art de quelques ontologies d'objets géographiques développées notamment par des laboratoires spécialisés tels que l'INSEE, l'IGN ou AGROVOC. S'ajoutent à ces ontologies de nombreux thésaurus et bases de connaissances géographiques dont la plus renommée, GeoNames 50. Il nous faut mentionner également le groupe de travail W3C Geospatial Incubator Group (GeoXG) 51 et le consortium international OGC 52 qui collaborent depuis quelques années pour faciliter l'interopérabilité entre les systèmes d'information géographique (SIG) dans le cadre du Web sémantique. Plusieurs initiatives en découlent dont le vocabulaire WGS84 Geo Positioning 53, le langage de balisage GML 54 ou encore le langage de requête GeoSPARQL 55. 50. http://www.geonames.org 51. http://www.w3.org/2005/Incubator/geo/XGR-geo-ont-20071023/ 52. Open Geospatial Consortium, http://www.opengeospatial.org/ 53. http://www.w3.org/2003/01/geo/wgs84_pos# 54. Geography Markup Language, http://www.opengeospatial.org/standards/gml 55. http://geosparql.org/ 1.3.3 Spécifications dédiées au ROSO Nous proposons ici un tour d'horizon des modélisations réalisées et exploitées dans le domaine militaire et de la sécurité au sens large. Une première catégorie de ressources existantes sont les standards OTAN 56 dits STANAGs 57. Ceux-ci sont des accords de normalisation ratifiés par les pays de l'Alliance pour faciliter les interactions entre leurs armées. Ces documents définissent des procédures, termes et conditions formant un référentiel commun (technique, opérationnel ou administratif) destiné à être mis en œuvre au sein des différents services militaires de l'OTAN.
Certains sont spécifiques aux systèmes d'information et de communication (SIC) tels que les NATO Military Intelligence Data Exchange Standard [NATO, 2001] et Joint Consultation Command Control Information Exchange Data Model [NATO, 2007]. Le premier, également nommé STANAG 2433 ou AINTP-3(A), a pour objectif l'échange de l'information et de l'intelligence au sein de l'Alliance et la proposition d'un ensemble de standards d'implémentation. Les structures de données qui y sont définies constituent un exemple pour les pays membres qui, pour la plupart, s'en sont inspirés dans la conception de leurs bases de données. Le second accord, dit STANAG 5525 ou JC3IEDM, sert d'interface commune pour l'échange d'informations sur le champ de bataille. Il vise une interopérabilité internationale entre les systèmes d'information de type C2 (Command and Control) pour faciliter le commandement des opérations interarmées. Ces standards militaires ne sont pas des ontologies mais définissent un ensemble de concepts (et relations entre ces concepts) spécifiques à certaines procédures. Ceux-ci sont organisés de façon hiérarchique et décrits le plus précisément possible en langage naturel. Ces modèles sont généralement très complets et très bien documentés car ils sont destinés à des opérateurs humains spécialistes du sujet traité. Par ailleurs, nous pouvons citer l'agence de recherche américaine IARPA 58 et son programme OSI 59 dont l'objectif est le développement de méthodes pour l'analyse des données accessibles publiquement. Cette organisation finance des projets d'innovation technologique pour la détection automatique et l'anticipation d'événements sociétaux tels que les situations de crise, catastrophes naturelles, etc. Dans le cadre de ce programme a été développée la typologie IDEA 60 incluant environ 250 types d'événements sociaux, économiques et politiques ainsi que des entités simples avec lesquelles ils sont en relation [Bond et al., 2003]. Du côté des ontologies, on recense notamment les modélisations swint-terrorism [Mannes and Golbeck, 2005], reprenant les concepts principaux nécessaires au domaine du terrorisme, et AKTiveSA [Smart et al., 2007], dédiée à la description des contextes opérationnels militaires autres que la guerre. [Baumgartner and Retschitzegger, 2006] réalise également un état de l'art des ontologies de haut niveau dédiées à la tenue de situation (situation awareness). [Inyaem et al., 2010b] s'attache à la définition d'une ontologie floue pour l'extraction automatique d'événements terroristes. Toutefois, comme dans beaucoup de travaux de ce domaine, l'ontologie développée n'est pas distribuée publiquement pour des raisons stratégiques et/ou de confidentialité. Le site web http://militaryontology.com recense un certain nombre d'ontologies pour le domaine militaire. Enfin, [Bowman et al., 2001] et [Boury-Brisset, 2003] proposent un ensemble de conseils méthodologiques pour la construction d'ontologies dédiées aux applications militaires et stratégiques. 56. Organisation du Traité de l'Atlantique Nord 57. STANdardization AGreement, http://www.nato.int/cps/en/natolive/stanag.htm 58. Intelligence Advanced Research Projects Activity, http://www.iarpa.gov/ 59. Open Source Indicators, http://www.iarpa.gov/Programs/ia/OSI/osi.html 60. Integrated Data for Events Analysis, http://vranet.com/IDEA.aspx
1.4 Conclusions A travers ce premier état de l'art, nous avons pu nous familiariser avec les différentes problématiques de la représentation des connaissances au sens large. Nous avons, dans un premier temps, rappelé une distinction importante entre les notions de donnée, information et connaissance qui constitue l'une des bases théoriques de ce domaine. Nous avons, par la suite, présenté, dans leur ensemble, les principes et technologies du Web sémantique qui sont partie intégrante d'une majorité de travaux actuels en fouille de documents et dont l'adéquation à cette problématique n'est plus à prouver. Enfin, nous avons réalisé un focus sur la place des événements en représentation des connaissances et présenté les différentes théories et modèles de la littérature. Dans un souci d'interopérabilité avec les systèmes existants, nous avons privilégié l'utilisation de modèles communs au sein de nos travaux tout en les adaptant aux besoins spécifiques de notre application si nécessaire. Le modèle ACE, très utilisé par la communauté en EI, nous paraît bien adapté à la modélisation des événements dans le cadre de nos recherches. En effet, comme nous l'avons montré dans ce chapitre, celui-ci est compatible avec les besoins des analystes du ROSO ainsi qu'avec le modèle de données de la plateforme WebLab. Nous avons par la suite réalisé un tour d'horizon des ontologies centrées sur les événements déjà développées et réutilisables : les plus communes présentent toutes des similarités de structure et un effort est réalisé dans la communauté pour lier les différentes modélisations entre elles via des alignements ontologiques. Parmi celles-ci, les ontologies DUL et LODE présentent le plus de correspondances avec nos travaux. L'ensemble de ces observations sera pris en compte lors de l'élaboration de notre modèle de connaissances présentée au chapitre 4. Chapitre 2 Extraction automatique d'information Sommaire
2.1 Définition et objectifs . . . 40
2.2 Approches d'extraction . . . 42
2.2.1 Extraction d'entités nommées et résolution de coréférence . . . 43
2.2.2 Extraction de relations . . . 46
2.2.3 Extraction d'événements . . . 48
2.3 Plateformes et logiciels pour l'EI . . . 50
2.4 Applications . . . 53
2.5 Évaluation des systèmes d'EI . . . 54
2.5.1 Campagnes et projets d'évaluation . . . 54
2.5.2 Performances, atouts et faiblesses des méthodes existantes . . . 56
2.6 Problèmes ouverts . . . 57
2.7 Conclusions . . . 58
Depuis les débuts du Traitement Automatique du Langage dans les années 60-70, la compréhension automatique de textes est l'objet de nombreuses recherches.
L’objectif principal est de permettre à un ordinateur de comprendre le sens global d’un document comme savent le faire les êtres humains. Les échecs récurrents des systèmes alors développés mettent rapidement en cause une vision trop générique de la compréhension automatique. En effet, de tels outils s’avèrent alors inutilisables dans un contexte opérationnel en raison du coût élevé des adaptations nécessaires (bases de connaissances et ressources lexicales spécifiques). Conscients d’être trop ambitieux au regard des possibilités technologiques, les chercheurs s’orientent alors vers des techniques plus réalistes d’extraction d’information. S’il n’est pas directement possible de comprendre automatiquement un texte, le repérage et l’extraction des principaux éléments de sens apparaît comme un objectif plus raisonnable. Cette réorientation théorique est reprise de façon détaillée par [Poibeau, 2003]. Ce chapitre présente tout d’abord les objectifs principaux de l’extraction d’information (section 2.1), puis une synthèse des méthodes les plus couramment mises en œuvre dans ce domaine (section 2.2). Pour cela, nous distinguons les travaux existants selon la nature des informations extraites : entités nommées, relations entre entités et événements. Quelques travaux en résolution de coréférence sont également présentés. La section 2.3 propose un tour d’horizon des outils et plateformes existants pour le développement de systèmes d’extraction d’information. Nous parcourons ensuite (section 2.4) quelques-unes des applications possibles dans ce domaine. Pour conclure ce chapitre, la section 2.5 aborde le problème de l’évaluation en extraction d’information à travers une revue des campagnes d’évaluation, puis une présentation des performances (atouts et faiblesses) des systèmes existants et enfin, un récapitulatif des limites restantes à l’heure actuelle. 2.1 Définition et objectifs Face à l’augmentation vertigineuse des documents textuels mis à disposition de tous, l’extraction automatique d’information voit un intérêt grandissant depuis une vingtaine d’années. En effet, noyés sous cette masse d’information non-structurée, nous rêvons d’un système automatique capable, dans nos tâches quotidiennes (professionnelles ou personnelles), de repérer et d’extraire de façon rapide et efficace les informations dont nous avons besoin. En réponse à cela, les systèmes développés visent à analyser un texte de manière automatique afin d’en extraire un ensemble d’informations jugées pertinentes [Hobbs and Riloff, 2010]. Il s’agit généralement de construire une représentation structurée (bases de données, fiches, tableaux) à partir d’un ou plusieurs documents à l’origine non-structurés. Cela en fait une approche guidée par le but de l’application dans laquelle elle s’intègre, dépendance qui reste, à l’heure actuelle, une limite majeure des systèmes d’extraction [Poibeau, 2003]. L’extraction d’information (EI) a souvent été confondue avec un autre domaine de l’intelligence artificielle (IA) qu’est la recherche d’information (RI). Bien que cet amalgame s’explique car ces deux champs de recherche partagent un objectif premier — présenter à l’utilisateur des informations qui ré- pondent à son besoin — ceux-ci diffèrent sur plusieurs autres points. Tout d’abord, les outils de RI renvoient généralement une liste de documents à l’utilisateur alors qu’en extraction d’information les résultats sont des éléments d’information extraits de ces documents. 
Par ailleurs, la recherche d’information s’attache à répondre à une requête exprimée par un utilisateur grâce à un système de mots-clés (ou d’autres mécanismes plus sophistiqués de recherche sémantique) [Vlahovic, 2011]. Dans le domaine de l’EI, la réponse des outils est guidée par une définition a priori des éléments d’information à repérer 40 Copyright c 2013 - CASSIDIAN - All rights reserved2.1. Définition et objectifs dans un ensemble de textes. Malgré ces distinctions et comme c’est le cas avec d’autres disciplines du TALN 61, l’EI est utile à la RI et inversement. Les systèmes d’extraction d’information bénéficient souvent de la capacité de filtrage des outils de RI en focalisant leur analyse sur un ensemble déterminé de documents. A l’inverse, la recherche d’information peut exploiter les informations extraites par les outils d’EI en tant que champs de recherche additionnels et ainsi améliorer le filtrage et l’ordonnancement des documents pour une requête donnée. Les tâches les plus communes en extraction d’information sont l’extraction d’entités nommées [Nadeau and Sekine, 2007], le repérage de relations entre ces entités [Rosario and Hearst, 2005] et la dé- tection d’événements [Naughton et al., 2006]. Celles-ci se distinguent par la nature et la complexité des informations que l’on cherche à repérer et à extraire automatiquement : entités nommées, relations ou événements. Nous détaillons ci-après les objets d’étude et les objectifs spécifiques à chaque tâche. La reconnaissance d’entités nommées (REN) vise à reconnaître et catégoriser automatiquement un ensemble d’éléments d’information qui correspondent généralement à des noms propres (noms de personnes, organisations, lieux) mais aussi aux dates, unités monétaires, pourcentages, unités de mesure, etc. Ces objets sont communément appelés "entités nommées" et s’avèrent indispensables pour saisir le sens d’un texte. Le terme "entité nommée" (EN) n’est apparu en EI que très récemment lors de la 6ème édition des campagnes d’évaluation MUC62 (voir la section 2.5) et sa définition est encore aujourd’hui l’objet de nombreuses discussions. Ce terme renvoie à la théorie de la "référence directe" évoquée dès la fin du 19ème siècle par des philosophes du langage tels que John Stuart Mill. A l’heure actuelle, une majorité de travaux se retrouvent dans la définition proposée par [Kripke, 1980] sous le nom de "désignateurs rigides" : "entités fortement référentielles désignant directement un objet du monde". Il nous faut également souligner que ces entités nommées dites "entités simples" constituent généralement un premier niveau de la chaîne d’extraction et permettent la détection de structures plus complexes que sont les relations et les événements. A cette première tâche d’extraction est couramment associé le problème de résolution de coréférence entre EN. La résolution de coréférence vise à regrouper plusieurs extractions ayant des formes de surface différentes mais référant à la même entité du monde : par exemple, "Big Apple" et "New York", ou encore "JFK" et "John Fitzgerald Kennedy". Ce problème a surtout été exploré conjointement à la détection d’entités nommées mais cette problématique est applicable à tout type d’extraction. Dans les campagnes MUC, la résolution de coréférence fait partie d’un ensemble de tâches nommé SemEval (Semantic Evaluation) ayant pour objectif une compréhension plus profonde des textes [Grishman and Sundheim, 1996]. 
De plus, la différenciation entre les termes "mention" et "entité" introduite lors des campagnes ACE 63 (voir la section 2.5) met en avant ce besoin de regrouper plusieurs "mentions" d'une entité provenant d'un ou plusieurs textes. Nous détaillons dans la section 2.2.1 les méthodes employées pour la reconnaissance de ces entités nommées et la résolution de coréférence entre ces entités. L'extraction de relations a pour objet d'étude les liens existants entre plusieurs entités : celle-ci peut être binaire (entre deux objets) ou n-aire (plus de deux objets en relation). Il s'agit par exemple de détecter dans un corpus de documents que Barack Obama est l'actuel président des États-Unis, ce qui se traduira par une relation de type "président de" entre l'entité de type Personne "Barack Obama" et l'entité de type Lieu "États-Unis". La détection de relations n-aires correspond à ce que l'on nomme en anglais "record extraction" où il s'agit de repérer un réseau de relations entre entités et dont l'extraction d'événements fait partie. 61. Traitement Automatique du Langage Naturel 62. Message Understanding Conference, http://www-nlpir.nist.gov/related_projects/muc/ 63. Automatic Content Extraction, http://www.itl.nist.gov/iad/mig/tests/ace/ Cette problématique diffère de la tâche de reconnaissance des entités nommées sur un point essentiel. Une entité nommée correspond à une portion séquentielle de texte et peut être directement représentée par une annotation délimitant le début et la fin de cette séquence. Une relation entre entités ne correspond pas directement à un ensemble consécutif de mots mais représente un lien entre deux portions de texte. Cela implique des processus et des formats d'annotation différents pour ces deux types de tâches. Dans la section 2.2.2, nous nous focaliserons sur les méthodes utilisées pour la détection de relations binaires, les relations n-aires seront abordées par le biais de l'extraction des événements. Est couramment associée à cette tâche d'EI l'extraction d'attributs ayant pour objectif d'extraire automatiquement un ensemble pré-défini de propriétés rattachées à ces entités, le type de ces propriétés dépendant directement de la nature de l'objet en question. Par exemple pour une personne, on pourra extraire son nom et son prénom, sa nationalité, son âge, etc. et pour une entreprise, son siège social, le nombre d'employés, etc. L'extraction des événements est une autre tâche de l'EI très étudiée. Celle-ci peut être conçue comme une forme particulière d'extraction de relations où une "action" est liée à d'autres entités telles qu'une date, un lieu, des participants, etc. Comme cela a été décrit dans les sections 1.3 et 1.3.1.2, cette définition peut varier selon les points de vue théoriques et les applications et donne lieu à différentes représentations et ontologies dédiées aux événements. La détection d'événements s'avère particulièrement utile dans les activités de veille en général et intéresse de plus en plus les entreprises de nombreux domaines pour ses applications en intelligence économique et stratégique [Capet et al., 2011]. Nous présentons par la suite (en section 2.2.3) un tour d'horizon des techniques utilisées pour le repérage des événements dans les textes.
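Concrètement, la sortie de ces trois tâches peut être vue comme le passage d'un texte brut à des structures typées. L'esquisse Python ci-dessous, purement illustrative, montre de telles structures pour l'exemple cité plus haut ; les noms de classes et de champs sont des hypothèses de travail et non un format normalisé.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Entite:
    """Entité nommée : portion de texte typée (Personne, Lieu, Organisation, Date...)."""
    texte: str
    type: str
    debut: int   # offsets (indicatifs) dans le document source
    fin: int

@dataclass
class Relation:
    """Relation binaire typée entre deux entités préalablement extraites."""
    type: str
    arg1: Entite
    arg2: Entite

@dataclass
class Evenement:
    """Événement : relation n-aire reliant une action à sa date, son lieu et ses participants."""
    type: str
    declencheur: str
    date: Optional[Entite] = None
    lieu: Optional[Entite] = None
    participants: List[Entite] = field(default_factory=list)

# Exemple repris du texte : "Barack Obama est l'actuel président des États-Unis"
personne = Entite("Barack Obama", "Personne", 0, 12)
pays = Entite("États-Unis", "Lieu", 40, 50)
relation = Relation("président de", personne, pays)
```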
2.2 Approches d’extraction En extraction d’information émergent historiquement deux principaux types d’approche : l’extraction basée sur des techniques linguistiques d’un côté et les systèmes statistiques à base d’apprentissage de l’autre. Cette distinction se retrouve largement en IA et dans les autres disciplines du TALN telles que la traduction automatique, etc. Le premier type d’approche est nommé dans la littérature "approche symbolique", "à base de connaissances", "déclarative", "à base de règles" ou encore "système-expert" et exploite les avancées en TALN. Celles-ci reposent principalement sur une définition manuelle de toutes les formes linguistiques permettant d’exprimer l’information ciblée, autrement dit l’ensemble des contextes d’apparition de telle entité ou relation. Cela se traduit généralement par l’utilisation de grammaires formelles constituées de règles et patrons linguistiques élaborés par des experts-linguistes. Ces patrons sont généralement de deux types : les patrons lexico-syntaxiques et les patrons lexico-sémantiques. Les premiers associent des caractéristiques de mot (forme fléchie, lemme, genre, casse, etc.) à des indices structurels et de dépendance (syntagmatiques, phrastiques ou textuels). Les patrons lexico-sémantiques y ajoutent des éléments sé- mantiques tels que la projection de lexiques (gazetteers), l’utilisation de réseaux sémantiques externes tels que WordNet 64 ou encore d’ontologies [Hogenboom et al., 2011]. Les méthodes symboliques ont pour principales faiblesses leur taux de rappel peu élevé et leur coût de développement manuel coûteux. 64. ❤tt♣✿✴✴❣❧♦❜❛❧✇♦r❞♥❡t✳♦r❣ 42 Copyright c 2013 - CASSIDIAN - All rights reserved2.2. Approches d’extraction Le second type d’approche utilise des techniques statistiques pour apprendre des régularités sur de larges corpus de textes où les entités-cibles ont été préalablement annotées (domaine du "machine learning"). Ces méthodes d’apprentissage sont supervisées, non-supervisées ou semi-supervisées et exploitent des caractéristiques textuelles plus ou moins linguistiques. Parmi celles-ci nous pouvons citer les "Modèles de Markov Caché" (Hidden Markov Models, HMM), les "Champs Conditionnels Aléatoires" (Conditional Random Fields, CRF), les "Machines à Vecteur de Support" (Support Vector Machines, SVM), etc. ([Ireson et al., 2005] pour un état de l’art approfondi). Les principales limites de ce second type d’approche restent qu’elles nécessitent une grande quantité de données annotées pour leur phase d’apprentissage et produisent des modèles de type "boite noire" qui restent à l’heure actuelle difficilement accessibles et interprétables. Pour répondre à ce problème, des méthodes non-supervisées telles que [Etzioni et al., 2005] proposent d’utiliser des techniques de clustering pour extraire des entités d’intérêt. Depuis quelques années, les méthodes hybrides tendent à se généraliser : les acteurs du domaine combinent plusieurs techniques face aux limites des approches symboliques et statistiques. De plus en plus de recherches portent sur l’apprentissage de ressources linguistiques ou encore sur l’utilisation d’un apprentissage dit "semi-supervisé" visant à combiner des données étiquetées et non-étiquetées ([Nadeau and Sekine, 2007], [Hobbs and Riloff, 2010]). Afin de diminuer l’effort de développement des systèmes symboliques, certains travaux s’intéressent à l’apprentissage automatique de règles d’extraction. 
A partir d’un corpus annoté, l’objectif est de trouver un ensemble minimal de règles permettant d’atteindre les meilleures performances, c’est-à-dire la meilleure balance entre précision et rappel (voir la section 2.5 pour une définition de ces mesures). Ces approches sont soit ascendantes ("bottom-up" en anglais) lorsqu’elles ont pour point de départ un ensemble de règles très spécifiques et opèrent des généralisations pour augmenter la couverture du système [Califf and Mooney, 2003] ; soit descendantes ("top-down" en anglais) lorsqu’elles partent d’un ensemble de règles très génériques pour arriver, par spécialisations successives, à un système plus précis [Soderland, 1999]. [Muslea, 1999] propose un tour d’horizon des grands types de règle résultant d’un apprentissage automatique. Toutes ces méthodes reposent généralement sur des pré-traitements linguistiques dits "classiques" comme la "tokenization" (découpage en mots), la lemmatisation (attribution de la forme non-fléchie associée), l’analyse morphologique (structure et propriétés d’un mot) ou syntaxique (structure d’une phrase et relations entre éléments d’une phrase). Notons ici l’importance particulière accordée à l’analyse syntaxique (en constituants ou dépendance) dans le repérage et le typage des relations et des événements. Nous détaillons par la suite quelques techniques couramment mises en œuvre selon le type d’objet à extraire : entité nommée, relation ou événement. 2.2.1 Extraction d’entités nommées et résolution de coréférence La reconnaissance automatique d’entités nommées fut consacrée comme l’une des tâches principales de l’EI lors de la 6ème campagne d’évaluation MUC. Les systèmes alors proposés s’attellent à l’extraction de trois types d’entités définis sous les noms "ENAMEX", "TIMEX" et "NUMEX" correspondant respectivement aux noms propres (personnes, lieux, organisations, etc.), entités temporelles et entités numériques. Cette classification n’est pas la seule dans la littérature, [Daille et al., 2000] aborde notamment la distinction entre les catégorisations dites "référentielles" (telles que celle de MUC) et celles dites "graphiques". Les premières classent les entités nommées selon la nature des objets du monde auxquels elles renvoient tandis que les secondes proposent une classification selon la composition graphique des 43 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 2. Extraction automatique d’information entités. A titre d’exemple, une classification inspirée de [Jonasson, 1994] distingue les entités nommées "pures simples", des EN "pures complexes" et des EN "mixtes". Quelque soit la catégorisation choisie, celle-ci est fixée a priori dans la majorité des applications. A l’inverse, les récents travaux autour de la construction d’ontologies et de bases de connaissances à partir de textes mettent en œuvre une extraction "tous types confondus" pour en déduire a posteriori une classification. L’extraction des entités nommées est généralement déroulée en deux phases : leur repérage dans le texte et leur typage selon la catégorisation pré-définie. [Nadeau and Sekine, 2007] propose un état de l’art des différentes approches explorées jusqu’à nos jours pour réaliser cette tâche : on y retrouve l’opposition "classique" (abordée dans la section précédente) entre méthodes symboliques et statistiques ainsi que l’intérêt récent pour les approches hybrides. Par ailleurs, [Friburger, 2006] présente les aspects linguistiques les plus communément exploités par les systèmes de REN. 
Les approches à base de règles linguistiques sont les plus anciennes et, bien que coûteuses car elles sont définies manuellement par un expert, elles s'avèrent généralement plus rapides à l'exécution et plus personnalisables. L'ensemble de règles s'apparente à une grammaire locale et vise à définir de façon la plus exhaustive possible les différents contextes d'apparition de telle entité. De nombreux formats d'expression de règles d'extraction existent (CPSL [Appelt and Onyshkevych, 1998], JAPE [Cunningham et al., 2000], DataLog [Ceri et al., 1989], etc.) et celles-ci partagent la structure générique suivante : règle : contexte → action. La partie contexte est composée de plusieurs propositions décrivant un contexte textuel au moyen de diverses caractéristiques linguistiques et formelles. Le repérage des entités nommées se base généralement sur la présence de majuscules puis leur catégorisation est réalisée grâce à des listes de noms propres connus ou de mots dits "catégorisants" (par exemple les prénoms). Ces caractéristiques peuvent être des attributs associés aux tokens (unités lexicales correspondant plus ou moins à un découpage en mots), à des segments plus larges tels que les syntagmes, à la phrase ou au texte dans son entier. Elles peuvent provenir de l'entité elle-même ou de son co-texte (respectivement "internal evidence" et "external evidence" [McDonald, 1996]). Lorsque le contexte est repéré dans le corpus à traiter, la partie action de la règle est exécutée. Il s'agit dans la plupart des cas d'apposer une annotation d'un certain type sur tout ou partie du contexte textuel repéré. Afin de gérer d'éventuels conflits lors de l'exécution d'un ensemble de règles, la plupart des systèmes intègrent des politiques d'exécution sous forme d'heuristiques (par exemple : la règle qui produit l'annotation la plus longue est privilégiée) ou d'attribution de priorité à chaque règle. Par ailleurs, les systèmes à base de règles sont couramment implémentés en cascade, c'est-à-dire que les annotations fournies par un premier ensemble de règles peuvent être réutilisées en entrée d'un second ensemble de règles. [Wakao et al., 1996] développe le système LaSIE, une chaîne de traitement symbolique fondée sur des grammaires d'unification exprimées en Prolog. Cet extracteur d'EN pour l'anglais a été évalué sur un ensemble d'articles du Wall Street Journal et atteint une F-mesure d'environ 92%. A la même période, l'ancien Stanford Research Center propose FASTUS (sponsorisé par la DARPA), un automate à états finis non-déterministe pour l'extraction d'entités nommées (sur le modèle de MUC-4) dans des textes en anglais et en japonais [Hobbs et al., 1997]. Plus récemment, [Maurel et al., 2011] présente le système open-source CasEN dédié au traitement des textes en français et développé grâce au logiciel CasSys (fourni par la plateforme Unitex) facilitant la création de cascades de transducteurs. Cet outil a notamment été testé lors de la campagne d'évaluation ESTER 2 65 (voir la section 2.5) dont l'objectif est de comparer les performances des extracteurs d'EN sur des transcriptions de la parole. Bien que les approches statistiques bénéficient d'un essor plus récent, de nombreuses techniques d'extraction ont été et sont encore explorées.
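Avant d'aborder ces approches statistiques, l'esquisse ci-dessous illustre, de façon volontairement simplifiée, la structure "règle : contexte → action" présentée plus haut : le contexte est ici une expression régulière combinant un mot catégorisant (un titre de civilité) et une suite de mots à majuscule initiale, et l'action consiste à poser une annotation de type Personne. Il ne s'agit que d'un schéma de principe en Python, sans rapport avec la syntaxe exacte de formalismes comme JAPE ou CPSL.

```python
import re

# Contexte : un titre de civilité suivi d'un ou deux mots à majuscule initiale
CONTEXTE_PERSONNE = re.compile(
    r"\b(M\.|Mme|Dr|Général)\s+([A-ZÉÈ][\w-]+(?:\s+[A-ZÉÈ][\w-]+)?)"
)

def appliquer_regle(texte):
    """Action : pour chaque contexte reconnu, produire une annotation (type, empan, forme)."""
    annotations = []
    for m in CONTEXTE_PERSONNE.finditer(texte):
        annotations.append({
            "type": "Personne",
            "debut": m.start(2),
            "fin": m.end(2),
            "texte": m.group(2),
        })
    return annotations

print(appliquer_regle("Le Général Dupont a rencontré Mme Martin à Lyon."))
# deux annotations de type Personne : "Dupont" et "Martin"
```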
Le principe sous-jacent de ces méthodes est d’apprendre, à partir de textes pré-annotés avec les entités-cibles, un modèle de langage qui, appliqué sur un nouveau corpus, permettra d’extraire de nouvelles entités du même type. La majorité de ces approches se fonde sur un corpus d’apprentissage segmenté en tokens auxquels est associé un certain nombre de caracté- ristiques représentées par des vecteurs. Ces caractéristiques peuvent être intrinsèques au token (telles que sa forme de surface, sa longueur, sa catégorie grammaticale, etc.), liées à sa graphie (sa casse, la présence de caractères spécifiques, etc.) ou encore provenir de ressources externes (bases d’entités nommées connues, liste de mots catégorisants, etc.). A partir de ces différentes informations fournies dans le corpus d’apprentissage, la reconnaissance d’entités nommées est traitée comme un problème de classi- fication. On distingue d’une part, des classifieurs dits linéaires tels que les modèles à base de régression logistique ou les machines à vecteurs de support [Isozaki and Kazawa, 2002] et, d’autre part, des classifieurs graphiques probabilistes tels que les HMMs [Zhao, 2004] ou les CRFs [Lafferty et al., 2001]. Le second type de classification est généralement plus performant car il permet de prendre en compte, lors de l’apprentissage du modèle d’annotation, les dépendances de classe entre tokens voisins. La principale limite de toutes ces approches reste le fait qu’elles construisent leur modèle au niveau token et ne permettent pas d’exploiter d’autres caractéristiques à un niveau de granularité plus élevé. Pour pallier à ce problème, il existe des techniques d’apprentissage statistique fondées sur un découpage en segments (en groupes syntaxiques, par exemple) ou sur une analyse de la structure globale des textes [Viola and Narasimhan, 2005]. Toutefois, les CRFs restent le modèle statistique le plus utilisé actuellement en REN de par leurs bonnes performances et leur capacité d’intégration de diverses caractéristiques [Tkachenko and Simanovsky, 2012]. Par ailleurs, les chercheurs en EI s’intéressent ces dernières années à combiner des techniques issues des approches symboliques et statistiques pour améliorer les performances de leurs systèmes d’extraction. [Charnois et al., 2009], par exemple, propose une approche semi-supervisée par apprentissage de patrons linguistiques pour l’extraction d’entités biomédicales (ici les noms de gènes). Ce travail met en œuvre une extraction de motifs séquentiels fréquents par fouille de textes et sous contraintes. Cette approche permet une extraction plus performante en utilisant un nouveau type de motif appelé LSR (utilisation du contexte du motif pour augmenter sa précision). Par ailleurs, [Charton et al., 2011] s’intéresse à l’extraction d’entités nommées en utilisant des motifs d’extraction extraits à partir de Wikipedia pour compléter un système d’apprentissage statistique par CRF. Les auteurs exploitent ici le contenu riche en noms propres et leurs variantes des ressources encyclopédiques telles que Wikipedia pour en extraire des patrons linguistiques. Ces résultats sont ensuite fusionnés avec ceux de l’approche statistique et une amélioration de la REN est constatée après évaluation sur le corpus de la campagne ESTER 2. 
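L'esquisse suivante illustre ce principe de classification au niveau token avec la bibliothèque sklearn-crfsuite (un choix d'implémentation parmi d'autres) : chaque token est décrit par un vecteur de caractéristiques et le CRF apprend à lui associer une étiquette d'entité au format BIO. L'exemple est volontairement minuscule ; un système réel exige un corpus annoté conséquent.

```python
import sklearn_crfsuite

def traits_token(phrase, i):
    """Caractéristiques d'un token : forme, graphie et contexte immédiat."""
    mot = phrase[i]
    return {
        "mot.lower": mot.lower(),
        "mot.istitle": mot.istitle(),
        "mot.isdigit": mot.isdigit(),
        "prefixe3": mot[:3],
        "mot_prec": phrase[i - 1].lower() if i > 0 else "<DEBUT>",
        "mot_suiv": phrase[i + 1].lower() if i < len(phrase) - 1 else "<FIN>",
    }

# Corpus d'apprentissage jouet, annoté au format BIO (B-PER, I-PER, B-LOC, O)
phrases = [["Barack", "Obama", "visite", "Paris"],
           ["Angela", "Merkel", "quitte", "Berlin"]]
etiquettes = [["B-PER", "I-PER", "O", "B-LOC"],
              ["B-PER", "I-PER", "O", "B-LOC"]]

X = [[traits_token(p, i) for i in range(len(p))] for p in phrases]
y = etiquettes

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)

test = ["François", "Hollande", "visite", "Berlin"]
print(crf.predict([[traits_token(test, i) for i in range(len(test))]]))
```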
L’inconvé- nient ici étant le besoin de données annotées, d’autres méthodes proposent d’interagir avec l’utilisateur : celui-ci fournit un petit ensemble d’exemples permettant d’obtenir un premier jeu de règles et celui-ci est amélioré de façon interactive et itérative [Ciravegna, 2001]. D’autres travaux tels que [Mikheev et al., 1999] choisissent de limiter la taille des gazetteers utilisés pour la REN et montrent que, tout en allégeant fortement la phase de développement de leur système (une approche hybride combinant un système à base de règles à un modèle probabiliste à Maximum d’Entropie), cela à un impact faible sur ses performances. [Fourour, 2002] propose Nemesis, un outil de REN fondé sur une approche incrémentielle se 65. Évaluation des Systèmes de Transcription Enrichie d’Émissions Radiophoniques, ❤tt♣✿✴✴✇✇✇✳❛❢❝♣✲♣❛r♦❧❡✳♦r❣✴ ❝❛♠♣❴❡✈❛❧❴s②st❡♠❡s❴tr❛♥s❝r✐♣t✐♦♥✴ 45 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 2. Extraction automatique d’information déroulant en trois phases : un premier jeu de règles est appliqué, celui-ci est suivi d’une étape d’apprentissage pour améliorer ce jeu, puis un second repérage est effectué grâce au jeu de règles amélioré. Pour finir, beaucoup de travaux se penchent sur la résolution de coréférence entre entités nommées. En effet, comme introduit dans la section 2.1, il est courant de faire référence à une entité du monde réel (une personne, un lieu, une organisation, etc.) de plusieurs manières : par son/ses nom(s) propre(s) (par exemple, "Paris" ou "Paname"), par une expression la décrivant ("la capitale de la France") ou encore par un groupe nominal ou un pronom en contexte ("cette ville", "elle"). Ainsi, lorsqu’un texte contient différentes mentions d’une même entité, il apparait intéressant de pouvoir les lier (on obtient ainsi des chaines de référence) pour indiquer qu’elles font référence à un seul et même objet réel, qu’il y a donc coréférence. Cette problématique n’est pas triviale et peut s’appliquer à un seul et même document mais aussi au sein d’un corpus (coréférence entre entités provenant de textes distincts). La résolution de coréférence peut se faire de façon endogène ou exogène (en utilisant des ressources externes), en utilisant des techniques linguistiques, statistiques ou hybrides. Cette tâche fait notamment partie de campagnes d’évaluation telles que ACE (Global Entity Detection and Recognition). [Bontcheva et al., 2002] adopte une approche à base de règles pour la résolution de coréférence entre ENs et pronominale. Ces travaux ont été implémentés sous la forme d’un transducteur à états finis au sein de la plateforme GATE et sont intégrés à la chaine de traitement ANNIE (il s’agit des modules nommés Orthomatcher et Pronominal Corerefencer). Un exemple d’approche statistique et exogène est celle présentée par [Finin et al., 2009] où la base de connaissances Wikitology (construites à partir de Wikipedia, DBPedia et FreeBase) est utilisée en entrée d’un classifeur SVM (avec une trentaine d’autres caractéristiques textuelles) pour la construction de chaines de référence. 2.2.2 Extraction de relations L’extraction de relations consiste à détecter et typer un lien exprimé textuellement entre deux entités. Cette tâche a notamment été proposée à l’évaluation lors de la campagne MUC-7, évaluation poursuivie par les campagnes ACE à partir de 2002. 
Les avancées dans cette problématique proviennent essentiellement des travaux d’EI menés dans les domaines de la médecine et de la biologie, notamment pour l’analyse des rapports médicaux et expérimentaux. Les premiers systèmes développés sont fondés sur un ensemble de règles d’extraction dont les plus simples définissent des patrons sous forme de triplets du type : e1 relation e2 (où e1 et e2 sont deux entités reconnues au préalable). La majorité des relations n’étant pas exprimées aussi simplement dans les textes réels, les règles d’extraction doivent être plus élaborées et intégrer d’autres caractéristiques textuelles situées soit entre les deux entités visées soit autour du triplet relationnel. Comme pour la REN, les systèmes complètent couramment des caractéristiques de mot et une analyse structurelle par des indices sémantiques. [Muller and Tannier, 2004] présente une méthode symbolique comprenant une analyse syntaxique pour la détection de relations temporelles entre entités de type "événement". Dans le domaine médical, [Fundel et al., 2007] développe RelEx, un outil visant à extraire les interactions entre protéines et gènes et dont l’évaluation (sur le corpus MEDLINE) a montré de bonnes performances (précision et rappel d’environ 80%). [Nakamura-Delloye and Villemonte De La Clergerie, 2010] propose une méthode d’extraction de relations entre entités nommées basée sur une analyse en dépendance. Leur idée est de repérer les chemins syntaxiques entre deux entités (i.e. l’ensemble des relations de dépendance qu’il faut parcourir pour relier ces deux entités) afin de construire par généralisation des groupes de patrons de relations syntaxiques spécifiques à tel type de relation sémantique. 46 Copyright c 2013 - CASSIDIAN - All rights reserved2.2. Approches d’extraction De nombreux travaux se tournent également vers l’apprentissage automatique de patrons de relation afin de faciliter ou de remplacer le travail manuel de l’expert. [Cellier et al., 2010], par exemple, s’attache au problème de la détection et du typage des interactions entre gènes par apprentissage de règles linguistiques. Leur approche réutilise une technique employée à l’origine en fouille de données : l’extraction de motifs séquentiels fréquents. Celle-ci permet d’apprendre à partir d’un corpus annoté un ensemble de régularités pour les transformer, après validation par un expert, en règles d’extraction. Il faut noter ici que le corpus d’apprentissage n’est pas annoté avec les relations-cibles mais uniquement avec des caractéristiques de plus bas niveau (entités nommées de type "gène", catégories morpho-syntaxiques, etc.). De plus, l’ajout de contraintes permet de diminuer la quantité de motifs retournés par le système et ainsi faciliter le tri manuel fait par l’expert. Du côté des approches statistiques, [Rosario and Hearst, 2004] s’intéresse à la détection de relations entre maladies et traitements (de sept types distincts dont "cures", "prevents", "is a side effect of", etc.) et compare plusieurs méthodes statistiques pour cette tâche. Une sous-partie du corpus MEDLINE 2011 est annotée manuellement par un expert du domaine afin d’entraîner et tester plusieurs modèles graphiques et un réseau de neurones. Ce dernier obtient les meilleures performances avec une précision d’environ 97%. Une autre approche statistique pour l’extraction de relations est celle de [Zhu et al., 2009] proposant de combiner un processus de "bootstrapping" et un réseau logique de Markov. 
Le premier permet d’initier l’apprentissage à partir d’un petit jeu de relations fournies par l’utilisateur et ainsi diminuer le besoin en données annotées. De plus, ce travail exploite les capacités d’inférence permises par les réseaux logiques de Markov afin d’augmenter les performances globales de leur système StatSnowball. La plupart des approches supervisées étant dépendantes du domaine de leur corpus d’apprentissage, [Mintz et al., 2009] s’intéresse à une supervision dite "distante" en utilisant la base de connaissances sémantique Freebase 66. Le principe de ce travail est de repérer dans un corpus de textes brut des paires d’entités étant en relation dans Freebase et d’apprendre des régularités à partir du contexte textuel de cette paire. L’apprentissage est implémenté ici sous la forme d’un classifieur à logique de régression multi-classes et prend en compte des caractéristiques lexicales (par exemple, la catégorie grammaticale des mots), syntaxiques (une analyse en dépendance) et sémantiques (un repérage des entités nommées). Pour finir, nous pouvons citer quelques travaux comme ceux de [Hasegawa et al., 2004] ou [Wang et al., 2011] proposant d’appliquer des techniques de "clustering" à l’extraction de relations. Le premier décrit une méthode non-supervisée d’extraction et de catégorisation de relations entre EN par "clustering". Celui-ci s’opère par une première étape de représentation du contexte de chaque paire d’entités proches en vecteurs de caractéristiques textuelles. Puis, on calcule une similarité cosinus entre vecteurs qui est donnée en entrée d’un "clustering" hiérarchique à lien complet. On obtient ainsi un "cluster" par type de relation, relation nommée en prenant le mot ou groupe de mot le plus fréquent entre paires d’EN au sein du "cluster". D’autre part, [Wang et al., 2011] s’intéresse à la détection de relations en domaine ouvert et de façon non-supervisée. Pour cela, leur approche est d’extraire un ensemble de relations par plusieurs phases de filtrage puis de les regrouper par type en utilisant des techniques de "clustering". La première étape est réalisée par trois filtrages successifs : les phrases contenant deux ENs et au moins un verbe entre les deux sont sélectionnées, puis les phrases non-porteuses de relation sont évacuées par des heuristiques et enfin par un apprentissage à base de CRFs. Une fois l’ensemble des relations pertinentes extraites, celles-ci sont regroupées par type sémantique en utilisant un algorithme de "clustering de Markov". Cette méthode ne nécessite pas d’annoter les relations dans un corpus d’apprentissage, ni de fixer au préalable les différents types de relation à extraire. 66. ❤tt♣✿✴✴✇✇✇✳❢r❡❡❜❛s❡✳❝♦♠✴ 47 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 2. Extraction automatique d’information 2.2.3 Extraction d’événements L’extraction d’événements, parfois considérée comme une extraction de relations n-aires, consiste à repérer dans un ou plusieurs textes des événements d’intérêt tels que définis dans la section 1.3. Cette tâche peut se résumer par le fait de répondre à la question suivante : "Who did what to whom when and where ?". Certains modèles y ajoutent les questions "how ?" et "why ?". La littérature dans ce domaine montre que l’extraction des événements regroupe généralement plusieurs sous-tâches [Ahn, 2006] : 1. détection des marqueurs d’événement ; 2. affectation des attributs ; 3. identification des arguments ; 4. estimation des rôles ; 5. résolution de coréférence. 
Plusieurs campagnes MUC 67 s'y sont intéressées avec notamment des tâches de remplissage automatique de formulaires ("template filling"). Comme en extraction d'information de façon générale, la littérature du domaine offre à la fois des travaux basés sur des approches symboliques et des techniques purement statistiques. La première approche symbolique retenue est décrite dans [Aone and Ramos-Santacruz, 2000] : il s'agit du système REES 68 permettant l'extraction de relations et d'événements à grande échelle. Cet outil repose sur l'utilisation combinée de lexiques et de patrons syntaxiques pour la détection d'événements principalement basés sur des verbes. Ces lexiques correspondent à une description syntaxique et sémantique des arguments de chaque verbe déclencheur d'événement. Ces informations sont par la suite réutilisées au sein des patrons syntaxiques décrivant les différents contextes d'apparition d'un événement. Dans la lignée, [Grishman et al., 2002b] s'intéresse à la détection d'événements épidémiques au moyen d'un transducteur à états finis. Par ailleurs, le système d'extraction IE2 (Information Extraction Engine) a été développé par la société américaine SRA International spécialisée dans le traitement de l'information [Aone et al., 1998]. Cet outil a obtenu les meilleures performances (51% de F-mesure) pour la tâche Scenario Template (ST) lors de la campagne d'évaluation MUC-7. Il s'agit d'un extracteur modulaire comprenant 6 modules dont celui nommé "EventTag" permettant le remplissage de scénarios d'événement grâce à des règles syntactico-sémantiques élaborées manuellement. Un autre outil d'EI à base de connaissances est celui proposé par [Appelt et al., 1995] : il s'agit du système FASTUS ayant participé à plusieurs campagnes d'évaluation (MUC-4, MUC-5 et MUC-6). Cet extracteur est fondé sur un ensemble de règles de grammaire développé par des experts et a montré de bonnes performances dans la tâche d'extraction d'événements de MUC-6 69. FASTUS a obtenu une F-mesure de 51% contre 56% pour le meilleur des systèmes de la campagne. Cet outil est fondé sur un formalisme d'expression de règles nommé FASTSPEC visant à faciliter l'adaptation de l'outil à de nouveaux domaines. Du côté des approches statistiques, [Chieu, 2003] développe le système ALICE 70 afin d'extraire des événements par apprentissage statistique. Les auteurs ont évalué quatre algorithmes de classification issus de la suite Weka sur les données-test de la campagne MUC-4. Le corpus d'apprentissage est constitué des documents sources (des dépêches de presse) et des fiches d'événements associées. Les caractéristiques utilisées intègrent notamment une analyse des dépendances syntaxiques par phrase du corpus ainsi que des chaines de coréférence entre entités nommées. Les meilleurs résultats sont obtenus avec un classifieur à Maximum d'Entropie (ALICE-ME) et celui-ci approche les performances du meilleur des participants de la campagne MUC-4. D'autre part, [Jean-Louis et al., 2012] présente un outil d'extraction d'événements sismiques exploitant des techniques d'analyse du discours et d'analyse de graphes.

67. Message Understanding Conference
68. Relation and Event Extraction System
69. La tâche concernait les changements de personnel dans la direction d'une entreprise.
70. Automated Learning-based Information Content Extraction
La reconnaissance des événements s'effectue en trois phases : un découpage des textes selon leur contenu événementiel, la construction d'un graphe des entités reconnues et le remplissage d'un formulaire d'événement par sélection des entités pertinentes dans le graphe. La première phase repose sur un apprentissage statistique par CRF tandis que la seconde est réalisée grâce à un classifieur à Maximum d'Entropie. Le corpus d'apprentissage pour ces deux premières étapes est constitué de dépêches provenant de l'AFP et de Google Actualités pour lesquelles des experts ont manuellement construit les formulaires d'événements. Les caractéristiques pour l'apprentissage (découpage en mots et phrases, détection des ENs, analyse syntaxique, etc.) ont été obtenues grâce à l'analyseur LIMA [Besançon et al., 2010]. Enfin, le remplissage des formulaires d'événement est réalisé par combinaison de plusieurs algorithmes de sélection (PageRank, vote, etc.) afin de choisir la meilleure entité du graphe pour chaque champ du formulaire. Les méthodes d'apprentissage de patrons ou les approches semi-supervisées apparaissent intéressantes, comme par exemple le système de [Xu et al., 2006]. Ceux-ci proposent un outil d'extraction de patrons linguistiques par une méthode de "bootstrapping" appliquée à la détection d'événements tels que des remises de prix ("prize award events"). Cette approche est itérative et faiblement supervisée car elle permet, en partant de quelques exemples d'événements provenant d'une base de données existante, d'apprendre des régularités d'occurrence de ces événements et d'en déduire des patrons d'extraction. Ceux-ci ont ensuite été implémentés sous forme de règles dans une application créée grâce à la plateforme SProUT 71 [Drozdzynski et al., 2004]. Nous pouvons également citer le projet TARSQI 72 (respectant la spécification TimeML) qui a donné lieu au développement du système Evita 73. [Saurí et al., 2005] présente succinctement les principes théoriques sur lesquels repose cet outil ainsi que son fonctionnement général. Les auteurs définissent les verbes, noms et adjectifs comme les trois catégories de mots déclencheurs étant les plus porteuses de sens pour la détection d'événements. Ils détaillent par la suite les différentes méthodes d'extraction associées à chaque type de déclencheur et plus particulièrement les caractéristiques textuelles et grammaticales à prendre en compte. Ainsi, pour la détection d'événements portés par un verbe, Evita opère un découpage en syntagmes verbaux et détermine pour chacun sa tête ; puis, vient une phase de tri lexical pour écarter les têtes ne dénotant pas un événement (verbes d'état, etc.) ; l'on tient ensuite compte des traits grammaticaux du verbe tels que la voix, la polarité (positif/négatif), la modalité, etc. ; et une analyse syntaxique de surface vient aider à l'identification des différents participants de l'événement. Pour finir, [Huffman, 1995] propose LIEP 74, un système de découverte de patrons d'extraction dont les résultats sont utilisés par l'extracteur d'événements symbolique ODIE 75. Dans cette approche, on propose à l'utilisateur une interface lui permettant de remplir une fiche d'événement correspondant à une phrase donnée. Ces éléments sont ensuite utilisés pour apprendre par une approche ascendante (voir section 2.2) un ensemble de patrons récurrents qui sont ensuite ajoutés en tant que nouveaux chemins dans le transducteur à états finis de l'outil ODIE.
Les auteurs de ce système montrent que LIEP approche les performances d’un système purement symbolique avec une F-mesure de 85% contre 89% pour ODIE. 71. Shallow Processing with Unification and Typed feature structures 72. Temporal Awareness and Reasoning Systems for Question Interpretation 73. Events In Texts Analizer 74. Learning Information Extraction Patterns 75. On-Demand Information Extraction 49 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 2. Extraction automatique d’information 2.3 Plateformes et logiciels pour l’EI Ces années de recherche en EI ont donné lieu comme vu précédemment à de nombreux travaux et, par conséquent, au développement de nombreux outils d’extraction d’information. Afin de faciliter ces développements et leur réutilisation au sein de chaines plus complexes, est apparu un nombre important de ce qu’on pourrait appeler des boites à outils pour l’EI et le TAL plus généralement. Celles-ci partagent une même visée finale, à savoir le traitement automatique des textes, mais se différencient sur plusieurs points. Elles proviennent tout d’abord de milieux différents, soit académique, soit industriel, soit sont issues d’une collaboration entre laboratoire(s) de recherche et entreprise(s) au sein d’un projet commun. De plus, il peut s’agir soit de simples entrepôts d’outils et algorithmes, soit de véritables plateformes d’intégration de modules hétérogènes. Dans ce cas, on pourra constater des choix d’architecture distincts pour la combinaison et l’enchainement de ces différents modules. Par ailleurs, la diversité et la complexité des documents traités constitue également un facteur de variation (différents formats, langues, domaines, structures, etc.). Cette hétérogénéité des contenus se traduit naturellement par différents choix de représentation de l’information, même si le format XML tend à se généraliser. Enfin, ces boites à outils ne donnent pas la même priorité à l’interaction avec l’utilisateur et ne se dotent pas des mêmes moyens de visualisation [Enjalbert, 2008]. Nous présentons ici un rapide tour d’horizon de ces boites à outils, centré sur différentes plateformes et suites logicielles distribuées en open-source et/ou gratuitement. Précisons tout d’abord que cette liste est non-exhaustive et a été constituée, sans ordre préférentiel, au fil de notre état de l’art et de nos travaux. OpenCalais Tout d’abord, la société Thomson Reuters (qui a racheté ClearForest) propose plusieurs services autour de l’extraction d’information regroupés sous le nom OpenCalais 76. Celle-ci a mis en place OpenCalais Web Service, un outil en ligne d’extraction d’entités nommées, relations et évènements. Cet outil ainsi que les divers plugins qui l’accompagnent (Marmoset, Tagaroo, Gnosis, etc.) sont utilisables gratuitement pour usage commercial ou non. Le service d’annotation en ligne permet de traiter des textes en anglais, français et espagnol grâce à une détection automatique de la langue du texte fourni. Il extrait pour toutes ces langues un nombre conséquent de types d’entités nommées (villes, organisations, monnaies, personnes, e-mails, etc.) et attribue également un indice de pertinence/intérêt à chacune des extractions. L’analyse des textes en anglais est plus complète : extraction d’événements et de relations, désambiguïsation d’entités, détection de thème, association automatique de mots-clés ("semantic tags"), etc. Toutes ces annotations peuvent être récupérées au format RDF 77. 
Enfin, précisons qu'OpenCalais fournit aussi bien des modules fondés sur des techniques linguistiques que des méthodes statistiques ou hybrides.

76. http://www.opencalais.com/
77. Resource Description Framework

LingPipe

LingPipe, développé par Alias-i 78, constitue une autre véritable "boîte à outils" pour l'analyse automatique de textes. Divers outils y sont disponibles gratuitement pour la recherche et, parmi ceux-ci, une majorité relèvent plus ou moins directement du domaine de l'extraction d'information. D'une part, des modules de pré-traitement permettent de « préparer » le texte pour la phase d'extraction : analyse morpho-syntaxique, découpage en phrases, désambiguïsation sémantique de mots. D'autre part, LingPipe met à disposition des modules de détection d'entités nommées, de phrases d'intérêt, d'analyse d'opinion et de classification thématique. Ces traitements sont tous réalisés par approche statistique et notamment par l'utilisation de CRF et d'autres modèles d'apprentissage (spécifiques à une langue, un genre de texte ou un type de corpus).

OpenNLP

Également reconnu, le groupe OpenNLP 79 rassemble un nombre important de projets open-source autour du Traitement Automatique du Langage. Son objectif principal est de promouvoir ces initiatives et de favoriser la communication entre acteurs du domaine pour une meilleure interopérabilité des systèmes. En extraction d'information, nous pouvons retenir les projets NLTK 80, MALLET 81, Weka 82 ou encore FreeLing 83. Le premier correspond à plusieurs modules en Python pouvant servir de base au développement de son propre outil d'extraction. Le second projet, MALLET, est un logiciel développé par les étudiants de l'université du Massachusetts Amherst sous la direction d'Andrew McCallum, expert du domaine [McCallum, 2005]. Ce logiciel inclut différents outils pour l'annotation de segments (entités nommées et autres), tous basés sur des techniques statistiques de type CRF, HMM et MEMM 84. Dans la même lignée, Weka est une suite de logiciels gratuits de "machine learning" développés à l'Université de Waikato (Nouvelle-Zélande) et distribués sous licence GNU GPL 85 [Hall et al., 2009]. Enfin, FreeLing [Padró and Stanilovsky, 2012] est une suite d'analyseurs de langage comprenant des modules de découpage en phrases, de tokenisation, lemmatisation, étiquetage grammatical, analyse syntaxique en dépendance, etc. Ce projet propose notamment une palette d'outils de traitement automatique pour la langue espagnole.

GATE

GATE est une plateforme open-source Java dédiée à l'ingénierie textuelle [Cunningham et al., 2002]. Créée il y a une vingtaine d'années par les chercheurs de l'université de Sheffield (Royaume-Uni), GATE est largement utilisée par les experts en TAL et dispose d'une grande communauté d'utilisateurs. Cela lui permet de disposer d'un ensemble de solutions d'aide et de support (forum, liste de diffusion, foire aux questions, wiki, tutoriels, etc.). Par ailleurs, les créateurs de GATE proposent des formations ainsi que des certifications permettant de faire valoir ses compétences à l'utilisation de cette plateforme.

78. http://alias-i.com/lingpipe/
79. Open Natural Language Processing, http://opennlp.sourceforge.net/
80. Natural Language ToolKit, http://www.nltk.org/
81. Machine Learning for LanguagE Toolkit, http://mallet.cs.umass.edu/
82. Waikato Environment for Knowledge Analysis, http://www.cs.waikato.ac.nz/ml/weka/
83.
❤tt♣✿✴✴♥❧♣✳❧s✐✳✉♣❝✳❡❞✉✴❢r❡❡❧✐♥❣✴ 84. Maximum Entropy Markov Models 85. General Public License 51 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 2. Extraction automatique d’information LinguaStream Par ailleurs, le GREYC 86 développe depuis 2001 la plateforme LinguaStream 87, un environnement intégré orienté vers la pratique expérimentale du TALN. Celui-ci permet un assemblage visuel de modules de traitement hétérogènes pour des applications en EI, RI, veille, résumé automatique, enseignement, etc. Ces enchainements de modules sont représentés sous forme de graphes acycliques et l’ensemble des données traitées est sérialisé en XML. [Widlocher et al., 2006] présente une application (réalisée dans le cadre de DEFT’2006) de segmentation thématique de textes implémentée grâce à la plateforme LinguaStream. Unitex La boite à outils Unitex est principalement développée par Sébastien Paumier à l’Institut Gaspard Monge (Université de Paris-Est Marne-la-Vallée) [Paumier, 2003]. Il s’agit d’une plateforme open-source multilingue pour le traitement en temps réel de grandes quantités de textes en langage naturel. Unitex permet d’appliquer des traitements divers sur les textes tels que le repérage de patrons linguistiques sous forme d’expressions régulières ou d’automates, l’application de lexiques et de tables, etc. Cela y est implémenté par des réseaux de transition récursifs définissables graphiquement. Cet outil propose également la production de concordances ou encore un ensemble d’études statistiques en corpus. Nooj Créateur de la plateforme Intex au LADL 88 (sous la direction du Professeur Maurice Gross), le Professeur Max Silberztein continue ses travaux depuis 2002 en proposant NooJ 89, un environnement de développement dédié au traitement du langage naturel écrit [Silberztein et al., 2012]. Cette suite propose des modules de traitement pour une vingtaine de langues (anglais, français, portugais, arabe, chinois, hébreu, etc.) et gère plus d’une centaine de formats de fichier d’entrée. Comme dans beaucoup de plateformes de ce type, les données y sont manipulées au format XML et enrichies grâce aux annotations fournies par les différents composants appliqués en cascade. Une communauté, en majorité européenne, s’est formée autour de cette plateforme donnant lieu depuis 2005 à une conférence NooJ annuelle réunissant divers travaux réalisés grâce à cet outil ainsi qu’à des tutoriels et ateliers réguliers. Stanford NLP Group et Ontotext Pour finir, mentionnons également les groupes de recherche Stanford NLP Group 90 et Ontotext 91 dont les travaux sont intégrés dans GATE. L’équipe de l’université de Stanford en Californie, a créé différents outils de TAL très utiles pour l’extraction d’information : un analyseur syntaxique probabiliste pour l’anglais, un étiqueteur morpho-syntaxique ainsi qu’un système d’extraction d’entités nommées qui 86. Groupe de Recherche en Informatique, Image, Automatique et Instrumentation de Caen 87. ❤tt♣✿✴✴✇✇✇✳❧✐♥❣✉❛str❡❛♠✳♦r❣ 88. Laboratoire d’Automatique Documentaire et Linguistique - Université de Paris-Est Marne-la-Vallée 89. ❤tt♣✿✴✴✇✇✇✳♥♦♦❥✹♥❧♣✳♥❡t✴ 90. ❤tt♣✿✴✴♥❧♣✳st❛♥❢♦r❞✳❡❞✉✴ 91. ❤tt♣✿✴✴✇✇✇✳♦♥t♦t❡①t✳❝♦♠✴ 52 Copyright c 2013 - CASSIDIAN - All rights reserved2.4. Applications reconnaît les noms de personne, d’organisation et de lieu. Ontotext développe ses activités autour des technologies sémantiques et diffuse gratuitement la plateforme KIM 92 pour un usage non-commercial. 
Celle-ci propose de créer des liens sémantiques entre documents mais aussi d’extraire les entités nommées, relations et événements d’un texte et de les stocker automatiquement dans une base de données. 2.4 Applications Les applications possibles de l’extraction automatique d’information sont à l’heure actuelle nombreuses et ne cessent de croître avec les avancées de la recherche (en particulier dans le domaine du Web). Dans cet ensemble d’applications, un petit nombre est historique et continue de susciter l’inté- rêt depuis les débuts de l’EI : il s’agit notamment de l’analyse des "news", du domaine biomédical ou encore de la veille économique et stratégique. D’autres usages sont plus récents et coïncident avec l’apparition de nouvelles technologies et des nouveaux besoins utilisateur ou techniques qui en découlent. Nous proposons ici un aperçu (non-exhaustif et général) de quelques cas d’application et des travaux de la littérature associés. Tout d’abord, un grand nombre de travaux s’intéressent à l’utilisation des outils d’EI pour des besoins de veille : celle-ci peut être au service d’une entreprise, d’une entité gouvernementale ou encore d’un particulier souhaitant rester informé sur un sujet donné. Cette veille peut aider à la protection des populations par la prévention des épidémies ([Lejeune et al., 2010], [Grishman et al., 2002a], [Chaudet, 2004]) ou des événements sismiques [Besançon et al., 2011], par exemple. Par l’analyse automatique de différents types de sources d’information (rapports médicaux, dépêches de presse, réseaux sociaux, etc.), l’objectif est d’anticiper autant que possible ce genre de catastrophes et de suivre leur propagation pour assister les forces de secours par exemple. Les gouvernements portent aussi un grand intérêt à l’EI pour automatiser leurs processus de veille stratégique et militaire. [Zanasi, 2009], [Capet et al., 2008], [Tanev et al., 2008] ou encore [Pauna and Guillemin-Lanne, 2010] présentent leurs travaux pour une évaluation et un suivi du risque militaire et/ou civil, national et/ou international. [Hecking, 2003] se concentre sur l’analyse automatique des rapports écrits par les militaires, [Sun et al., 2005] et [Inyaem et al., 2010a] de leur côté s’intéressent à la prévention des actes de terrorisme. Enfin, [Goujon, 2002] présente une extraction automatique des événements appliquée à la crise de 2002 en Côte d’Ivoire. Dans le domaine économique, cette veille vise essentiellement à cerner des communautés de consommateurs (par exemple à partir du contenu des blogs [Chau and Xu, 2012]), à assurer un suivi des technologies et/ou produits d’un secteur donné pour les besoins d’une entreprise [Zhu and Porter, 2002] ou encore à améliorer son service-client par analyse des conversations téléphoniques [Jansche and Abney, 2002]. Pour tous les types de veille mis en œuvre, les acteurs du domaine s’intéressent également à adapter les processus d’EI pour garantir un traitement des informations en temps réel [Piskorski and Atkinson, 2011] [Liu et al., 2008]. Cette "fraicheur" des informations est particulièrement importante pour les analystes financiers et le suivi des évolutions des bourses par exemple [Borsje et al., 2010]. Par ailleurs, les chercheurs en EI se mettent au service d’autres sciences telles que la médecine et la biologie en permettant l’analyse automatique de rapports médicaux pour l’extraction d’interactions entre protéines, gènes, etc. [Rosario and Hearst, 2005] [Bundschus et al., 2008]. 
Dans un autre domaine d’intérêt public, la Commission Européenne a financé le projet PRONTO pour la détection d’événements dans les réseaux de transport public [Varjola and Löffler, 2010]. Les techniques d’EI sont également 92. Knowledge and Information Management 53 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 2. Extraction automatique d’information exploitées pour aider les acteurs judiciaires avec notamment l’analyse automatique des rapports de police [Chau et al., 2002]. Un autre cas d’application est le traitement automatique des publications scientifiques pour la construction de bases de citations telles que Citeseer [Lawrence et al., 1999]. De plus, comme mentionné en introduction de ce chapitre (voir la section 2.1), les résultats des outils d’extraction peuvent servir à d’autres disciplines de l’IA telles que la recherche d’information [Vlahovic, 2011], les systèmes de questions-réponses [Saurí et al., 2005], la construction des ontologies [Vargas-Vera and Celjuska, 2004] [Piskorski et al., 2007], le résumé automatique de documents [Radev et al., 2001], etc. Enfin, avec la récente démocratisation de l’IA et l’entrée dans les foyers de nouvelles technologies de l’information, beaucoup de recherches en EI concernent des applications dédiées au grand public. Les systèmes de recommandation en ligne en bénéficient [Luberg et al., 2012] [Ittoo et al., 2006], mais aussi les logiciels anti-spam [Jason et al., 2004] ou encore les sites de recherche d’emploi [Califf and Mooney, 2003] [Ciravegna, 2001]. De plus, l’important volume de données récemment accessibles par le biais des réseaux sociaux, des sites de partage, collaboratifs, etc. a ouvert la voie à de nouvelles applications orientées vers le Web et en particulier le Web Sémantique. [Popescu et al., 2011] et [Sayyadi et al., 2009] s’intéressent à l’extraction d’événements dans les flux sociaux (Twitter notamment). [Nishihara et al., 2009] propose un système de détection des expériences personnelles dans les blogs. On peut également exploiter les méta-données fournies par les utilisateurs, comme le fait [Rattenbury et al., 2007] en analysant les "tags" postés sur Flickr pour en extraire des événements et des lieux dans les photos partagées. Pour finir, citons l’encyclopédie collaborative Wikipedia qui est non seulement une ressource précieuse pour le développement des outils d’EI mais qui bénéficie aussi de ces technologies [Chasin, 2010]. 2.5 Évaluation des systèmes d’EI 2.5.1 Campagnes et projets d’évaluation En parallèle des nombreux systèmes d’EI développés ces dernières années (dont certains ont été présentés dans les sections précédentes), la communauté des chercheurs a mis en place un certain nombre de campagnes d’évaluation telles que ACE, MUC, ESTER, CONLL 93, TAC94, etc. Afin de stimuler le développement des techniques d’extraction d’information et de dégager les pistes de recherche les plus prometteuses, ces campagnes d’évaluation sont menées tant au niveau national qu’international. Celles-ci ont pour but de mettre en place un protocole d’évaluation commun permettant aux experts du domaine de mesurer les performances de leurs outils. Les campagnes définissent généralement plusieurs tâches à accomplir telles que l’extraction d’entités nommées, de relations ou encore d’événements, la résolution de coréférence, etc. Le protocole le plus courant est de fournir un corpus d’entraînement et un corpus de test où les éléments à extraire ont été pré-annotés ainsi qu’un ou plusieurs scripts d’évaluation ("scoring"). 
Le corpus d’entraînement permet de préparer l’outil à la tâche d’extraction pour pouvoir ensuite s’auto-évaluer sur le corpus de test et estimer son score grâce aux scripts fournis. Une fois leurs systèmes préparés à la tâche d’évaluation, les participants sont évalués et classés par les organisateurs de la campagne. Ces évaluations s’accompagnent le plus souvent de publications d’articles dans lesquels ceux-ci décrivent leur outil et les techniques mises en œuvre. Cela 93. Conference on Computational Natural Language Learning, ❤tt♣✿✴✴✐❢❛r♠✳♥❧✴s✐❣♥❧❧✴❝♦♥❧❧✴ 94. Text Analysis Conference, ❤tt♣✿✴✴✇✇✇✳♥✐st✳❣♦✈✴t❛❝✴ 54 Copyright c 2013 - CASSIDIAN - All rights reserved2.5. Évaluation des systèmes d’EI permet de mettre en avant les nouvelles approches et de faire le point sur les performances de celles déjà connues. Dans le domaine de l’extraction d’information, les campagnes MUC restent les pionnières et les plus connues au niveau international. Créées au début des années 1990 par la DARPA 95, elles constituent les premières initiatives pour encourager l’évaluation des systèmes d’extraction et ont fortement contribué à l’essor de ce domaine. À l’origine destinées au domaine militaire, les sept séries d’évaluation menées ont permis de diversifier les applications. Celles-ci se caractérisent par la tâche d’extraction consistant à remplir un formulaire à partir d’un ensemble de documents en langage naturel. Certains jeux de données de ces campagnes sont actuellement mis à disposition gratuitement. La DARPA a également initié le « Machine Reading Program » (MRP) : projet visant à construire un système universel de lecture de texte capable d’extraire automatiquement la connaissance du langage naturel pour la transformer en représentation formelle. Celui-ci est destiné à faire le lien entre le savoir humain et les systèmes de raisonnement nécessitant ce savoir. Il s’agit pour cela de combiner les avancées en TALN et en IA. Par ailleurs, nous pouvons citer le programme ACE (Automatic Content Extraction) qui, sous la direction du NIST 96, mène également des campagnes d’évaluation. Spécialisées dans l’analyse d’articles de presse, celles-ci évaluent l’extraction d’entités nommées et la résolution de co-référence (mentions d’entités nommées). Aujourd’hui, la campagne TAC (Text Analysis Conference) a pris la suite des actions menées dans le cadre du programme ACE. Toujours à l’échelle mondiale, les campagnes CoNLL (Conference on Natural Language Learning) évaluent et font la promotion des méthodes d’extraction par apprentissage. Celles-ci sont classées parmi les meilleures conférences internationales dans le domaine de l’intelligence artificielle. Ce succès est en partie du au fait que ces conférences sont dirigées par l’ACL (Association of Computational Linguistics), la plus réputée des associations de linguistique et informatique. Celle-ci est aussi à l’origine des conférences Senseval/Semeval spécialisées dans l’évaluation des outils de désambiguïsation sémantique, point crucial en extraction d’information. En Europe, l’association ELRA (European Language Ressources Association) a mis en place les conférences LREC (Language Ressources and Evaluation Conference). Lors de celles-ci les différents acteurs en ingénierie linguistique présentent de nouvelles méthodes d’évaluation ainsi que divers outils liés aux ressources linguistiques. De plus, cette association participe à l’évaluation de systèmes divers en fournissant les corpus et données nécessaires. 
Enfin, il nous faut citer la campagne française ESTER (Évaluation des Systèmes de Transcription Enrichie d'Émissions Radiophoniques) qui, entre autres activités, évalue le repérage d'entités nommées appliqué à des textes issus de transcription de la parole. Les mesures d'évaluation les plus communément utilisées en extraction d'information sont la précision, le rappel et la F-mesure. Ces métriques peuvent être définies ainsi :

\[ \text{Précision}_i = \frac{\text{nombre d'entités correctement étiquetées } i}{\text{nombre d'entités étiquetées } i} \]

\[ \text{Rappel}_i = \frac{\text{nombre d'entités correctement étiquetées } i}{\text{nombre d'entités } i} \]

\[ \text{F-mesure}_i = \frac{(1 + \beta^2) \cdot (\text{précision}_i \cdot \text{rappel}_i)}{\beta^2 \cdot \text{précision}_i + \text{rappel}_i} \]

où \( \beta \in \mathbb{R}^+ \) est un facteur de pondération permettant de favoriser soit la précision soit le rappel lors du calcul de la F-mesure. La plupart des travaux pondèrent de façon égale la précision et le rappel : on utilise donc le plus fréquemment une F1-mesure définie comme suit :

\[ \text{F1-mesure}_i = \frac{2 \cdot \text{précision}_i \cdot \text{rappel}_i}{\text{précision}_i + \text{rappel}_i} \]

Précisons également que ces métriques sont souvent évaluées lors de la conception des approches afin de favoriser soit la précision soit le rappel de celles-ci en fonction de l'application visée.

95. Defense Advanced Research Projects Agency
96. National Institute of Standards and Technology

2.5.2 Performances, atouts et faiblesses des méthodes existantes

Bien que le système d'évaluation actuel ne soit pas parfait, les différentes campagnes d'évaluation menées depuis plus de vingt ans permettent de dresser un bilan des performances des outils développés et de comparer les atouts et faiblesses des différentes approches adoptées. Comme nous l'avons exprimé précédemment, les avancées en EI sont disparates : elles varient en fonction de nombreux paramètres tels que l'objet ciblé (sa nature et sa complexité intrinsèque), l'ancienneté de la tâche en EI, le domaine/genre des textes analysés, les techniques employées, etc. Tout cela rend très difficile une comparaison quantitative des systèmes développés et un bilan qualitatif nous parait plus approprié. Pour ce faire, nous nous inspirons de [Hogenboom et al., 2011] qui propose une évaluation selon quatre critères : la quantité de données annotées nécessaire, le besoin de connaissances externes, la nécessité d'une expertise humaine et l'interprétabilité des résultats. Nous donnons tout de même, à titre indicatif, quelques résultats (en termes de précision, rappel et F-mesure) issus des campagnes d'évaluation présentées ci-dessus. En premier lieu, les systèmes purement linguistiques, bien que très précis, ont pour principales faiblesses leur taux de rappel moindre et leur coût élevé de développement manuel. Ceux-ci ne nécessitent pas de corpus annoté mais impliquent un fort besoin en expertise humaine et dépendent souvent de connaissances externes (listes de mots, réseaux lexicaux, bases de connaissances, etc.). Ce dernier point se vérifie particulièrement dans le cas des systèmes à base de règles lexico-sémantiques. Un avantage non-négligeable de ces approches reste leur caractère symbolique permettant, sous réserve d'une expertise en TAL, d'appréhender plus facilement leur machinerie interne, de les adapter après analyse des résultats et d'observer dans la foulée l'impact des modifications.
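Pour fixer les idées, le calcul suivant (valeurs fictives) applique directement les définitions de la précision, du rappel et de la F-mesure données plus haut.

```python
# Illustration numérique des métriques définies plus haut (valeurs fictives) :
# sur 100 entités étiquetées « i » par le système, 80 sont correctes,
# et le corpus de référence contient 120 entités de type « i ».
def precision(corrects: int, etiquetees: int) -> float:
    return corrects / etiquetees

def rappel(corrects: int, attendues: int) -> float:
    return corrects / attendues

def f_mesure(p: float, r: float, beta: float = 1.0) -> float:
    # F1-mesure lorsque beta = 1
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

p = precision(80, 100)   # 0.80
r = rappel(80, 120)      # ~0.67
print(round(p, 2), round(r, 2), round(f_mesure(p, r), 2))  # 0.8 0.67 0.73
```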
Ce cycle d’ingénierie est devenu plus aisé avec l’apparition des boites à outils pour l’EI (voir la section 2.3) dont certaines proposent des modules d’évaluation en temps réel de la chaine d’extraction. A titre d’exemple, le meilleur participant de la dernière campagne MUC pour l’extraction des événements (tâche Scenario Template de MUC-7) est un système symbolique et obtient les scores suivants : 65% de précision, 42% de rappel et 51% de F-mesure. Sur la tâche de REN, [Appelt, 1999] rapporte un taux d’erreur de 30% inférieur pour les approches symboliques comparées aux méthodes statistiques entièrement supervisées (respectivement environ 96% et 56 Copyright c 2013 - CASSIDIAN - All rights reserved2.6. Problèmes ouverts 93% de F-mesure). Les systèmes à base de connaissances sont également les plus performants pour la résolution de coréférence avec des résultats allant jusqu’à 63% de rappel et 72% de précision. De leur côté, les approches statistiques permettent de couvrir de nombreux contextes d’apparition (leur rappel est généralement plus élevé) mais nécessitent une grande quantité de données annotées pour l’apprentissage du modèle sous-jacent. Cela constitue une réelle contrainte car les corpus d’apprentissage sont inégalement disponibles selon la langue ciblée, le domaine/genre des textes, etc. Quelques travaux présentés plus haut s’intéressent à cette problématique en diminuant la supervision nécessaire dans le développement de tels systèmes : c’est le cas du "clustering" ou des techniques de "bootstrapping". En contrepartie, l’apprentissage statistique nécessite peu ou pas d’expertise humaine et des ressources externes limitées. Toutefois, il reste l’inconvénient que ces approches produisent des modèles de type "boite noire" qui restent à l’heure actuelle difficilement accessibles et interprétables. Enfin, les méthodes statistiques ont montré leur efficacité tout particulièrement en contexte bruité, dans le traitement des transcriptions de l’oral, par exemple. La meilleure approche statistique de la campagne CoNLL obtient une F-mesure de 91% en reconnaissance d’entités nommées. Les méthodes d’extraction par apprentissage statistique testées lors du challenge PASCAL 2005 atteignent une F-mesure de 75% pour la tâche d’extraction de relations. Pour finir, nous avons vu l’intérêt récent pour le développement d’approches hybrides afin d’exploiter les points forts des méthodes précédentes. Même si ce type d’approche n’est pas parvenu pour le moment à éviter tous les écueils pointés ci-dessus, la combinaison des techniques symboliques et statistiques présente plusieurs atouts. L’apprentissage symbolique permet, par exemple, de diminuer l’effort de développement des règles d’un système-expert classique tout en augmentant le rappel de ces approches. On peut également opter pour une construction automatique des ressources linguistiques externes dont dé- pendent beaucoup des outils développés. Après analyse des erreurs d’extraction par méthode statistique, il est aussi intéressant de compléter le système par un ensemble de règles pour gérer les cas statistiquement peu fréquents. L’outil LP2 (ayant remporté le challenge PASCAL 2005) implémente une méthode de déduction de règles d’extraction et obtient une F-mesure de près de 90% pour l’extraction de relations. Par ailleurs, dans le domaine médical, le système CRYSTAL montre une précision de 80% et un rappel de 75%. 
Toutes approches confondues, les meilleurs systèmes en extraction de relations obtiennent une Fmesure d’environ 75% sur des données de la campagne ACE (ce score passe à 40% lorsque l’annotation des ENs est automatisée). Pour la tâche de remplissage de formulaires, les systèmes développés montent à une F-mesure de 60% (une annotation humaine obtenant 80%). 2.6 Problèmes ouverts Pour conclure cet état de l’art sur l’extraction d’information, nous souhaitons faire le point sur les différents problèmes et challenges restant à résoudre. En effet, même si des progrès considérables ont été accomplis depuis les débuts de l’EI, un certain nombre de problèmes constituent toujours un réel frein à la commercialisation des systèmes existants [Piskorski and Yangarber, 2013]. Tout d’abord, la plupart des solutions sont développées pour un domaine ou un genre de texte particulier et voient leurs performances décroître rapidement face à des textes différents de ce point de vue. Le même problème survient lorsque les outils sont développés à partir de corpus très homogènes (sur la 57 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 2. Extraction automatique d’information forme ou le contenu) et que ceux-ci sont réutilisés sur d’autres corpus de nature plus variée. Ces limites concernent à la fois les méthodes symboliques et statistiques et nécessitent une ré-adaptation constante des techniques. Il a été montré que les performances d’un système d’extraction peuvent varier de 20% à 40% lors du passage à un nouveau domaine/genre de texte [Nadeau and Sekine, 2007]. Des travaux tels que [Chiticariu et al., 2010] ou [Daumé et al., 2010] s’attachent à faciliter la portabilité des outils d’EI d’un domaine à un autre tout en maintenant de bonnes performances. Un autre enjeu est la réduction de l’effort de développement des systèmes, qu’ils soient symboliques ou statistiques. Outre leurs performances, ce point déterminera la commercialisation à grande échelle de ces outils. Du côté des approches à base de règles, cela est abordé par les travaux en apprentissage symbolique [Cellier and Charnois, 2010] tandis que pour réduire la quantité de données annotées nécessaires aux systèmes statistiques, les chercheurs se tournent vers des techniques comme l’apprentissage dit actif ("active learning") [Culotta et al., 2006] [Thompson et al., 1999]. Plus récemment, une solution prometteuse s’offre aux développeurs de systèmes supervisés et semi-supervisés, il s’agit du crowdsourcing : cette nouvelle pratique consiste à mettre à contribution les internautes pour créer du contenu et constitue un bon moyen pour la création de corpus annotés par exemple [Lofi et al., 2012]. Par ailleurs, on s’intéresse à adapter les méthodes d’extraction actuelles à une classification plus fine des entités nommées (villes, ONG, missiles, etc.) [Sekine et al., 2002] [Fleischman and Hovy, 2002]. En effet, la REN telle qu’elle était étudiée lors des premières campagnes d’évaluation atteint aujourd’hui des performances quasi-égales à celles d’un annotateur humain et ne répond que partiellement au besoin réel des utilisateurs finaux. Au sujet de l’évaluation des technologies, nous pouvons souligner, d’une part, le peu de discussions dans la communauté de l’EI au sujet des métriques d’évaluation. En effet, ces métriques proviennent directement de l’évaluation en recherche d’information et s’avèrent, dans certains cas, peu adaptées à l’évaluation des systèmes d’extraction. 
[Lavelli et al., 2004] propose un résumé critique des méthodologies d'évaluation mises en œuvre depuis les débuts de l'EI. D'autre part, nous pouvons nous demander si les meilleurs résultats obtenus depuis quelques années en EI sont directement issus de l'amélioration des technologies ou s'il s'agit plutôt d'une simplification générale des tâches d'extraction. Un dernier défi provient directement du récent engouement pour le Web Sémantique : il s'agit de tisser des liens entre les communautés de l'extraction d'information et de l'ingénierie des connaissances afin de "sémantiser" les informations extraites. Suivant l'objectif premier du Web Sémantique et du Web de données — favoriser l'émergence de nouvelles connaissances en liant les informations aux connaissances déjà présentes sur la toile — certains travaux s'attèlent à la problématique de création de liens sémantiques entre la sortie des extracteurs et les bases de connaissance existantes (notamment grâce au nommage unique par URI dans les données au format RDF). [Mihalcea and Csomai, 2007], [Ratinov et al., 2011] et [Milne and Witten, 2008] sont des exemples de travaux de ce type dont le but est de lier les informations extraites à des concepts Wikipedia. Nous reviendrons plus amplement sur ces approches au chapitre 3.1.2.

2.7 Conclusions

La réalisation de cet état de l'art sur l'extraction d'information a révélé un domaine de recherche très étudié étant donné sa relative jeunesse : nous avons pu recenser un nombre important d'approches, d'applications possibles, de logiciels et plateformes développés ainsi que de campagnes et projets d'évaluation menés jusqu'à nos jours. Les méthodes développées sont historiquement réparties en deux catégories : les symboliques et les statistiques. Les premières, développées manuellement par des experts de la langue, s'avèrent globalement plus précises, tandis que les secondes, qui réalisent un apprentissage sur une grande quantité de données, présentent généralement un fort taux de rappel. Parallèlement à cela, nous avons constaté une certaine complémentarité des approches existantes (voir la section 2.5.2) non seulement en termes de précision et rappel de façon générale mais également du point de vue des types d'entité ciblés, du genre textuel, du domaine d'application, etc. Il nous parait en conséquence pertinent de proposer un système d'extraction fondé sur la combinaison de plusieurs approches existantes afin de tirer parti de leurs différentes forces. Pour ce faire, les approches par apprentissage symbolique nous paraissent intéressantes car elles s'avèrent faiblement supervisées et plus flexibles que d'autres approches statistiques. Enfin, ce tour d'horizon nous a permis de comparer différents outils et logiciels pour la mise en œuvre de ces approches ainsi que différents jeux de données potentiellement adaptés à l'évaluation de nos travaux.

Chapitre 3 Capitalisation des connaissances

Sommaire
3.1 Fusion de données . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.1.1 Réconciliation de données . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.1.2 Web de données . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.1.3 Similarité entre données . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.2 Capitalisation appliquée aux événements .
. . . . . . . . . . . . . . . . . . . . 66 3.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 61 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 3. Capitalisation des connaissances Lorsque la quantité de documents disponibles dépasse un certain seuil, la problématique essentielle devient d’aider les analystes à analyser cette masse de données et à identifier les informations d’inté- rêt sans avoir à parcourir et synthétiser manuellement l’ensemble des documents. Dans ce contexte, les outils d’EI abordés en chapitre 2 se trouvent limités : en effet, la plupart réalise une analyse de l’information parcellaire, mono-document, mono-genre et mono-langue. De plus, comme nous l’avons montré, ces systèmes sont encore, à l’heure actuelle, imparfaits et, même si les progrès dans ce sens ont été signi- ficatifs depuis des années, les erreurs d’analyse ne sont pas rares. Face à cela, il devient de plus en plus nécessaire d’adopter un point de vue global et de concevoir un système de capitalisation des connaissances permettant à la fois d’extraire les informations d’intérêt à partir d’une masse de documents mais également de valoriser les résultats des extracteurs en assurant leur cohérence (éliminer les redondances, contradictions, créer des liens entres les informations, etc.) [Ji, 2010]. Cette problématique, relativement nouvelle, ne bénéficie pas encore d’un intérêt comparable aux deux premiers axes (chapitres 1 et 2) mais elle est l’objet de recherches dans divers domaines tels que la fusion de données, la réconciliation et le nettoyage de données, le Linked Data, la résolution de coréférence, la détection de similarité entre données, etc. Nous présentons dans ce chapitre quelques-unes des méthodes développées explorant différents angles d’un même axe de recherche, à savoir la capitalisation globale et automatisée des connaissances. 3.1 Fusion de données La fusion de données a fait ses débuts aux États-Unis dans les années 70 et a été beaucoup étudiée jusqu’à nos jours dans des domaines divers tels que les applications militaires, de robotique, de transport ou encore en traitement d’images. L’objectif principal de cette discipline est d’optimiser l’acquisition de connaissances en combinant un ensemble d’informations (souvent imparfaites et hétérogènes) provenant de multiples sources plutôt que de les considérer chacune individuellement. Selon l’application, la fusion de données peut servir, d’une part, à reconstituer une situation le plus fidèlement possible à la réalité ou, d’autre part, à améliorer le processus de prise de décision [Desodt-Lebrun, 1996]. Parallèlement aux données à fusionner, la plupart des méthodes de fusion emploie des informations supplémentaires guidant la combinaison. Celles-ci peuvent provenir des données elles-mêmes ou de sources externes et sont par conséquent potentiellement exprimées dans des formalismes distincts. On distingue généralement plusieurs niveaux de fusion selon le type des informations traitées. Nous retiendrons la distinction principale entre la fusion dite numérique manipulant des informations de bas niveau (provenant essentiellement de capteurs) [Bloch, 2005] et la fusion dite symbolique dédiée aux informations de plus haut niveau. C’est dans le cadre de ce second type de fusion (symbolique) qu’apparaissent les travaux introduisant des techniques issues de l’intelligence artificielle et principalement ce que l’on nomme les systèmes à base de connaissance. 
Ceux-ci sont fondés, d'une part, sur une base de connaissances contenant l'expertise du domaine (les faits ou assertions connus) et, d'autre part, sur un moteur d'inférence permettant de déduire de nouvelles connaissances à partir de l'existant. Les systèmes à base de connaissance peuvent impliquer différents types de traitement sur les données : les raisonnements temporel et spatial, les déductions et inductions logiques, l'apprentissage automatique, diverses techniques de traitement automatique du langage, etc. La fusion symbolique est souvent choisie pour le traitement des données textuelles et trouve de nombreuses applications dans les systèmes automatiques nécessitant une interaction avec l'être humain. Alors que beaucoup de recherches ont été menées dans le cadre de la fusion de données numériques, l'extraction automatique d'information apparait comme une perspective nouvelle permettant d'appliquer ces techniques à un autre type de données et dans un contexte particulièrement incertain et bruité. Ce besoin est fondé sur une constatation principale qui s'avère particulièrement vraie dans le contexte de la veille en sources ouvertes : la même information est rapportée de nombreuses fois par des sources différentes et sous diverses formes. Cela se traduit notamment par l'utilisation de divers vocabulaires et conventions pour exprimer les mêmes données, des informations plus ou moins à jour et complètes selon les sources, des points de vue différents sur les faits, etc. Dans les sections suivantes, nous abordons plusieurs axes de recherche traitant de cette problématique, à savoir la réconciliation des données, le Web de données et la détection de similarité entre données, puis nous nous centrerons sur l'application des méthodes de capitalisation de connaissances à la reconnaissance des événements.

3.1.1 Réconciliation de données

La réconciliation de données est abordée dans la littérature sous de multiples dénominations, provenant de différentes communautés scientifiques et mettant l'accent sur un aspect particulier de cette problématique : des travaux comme ceux de [Winkler et al., 2006] parlent de record linkage (littéralement traduit par "liaison d'enregistrements ou d'entrées"), on trouve également les termes d'appariement d'objets (object matching), réconciliation de référence [Saïs et al., 2009], duplicate record detection (détection de doublons) [Elmagarmid et al., 2007], désambiguïsation d'entités ou encore résolution de co-référence entre entités [Bhattacharya and Getoor, 2007]. Les premiers sont plutôt issus de la communauté des bases de données, tandis que ces derniers sont généralement employés par les chercheurs en intelligence artificielle (TAL, ingénierie des connaissances, Web sémantique, etc.). Le problème de la réconciliation de données a fait ses débuts dans les années 60 avec les travaux de [Newcombe et al., 1959] en génétique puis avec ceux de [Fellegi and Sunter, 1969] pour le traitement de duplicats dans des fichiers démographiques. La capacité à désambiguïser des dénominations polysémiques ou à inférer que deux formes de surface distinctes réfèrent à la même entité est cruciale pour la gestion des bases de données et de connaissances. La méthode la plus simple (mais aussi la moins efficace) pour réconcilier des données est de comparer uniquement leurs représentations textuelles.
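L'esquisse suivante (purement illustrative, fondée sur le module standard difflib de Python et un seuil hypothétique) montre une telle comparaison naïve de représentations textuelles et laisse entrevoir ses limites.

```python
from difflib import SequenceMatcher

# Réconciliation « naïve » par simple comparaison des représentations
# textuelles : un ratio de similarité de chaînes, sans aucune connaissance.
def similarite(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

paires = [("Barack Obama", "B. Obama"),
          ("Barack Obama", "Obama"),
          ("Paris (France)", "Paris (Texas)")]

for a, b in paires:
    # Avec un seuil fixe (hypothétique), des variantes d'une même entité
    # peuvent passer sous le seuil, tandis que deux entités distinctes
    # (ici les deux « Paris ») peuvent le dépasser.
    print(a, "~", b, "->", round(similarite(a, b), 2))
```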
Les premiers travaux dans ce sens réalisent une réconciliation de référence par paires de mentions et fondée sur la représentation de leurs contextes linguistiques en vecteurs de caractéristiques. Différentes mesures de similarité (voir la section 3.1.3) sont ensuite estimées entre ces vecteurs pour les combiner de fa- çon linéaire par moyenne pondérée, par exemple [Dey et al., 1998]. Plus récemment, d’autres travaux proposent un appariement des descriptions de données plus générique mais toujours basé sur des comparaisons locales [Benjelloun et al., 2006] ou suivent une approche globale exploitant les dépendances existant entre les réconciliations [Dong et al., 2005]. Enfin, [Saïs et al., 2009] propose une approche de réconciliation de référence à base de connaissances et non-supervisée qui combine une méthode logique et une technique numérique. Ces travaux permettent d’exploiter la sémantique exprimée par les données et par leur structure par application d’un ensemble d’heuristiques. Dans le domaine du traitement de données textuelles, lorsque cette tâche est réalisée sans liaison à une base de connaissances externe, elle est souvent appelée résolution de co-référence : les mentions d’entités provenant soit d’un même document soit de plusieurs sont regroupées, chaque groupe référant 63 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 3. Capitalisation des connaissances à une seule et même entité réelle. La tâche de résolution de co-référence sur un ensemble de documents a été adressée par plusieurs chercheurs en commençant par [Bagga and Baldwin, 1999]. Ceux-ci se sont attelés au problème de la co-référence inter-documents en comparant, pour chaque paire d’entités dans deux documents distincts, les vecteurs de mots construits à partir de toutes les phrases contenant les mentions des deux entités. Pour aller plus loin, d’autres approches utilisent des modèles probabilistes [Verykios and Elmagarmid, 1999] mais ceux-ci nécessitent une grande quantité de données annotées pour leur apprentissage. [Wacholder et al., 1997] a développé Nominator l’un des premiers systèmes de REN et de co-référence entre entités fondé sur des mesures de similarité entre contextes d’occurrence. Par ailleurs, un ensemble de travaux récents proposent de représenter l’information extraite de plusieurs documents sous la forme d’un réseau [Ji et al., 2009] où les entités d’intérêt sont vues comme les feuilles d’un graphe et peuvent être liées entre elles par différents types de relations statiques. Ces différentes études visent à regrouper toutes les mentions d’une même entité au sein d’une collection de textes. Toutefois, la construction d’une chaine de co-référence entre entités ne suffit pas à la capitalisation des connaissances car il reste nécessaire de rattacher cette chaine à une entité du monde réel. Il s’agit d’une tâche complexe et il s’avère parfois difficile, même pour un lecteur humain, de dé- terminer à quel objet il est fait référence dans un texte. Dans la plupart des cas, celui-ci identifie la référence d’une entité grâce à des indices issus de son contexte textuel mais aussi et surtout en exploitant l’ensemble du savoir qu’il a déjà acquis par des expériences passées. 3.1.2 Web de données Avec l’essor du Web sémantique de nombreuses recherches proposent d’aller plus loin dans la réconciliation de données et de lier les connaissances entre elles pour créer un Web de données ou Linked Open Data (LOD). 
Ce liage d’entités est défini comme l’appariement d’une mention textuelle d’une entité et d’une entrée définie dans une base de connaissances. On distingue 3 défis à dépasser pour cette tâche [Dredze et al., 2010] : – Les variations de forme : une seule et même entité est souvent mentionnée sous diverses formes textuelles telles que des abréviations (JFK pour John Fitzgerald Kennedy), des expressions raccourcies (Obama pour Barack Obama), des orthographes alternatives (Osama, Ussamah ou encore Oussama pour désigner l’ancien dirigeant d’Al-Qaïda) et des alias/pseudonymes (Big Apple pour la ville de New York). – Les ambiguïtés référentielles : une seule et même forme de surface peut correspondre à plusieurs entrées dans une base de connaissances. En effet, de nombreux noms d’entités sont polysémiques (Paris est une ville en France mais aussi au Texas). – L’absence de référent : il peut arriver, particulièrement lorsque la quantité de documents traités est conséquente, que la ou les base(s) de connaissances servant de référentiel ne contiennent pas d’entrée pour une ou plusieurs des entités repérées dans les textes (par exemple, le tout nouveau nom donné par les spécialistes à une catastrophe naturelle). De par sa popularité et son exhaustivité, la base de connaissances Wikipédia a largement été utilisée comme base de référence pour le liage de données : ce cas particulier a même reçu le nom de Wikification. Etant donné une chaine de caractères identifiée dans un texte, l’objectif est de déterminer la page Wikipédia à laquelle cette chaine fait référence. Par exemple, pour la phrase suivante "Votre avion dé- collera de JFK", un système de Wification retournera l’identifiant ❤tt♣✿✴✴❢r✳✇✐❦✐♣❡❞✐❛✳♦r❣✴✇✐❦✐✴ 64 Copyright c 2013 - CASSIDIAN - All rights reserved3.1. Fusion de données ❆✪❈✸✪❆✾r♦♣♦rt❴✐♥t❡r♥❛t✐♦♥❛❧❴❏♦❤♥✲❋✳✲❑❡♥♥❡❞②, correspondant à l’aéroport de New York et non la page du 35ème président des États-Unis, par exemple. Les études existantes sur la Wikification diffèrent en fonction des types de corpus traités et des expressions qu’elles cherchent à lier. Par exemple, certains travaux se focalisent sur la Wikification des entités nommées alors que d’autres visent toutes les expressions d’intérêt en cherchant à reproduire un équivalent de la structure de liens de Wikipédia pour un ensemble de textes donné. Le système Wikifier [Milne and Witten, 2008], par exemple, est fondé sur une approche utilisant les liens entre les articles de Wikipédia en tant que données d’apprentissage. En effet, les liens entre les pages de cette base étant créés manuellement par les éditeurs, ils constituent des données d’apprentissage très sûres pour réaliser des choix de désambiguïsation. Par ailleurs, le prototype LODifier [Augenstein et al., 2012] vise à convertir des textes en langage naturel de tout domaine en données liées. Cette approche incorpore plusieurs méthodes de TAL : les entités nommées sont repérées par un outil de REN, des relations normalisées sont extraites par une analyse sémantique profonde des textes et une méthode de désambiguisation sémantique (Word Sense Disambiguation en anglais) permet de traiter les cas de polysémie. L’outil Wikifier y est utilisé pour améliorer la couverture de la REN mais également pour obtenir des liens vers le Web de données grâce aux identifiants DBPedia proposés. Le sens d’un document est finalement consolidé en un graphe RDF dont les noeuds sont connectés à des bases à large couverture du LOD telles que DBPedia et WordNet. 
Contrairement aux approches précédentes, les travaux de [Dredze et al., 2010] sont facilement adaptables à d’autres bases de connaissances que Wikipédia. Cette approche implémente un apprentissage supervisé fondé sur un ensemble exhaustif de caractéristiques textuelles et s’avère particulièrement efficace dans les cas d’absence de référent. Pour finir, citons l’initiative NLP2RDF 97 qui, également dans l’objectif de créer le Web de données, propose le format d’échange unifié NIF (NLP Interchange Format) afin de favoriser l’interopérabilité des méthodes et systèmes de TAL et l’exploitation de leurs résultats au sein du LOD (via le langage RDF notamment). 3.1.3 Similarité entre données Comme entrevu dans les sections précédentes, les recherches menées en réconciliation de données (au sens large) vont de paire, dans la littérature, avec les travaux conduits autour des calculs de similarité entre ces données. La multitude des calculs de similarité existants faisant que nous ne pourrons les parcourir de façon exhaustive ici, nous choisissons de les présenter par catégories et ce en faisant référence à des états de l’art existants, spécialisés sur cette problématique. Nous avons retenu deux d’entre eux très complets à savoir [Elmagarmid et al., 2007] et [Bilenko et al., 2003]. Les approches de calcul de similarité procèdent généralement en deux étapes : une phase dite de préparation des données suivie d’une phase de fusion des champs référant à une même entité. En effet, hétérogènes du point de vue de leur fond et de leur forme, les données manipulées nécessitent d’être pré-traitées dans le but de les stocker de façon la plus uniforme possible dans les bases de données. Cela consiste généralement à réduire au maximum leur diversité structurelle en les convertissant dans un format commun et normalisé. Vient, dans un second temps, l’étape de comparaison des données entre elles et d’estimation de leur similarité. Une grande quantité de méthodes ont été proposé pour ce faire : celles-ci varient sur plusieurs critères (type de données visé, niveau de comparaison, etc.) qui donnent 97. ❤tt♣✿✴✴♥❧♣✷r❞❢✳♦r❣✴❛❜♦✉t 65 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 3. Capitalisation des connaissances à des catégorisations différentes. [Elmagarmid et al., 2007] distingue les mesures de similarité au niveau caractère (distance d’édition, affine gap distance, distance de Smith-Waterman, distance de Jaro, Q-Grams) et les métriques basées sur les tokens (chaines atomiques, WHIRL, Q-Grams avec TF.IDF). [Bilenko et al., 2003] différencie les mesures statiques (distance d’édition, métrique de Jaro et ses variantes, distances basées sur les tokens et hybrides) de celles à base d’apprentissage (classifieurs SVM entrainés avec des vecteurs de caractéristiques, affine gap distance, modèles d’apprentissage avec distance de Levenstein). Pour finir, nous pouvons citer les recherches de [Moreau et al., 2008] définissant un modèle générique en vue de faciliter la combinaison de différentes mesures de similarité au sein d’un même système. 3.2 Capitalisation appliquée aux événements Selon [Quine, 1985], deux mentions d’événement co-réfèrent si elles partagent les mêmes propriétés et participants. Contrairement à la co-référence entre entités dites simples, celle entre événements s’avère plus complexe principalement car les mentions d’événements présentent des structures linguistiques plus riches et variées que les mentions d’entités simples. 
De plus, le premier type de tâche est réalisé au niveau du mot ou du groupe de mots alors que la résolution de co-référence entre événements doit s’effectuer à un niveau plus élevé (phrase, discours). Une partie des approches existantes pour répondre à ce problème repose sur des méthodes d’apprentissage supervisées explorant diverses caractéristiques linguistiques des textes [Humphreys et al., 1997] [Bagga and Baldwin, 1999] [Naughton et al., 2006]. [Lee et al., 2012], par exemple, propose une approche globale par apprentissage pour la résolution de co-référence, réalisée de façon conjointe entre entités et événements, au sein d’un seul ou de plusieurs documents. Celle-ci est fondée sur une méthode itérative de regroupement exploitant un modèle de régression linéaire appris sur ces données. Toutefois, la résolution de co-référence entre événements impliquant d’explorer une grande quantité de caractéristiques linguistiques, annoter un corpus d’apprentissage pour cette tâche requiert un effort de développement manuel important. De plus, étant donné que ces modèles reposent sur des décisions locales d’appariement, ils ne peuvent généralement pas capturer des relations de co-référence au niveau d’un sujet défini ou sur une collection de plusieurs documents. En réponse à cela, sont créés des systèmes comme Resolver [Yates and Etzioni, 2009] qui permet d’agréger des faits redondants extraits (par l’outil d’extraction d’information TextRunner) grâce à un modèle non-supervisé estimant la probabilité qu’une paire de mentions coréfèrent en fonction de leur contexte d’apparition (exprimé sous forme de n-tuples). Par ailleurs, [Chen and Ji, 2009] propose de représenter les co-références entre événements par un graphe pondéré non-orienté où les nœuds représentent les mentions d’événement et les poids des arêtes correspondent aux scores de co-référence entre deux des mentions. La résolution de co-référence est ensuite réalisée comme un problème de clustering spectral du graphe mais le problème le plus délicat reste l’estimation des similarités en elles-mêmes. Il nous faut noter enfin les travaux de [Khrouf and Troncy, 2012] explorant la problématique de la réconciliation des événements dans le Web de données. En effet, partant du constat que le nuage LOD contient un certain nombre de silos d’événements possédant leurs propres modèles de données, ceux-ci proposent d’aligner cet ensemble de descriptions d’événements grâce à diverses mesures de similarité et de les représenter avec un modèle commun (l’ontologie LODE présentée en section 1.3.1.2). 66 Copyright c 2013 - CASSIDIAN - All rights reservedPour finir, concernant l’évaluation de cette problématique, nous pouvons mentionner la campagne d’évaluation ACE (présentée en section 2.5.1) mettant à disposition des données d’évaluation pour la tâche Event Detection and Recognition (VDR). Toutefois, son utilisation s’avère limitée car cette ressource ne contient que des annotations de co-référence intra-documents et pour un nombre restreint de types d’événements (Life, Movement, Transaction, Business, Conflict, Contact, Personnel and Justice). 3.3 Conclusions La réalisation de cet état de l’art a mis en exergue une suite logique à nos travaux sur l’extraction automatique d’information, à savoir la problématique du passage du texte à la connaissance proprement dite. Comme nous avons pu le voir, celle-ci a donné lieu à diverses recherches au sein de plusieurs communautés de l’IA, chacune d’elles manipulant sa propre terminologie adaptée à ses propres besoins. 
Ses divergences de vocabulaire n’empêchent pas de voir la place importante réservée à la capitalisation des connaissances au sein des recherches actuelles que ce soit en fusion de données, extraction d’information ou Web sémantique. Certains de ces travaux nous paraissent convenir à nos objectifs tels que [Chen and Ji, 2009] avec leur représentation en graphe de l’ensemble des connaissances (bien adaptée aux travaux dans le cadre du Web sémantique et du WebLab notamment), [Khrouf and Troncy, 2012] pour leur approche globale autour de plusieurs bases de connaissances et enfin, les différentes similarités entre données qui peuvent permettre de réconcilier des extractions. Les enseignements tirés de ce tour d’horizon ont été exploités lors de l’élaboration de notre approche d’agrégation des événements (voir le chapitre 6). Ce chapitre clôture notre partie état de l’art sur les différents domaines de recherche abordés par cette thèse. 67 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 3. Capitalisation des connaissances 68 Copyright c 2013 - CASSIDIAN - All rights reservedDeuxième partie Contributions de la thèse 69 Copyright c 2013 - CASSIDIAN - All rights reservedIntroduction Cette seconde partie présente les contributions réalisées durant cette thèse en réponse à notre problé- matique de recherche et de façon adaptée aux conclusions de l’état de l’art réalisé. Nous y proposons, dans le chapitre 4 Modélisation des connaissances du domaine, notre première contribution : un modèle de représentation des connaissances conçu en accord avec les besoins de notre cadre applicatif (le ROSO et le Media mining) et avec les observations faites au chapitre 1 (sur les modèles et approches existantes). Cette première proposition comprend, d’une part, une modélisation des événements en plusieurs dimensions et, d’autre part, une implémentation de ce modèle au sein d’une ontologie de domaine, nommée WOOKIE, élaborée durant nos recherches. Dans un second chapitre (5 Extraction automatique des événements), les contributions liées à notre axe de recherche Extraction d’information seront exposées. Nous commencerons par la conception d’une approche de reconnaissance d’entités nommées pour l’anglais et le français et implémentée grâce à la plateforme GATE. Puis, le cœur du chapitre sera dédié à l’élaboration d’une approche mixte pour l’extraction automatique des événements dans les textes selon le modèle de connaissances défini auparavant. Celle-ci est fondée sur deux techniques actuelles issue de la littérature en extraction d’information : une première méthode symbolique à base de règles linguistiques contextuelles et une seconde fondée sur un apprentissage de patrons d’extraction par fouille de motifs séquentiels fréquents. L’ensemble des méthodes exposées seront accompagnées d’exemples tirés de données réelles afin de faciliter leur compréhension. Enfin, le dernier chapitre de cette partie (6 Agrégation sémantique des événements) sera centré sur un processus d’agrégation sémantique des événements destiné à assurer la création d’un ensemble de connaissances cohérent et d’intérêt pour l’utilisateur. Cela sera réalisé en différentes phases (conformément aux observations faites durant l’état de l’art) : une première étape de normalisation des différentes extractions, suivie d’une approche permettant d’estimer une similarité sémantique multi-niveaux entre événements et un processus d’agrégation sémantique fondé sur une représentation en graphe des connaissances. 
71 Copyright c 2013 - CASSIDIAN - All rights reservedIntroduction 72 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 4 Modélisation des connaissances du domaine Sommaire 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 4.2 Notre modèle d’événement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 4.2.1 La dimension conceptuelle . . . . . . . . . . . . . . . . . . . . . . . . . 75 4.2.2 La dimension temporelle . . . . . . . . . . . . . . . . . . . . . . . . . . 76 4.2.3 La dimension spatiale . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 4.2.4 La dimension agentive . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 4.3 WOOKIE : une ontologie dédiée au ROSO . . . . . . . . . . . . . . . . . . . 78 4.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81 73 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 4. Modélisation des connaissances du domaine 4.1 Introduction Ce chapitre détaille la première contribution proposée durant nos travaux de thèse : une modélisation des événements ainsi qu’une ontologie de domaine nommée WOOKIE. Celles-ci ont été élaborées en fonction des conclusions de notre état de l’art et de façon adaptée à notre problématique, à savoir l’extraction automatique des événements dans le cadre du ROSO. Nous proposons, tout d’abord, le modèle d’événement défini pour servir de guide à l’ensemble de notre processus de capitalisation des connaissances. Un événement est représenté selon quatre dimensions (conceptuelle, temporelle, spatiale et agentive) pour chacune desquelles nous avons défini des propriétés bien spécifiques. Enfin, nous présentons l’élaboration de notre ontologie de domaine au regard de la littérature et en y intégrant notre modélisation des événements. Nous terminons ce chapitre par un bilan des forces et faiblesses de notre contribution. 4.2 Notre modèle d’événement Nous prenons pour point de départ la définition de Krieg-Planque ("un événement est une occurrence perçue comme signifiante dans un certain cadre") qui apparaît bien adaptée à nos travaux. Cette définition restant très théorique, il convient d’expliciter comment un événement est exprimé au sein des dépêches de presse (celles de l’AFP 98, par exemple). Après observation de plusieurs dépêches, celles-ci semblent généralement être centrées sur un événement principal, celui-ci étant le plus souvent résumé dans le titre et explicité tout au long de l’article (en faisant parfois référence à d’autres événements). Cette description de l’événement tout au long de la dépêche, est constituée de plusieurs sous-événements ("mentions d’événement" dans le modèle ACE) qui contribuent à la "mise en intrigue" mentionnée auparavant. Ces mentions d’événements sont généralement composées d’un terme déclencheur (dit aussi "nom d’événement" ou "ancre") associé à une ou plusieurs autres entités d’intérêt ("arguments" dans le modèle ACE) telles que des circonstants spatio-temporels (date et lieu de l’événement) et des participants (acteurs, auxiliaires, instruments, etc.). L’objectif de nos travaux est d’extraire automatiquement les mentions d’événement pertinentes pour notre application pour ensuite agréger celles qui réfèrent à un seul et même événement dans la réalité. 
Afin de proposer une définition formelle des événements, nous nous fondons également sur les travaux de [Saval et al., 2009] décrivant une extension sémantique pour la modélisation d’événements de type catastrophes naturelles. Les auteurs définissent un événement E comme la combinaison d’une propriété sémantique S, d’un intervalle temporel T et d’une entité spatiale SP. Nous adaptons cette modé- lisation à notre problématique en y ajoutant une quatrième dimension A pour représenter les participants impliqués dans un événement et leurs rôles respectifs. Par conséquent, un événement est représenté comme suit : Définition 1. Un événement E est modélisé comme E =< S, T, SP, A > où la propriété sémantique S est le type de l’événement (que nous appellerons dimension conceptuelle), l’intervalle temporel T est la date à laquelle l’événement est survenu, l’entité spatiale SP est le lieu d’occurrence de l’événement et A est l’ensemble des participants impliqués dans E associés avec le(s) rôle(s) qu’ils tiennent dans E. Exemple 1. L’événement exprimé par "M. Dupont a mangé au restaurant Lafayette à Paris en 1999" est représenté comme (Manger, 1999, Paris, M. Dupont). 98. Agence France Presse 74 Copyright c 2013 - CASSIDIAN - All rights reserved4.2. Notre modèle d’événement Les sections suivantes décrivent comment chaque dimension de l’événement est modélisée. Enfin, dans nos travaux, le "cadre" mentionné par Krieg-Planque est défini par l’ontologie de domaine WOOKIE 99 (voir la section 4.3) et plus précisément par la spécification de la classe événement ("Event") en différentes sous-classes et propriétés. Il détermine quels sont les entités et événements d’intérêt pour notre application. 4.2.1 La dimension conceptuelle La dimension conceptuelle S d’un événement correspond au sens véhiculé par le nom porteur de cet événement. En effet, comme le souligne [Neveu and Quéré, 1996], l’interprétation d’un événement dépend étroitement de la sémantique exprimée par les termes employés pour nommer cet événement. Cette dimension équivaut à la propriété sémantique des événements évoquée par [Saval et al., 2009] et représente le type de l’événement, c’est-à-dire sa classe conceptuelle au sein de notre ontologie de domaine. C’est la taxonomie des événements au sein de WOOKIE qui constitue le support principal de cette dimension conceptuelle. Nous avons défini pour notre application environ 20 types d’événement d’intérêt pour le renseignement militaire regroupés sous le concept de MilitaryEvent. Les différentes sous-classes d’événement sont les suivantes : – AttackEvent : tout type d’attaque, – BombingEvent : les attaques par explosifs, – ShootingEvent : les attaques par armes à feu, – CrashEvent : tous les types d’accidents, – DamageEvent : tous les types de dommages matériels, – DeathEvent : les décès humains, – FightingEvent : les combats, – InjureEvent : tout type d’événement entrainant des blessés, – KidnappingEvent : les enlèvements de personnes, – MilitaryOperation : tout type d’opération militaire, – ArrestOperation : les arrestations, – HelpOperation : les opérations d’aide et de secours, – PeaceKeepingOperation : les opérations de maintien de la paix, – SearchOperation : les opérations de recherche, – SurveillanceOperation : les opérations de surveillance, – TrainingOperation : les entrainements, – TroopMovementOperation : les mouvements de troupes, – NuclearEvent : tout type d’événement nucléaire, – TrafficEvent : tout type de trafic illégal. 
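À titre purement illustratif, et sous la forme d’une esquisse hypothétique qui ne préjuge pas de l’implémentation réelle décrite aux chapitres suivants, le modèle quadridimensionnel défini ci-dessus et cette taxonomie peuvent se transcrire ainsi :

# Esquisse illustrative (hypothétique) du modèle d'événement E = <S, T, SP, A>.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Evenement:
    type_conceptuel: str                 # dimension S : classe de l'ontologie, ex. "BombingEvent"
    date: Optional[tuple] = None         # dimension T : granules TUS (annee, mois, jour), cf. 4.2.2 ;
                                         # None tient lieu du symbole "information manquante"
    lieu: Optional[str] = None           # dimension SP : entité spatiale, ex. URI GeoNames, cf. 4.2.3
    participants: List[Tuple[str, Optional[str]]] = field(default_factory=list)  # dimension A : (participant, rôle)

# Exemple 1 du texte : "M. Dupont a mangé au restaurant Lafayette à Paris en 1999"
exemple = Evenement("Manger", (1999, None, None), "Paris", [("M. Dupont", None)])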
Cette taxonomie a été essentiellement constituée en nous inspirant des modélisations existantes dans le domaine, telles que celles présentées en section 1.3.3, mais également par observation des différents types d’événements rapportés dans des dépêches de presse sur des thèmes tels que les guerres en Afghanistan et en Irak ou encore les diverses attaques terroristes dans le monde.
99. Weblab Ontology for Open sources Knowledge and Intelligence Exploitation
4.2.2 La dimension temporelle
Pour la représentation des entités temporelles extraites, nous utilisons le "Time Unit System" (TUS) proposé par [Ladkin, 1987]. Contrairement à la majorité des modèles théoriques dédiés à la logique temporelle [Fisher et al., 2005], ce formalisme s’avère plus applicable à des situations réelles de traitement des entités temporelles, notamment par sa proximité avec les systèmes calendaires communément utilisés. Il s’agit d’une approche hiérarchique et granulaire qui représente toute expression temporelle en un groupe de granules (c’est-à-dire des unités temporelles indivisibles). Un granule (ou unité de temps) est une séquence finie d’entiers organisés selon une hiérarchie linéaire : année, mois, jour, heure, etc. De plus, ce formalisme introduit la notion de BTU (Basic Time Unit) qui correspond au niveau de granularité choisi en fonction de la précision nécessitée par une application (e.g. les jours, les secondes, etc.). Par exemple, si le BTU est fixé au granule heure, chaque unité temporelle sera exprimée comme une séquence d’entiers i telle que : i = [année, mois, jour, heure]. De plus, TUS définit la fonction max3([a1, a2, ..., aj−1]) donnant la valeur maximale possible à la position j pour qu’une séquence temporelle soit valide en tant que date. Cet opérateur est nécessaire car, selon notre actuel système calendaire, le granule jour dépend des granules mois et année. [Ligeza and Bouzid, 2008] définit toutes les valeurs maximales pour le granule jour de la façon suivante :
– max3([g1, 2]) = 29 lorsque g1 est une année bissextile
– max3([g1, 2]) = 28 lorsque g1 est une année non bissextile
– max3([g1, g2]) = 31 quel que soit g1 et lorsque g2 ∈ {1, 3, 5, 7, 8, 10, 12}
– max3([g1, g2]) = 30 quel que soit g1 et lorsque g2 ∈ {4, 6, 9, 11}
Une date est dite légale lorsqu’elle est valide au regard de cet opérateur et, plus généralement, du système calendaire courant ; elle est dite illégale dans le cas contraire. Pour notre application, nous choisissons un BTU jour correspondant à la précision maximale des dates extraites. Par conséquent, toute expression temporelle i aura la forme suivante : i = [année, mois, jour]. Par exemple, [2010, 09, 19] représente un intervalle de temps qui débute le 19 septembre 2010 à minuit et se termine un jour plus tard (ce qui équivaut à un BTU). De plus, les entités temporelles extraites peuvent s’avérer plus ou moins précises. Dans certains cas, les expressions de temps peuvent être imprécises à l’origine (e.g. "en mai 2010") et, dans d’autres cas, l’imprécision peut être causée par une erreur d’extraction. Pour représenter ces entités floues, nous introduisons le symbole ∅ défini comme le manque d’information au sens général. Soit T = [g1, g2, g3] une expression temporelle :
Définition 2. Expression temporelle complète : T est complète lorsque ∀i ∈ {1, 2, 3}, gi ≠ ∅
Définition 3.
Expression temporelle incomplète : T est incomplète lorsque ∃i∈{1,2,3} , gi = ∅ Nous listons ci-dessous toutes les formes possibles que peuvent revêtir les dates extraites une fois exprimées avec le formalisme TUS ainsi que des exemples : – [year, month, day], e.g. [2011, 12, 14] ; – [year, month] e.g. [2011, 12, ∅] ; – [month, day], e.g. [∅, 12, 14] ; – [year] e.g. [2011, ∅, ∅] ; – [month] e.g. [∅, 12, ∅] ; 76 Copyright c 2013 - CASSIDIAN - All rights reserved4.2. Notre modèle d’événement – [day] e.g. [∅, ∅, 14]. Enfin, le modèle TUS introduit l’opérateur convexify permettant la représentation des intervalles temporels convexes. Prenant pour paramètres deux intervalles primaires i et j, convexify(i, j) retourne le plus petit intervalle de temps contenant i et j. Par exemple, convexify([2008], [2011]) correspond à l’intervalle de 3 ans entre le premier jour de l’année 2008 et le dernier jour de 2011. Nous utilisons cet opérateur pour exprimer de façon unifiée les périodes de temps extraites des textes et permettre ainsi des calculs temporels grâce au modèle TUS. Par conséquent, une extraction telle que "from 2001 to 2005" est normalisé sous la forme convexify([2001], [2005]). 4.2.3 La dimension spatiale Pour notre application, nous choisissons de représenter les entités spatiales comme des aires géographiques et d’utiliser les relations topologiques du modèle RCC-8 pour leur agrégation. En effet, ce modèle s’avère mieux adapté à la comparaison d’entités spatiales que ceux à base de points et fondés sur les coordonnées géographiques. Comme dans le cas des entités temporelles, le raisonnement spatial nécessite d’opérer sur des objets non-ambigus et nous devons par conséquent préciser géographiquement tous les lieux extraits par notre système. Dans le cadre du WebLab, nous nous intéressons notamment à la désambiguïsation d’entités spatiales dans le but d’effectuer des traitements plus avancés comme la géolocalisation ou l’inférence spatiale [Caron et al., 2012]. Cette étape est réalisée en associant un identifiant GeoNames 100 unique (une URI) à chaque lieu extrait. Utiliser une base géographique comme GeoNames a plusieurs avantages : tout d’abord, il s’agit d’une base open-source et sémantique, par conséquent bien adaptée à une intégration au sein du WebLab ; de plus, en complément des coordonnées géographiques, cette ressource fournit des relations topologiques entre lieux, comme par exemple des relations d’inclusion. Nous utilisons, plus précisément, les trois propriétés suivantes pour l’agrégation des événements (voir la section 6.3.3) : – la propriété "children" réfère à une inclusion administrative ou physique entre deux entités géographiques ; – la propriété "nearby" relie deux entités qui sont géographiquement proches l’une de l’autre ; – la propriété "neighbour" est utilisée lorsque deux entités géographiques partagent au moins une frontière. La section 6.3.3 détaille comment nous utilisons ces relations topologiques pour l’agrégation des entités spatiales. 4.2.4 La dimension agentive Comme dit précédemment, tous les participants d’un événement et leurs rôles respectifs sont repré- sentés formellement par la dimension A. Nous définissons cette dimension de l’événement comme un ensemble A = (Pi , rj ) où chaque élément est un couple composé d’un participant pi et d’un rôle rj et où i et j ∈ N. Notre modèle ne limite pas la nature du champ "participant" (chaîne de caractères, entité nommée, nom propre/commun, etc.) pour rester le plus générique possible. 
Toutefois, dans notre application un participant correspond concrètement à une entité nommée de type Personne ou Organisation ayant été extraite et liée automatiquement à l’événement dans lequel elle est impliquée. Les différents 100. ❤tt♣✿✴✴✇✇✇✳❣❡♦♥❛♠❡s✳♦r❣✴ 77 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 4. Modélisation des connaissances du domaine types de rôles possibles ne sont également pas restreints ici car cela est fortement corrélé à l’application et aux capacités des extracteurs. Notre méthode d’extraction (voir la section 5.4) n’ayant pas été conçue pour déterminer le rôle joué par un participant dans un événement, cet aspect ne sera pas traité ici. 4.3 WOOKIE : une ontologie dédiée au ROSO Afin de définir précisément quelles sont les informations d’intérêt pour notre application, nous avons développé une ontologie de domaine nommée WOOKIE constituant la base de notre système de capitalisation des connaissances. Celle-ci a été créée de façon collaborative et a pour but de représenter les concepts de base nécessaires aux diverses applications du domaine. Cet aspect collaboratif (grâce notamment au logiciel WebProtégé) constitue un point important car il nous a permis de mettre en commun les visions de plusieurs membres de l’équipe IPCC afin de mieux cerner les besoins des opérationnels du renseignement. De plus, cette ontologie est implémentée au format OWL 101 selon les recommandations du W3C102 pour la représentation des connaissances au sein du Web Sémantique. Notre objectif étant de développer une ontologie à taille raisonnable mais générique pour le renseignement militaire, nous avons tout d’abord mené quelques recherches pour faire le point sur les ontologies existantes. En effet, il nous est apparu intéressant de pouvoir reprendre tout ou partie d’une modélisation déjà disponible. WOOKIE a donc été élaborée suite à un état de l’art approfondi de la littérature du domaine et des ontologies existant à l’heure actuelle (dont une partie est présentée dans le chapitre 1). Nous avons commencé par examiner les ontologies générales (dites "de haut niveau") les plus connues et utilisées telles que SUMO 103, PROTON 104, BFO 105, DOLCE 106 ou encore COSMO 107. Ces ontologies sont modélisées à divers niveaux : elles définissent des concepts de haut niveau ou "meta-concepts" (pouvant servir de base à l’organisation d’encyclopédies par exemple) mais aussi des spécialisations des concepts Lieu et Organisation qui se sont avérées particulièrement intéressantes pour le développement de notre ontologie. Puis, d’autres modélisations plus spécifiques nous ont été utiles, telles que des modélisations spécifiques au domaine militaire (voir la section 1.3.3), des ontologies spécialisées dans la description des évènements (voir la section 1.3). Ces différentes observations ont montré qu’aucune des ontologies trouvées ne correspondaient parfaitement au modèle de connaissances voulu et qu’il n’était donc pas adéquat de reprendre une de ces représentations en l’état. Toutefois, nous nous sommes inspirés de ces modélisations tout au long de la construction de WOOKIE et avons veillé à maintenir des équivalences sémantiques avec les ontologies existantes. Le développement de notre ontologie de domaine a été guidé par les méthodologies de la littérature telles que [Noy and Mcguinness, 2001] [Mizoguchi, 2003a]. Celles-ci nous ont permis d’organiser notre travail en plusieurs étapes que nous détaillons ci-après. 
Nous avons commencé par concevoir une taxonomie de concepts de haut niveau constituant la base de notre modélisation. Pour cela, nous avons accordé une attention particulière aux standards OTAN car ils définissent les objets d’intérêt et leurs propriétés pour le renseignement militaire. Ces standards demeurant trop techniques et détaillés, nous avons fait le choix de nous concentrer sur les catégories de 101. Ontology Web Language, ❤tt♣✿✴✴✇✇✇✳✇✸✳♦r❣✴❚❘✴♦✇❧✲❢❡❛t✉r❡s✴ 102. World Wide Web Consortium, ❤tt♣✿✴✴✇✇✇✳✇✸✳♦r❣✴ 103. Suggested Upper Merged Ontology 104. PROTo ONtology 105. Basic Formal Ontology 106. Descriptive Ontology for Linguistic and Cognitive Engineering 107. Common Semantic MOdel 78 Copyright c 2013 - CASSIDIAN - All rights reserved4.3. WOOKIE : une ontologie dédiée au ROSO l’intelligence définies par le STANAG 2433, connues sous le nom de "pentagramme du renseignement" (voir la figure 4.1). FIGURE 4.1 – Le pentagramme du renseignement Ce pentagramme reprend les éléments centraux du domaine du renseignement militaire et les définit comme ci-dessous. – Le concept Units y est défini comme tout type de rassemblement humain partageant un même objectif, pouvant être hiérarchiquement structuré et divisé en sous-groupes. Il s’agit à la fois des organisations militaires, civiles, criminelles, terroristes, religieuses, etc. – Le concept Equipment désigne toute sorte de matériel destiné à équiper une personne, une organisation ou un lieu pour remplir son rôle. Il peut s’agir d’équipement militaire ou civil, terrestre, aérien, spatial ou sous-marin. – Le concept Places regroupe les points ou espaces terrestres ou spatiaux, naturels ou construits par l’homme, pouvant être désignés par un ensemble de coordonnées géographiques. – Le concept Biographics désigne les individus et décrit un certain nombre de propriétés associées telles que des éléments d’identification, des informations sur la vie sociale et privée, un ensemble de relations avec d’autres individus ou organisations, etc. – Le concept Events décrit toute occurrence d’un élément considéré comme ayant de l’importance. Un événement peut être divisé en plusieurs sous-événements. Toutefois, les standards OTAN ne détaillent pas les sous-classes du pentagramme et les diverses propriétés de classes évoquées doivent être triées et réorganisées. Nous avons donc développé notre ontologie de haut en bas (approche descendante ou "top-down"), en partant des concepts plus généraux vers les plus spécifiques. Nous avons effectué cette spécialisation en conservant les classes intéressantes des autres ontologies observées. Pour ce faire, nous nous sommes inspirés de la méthodologie de construction d’une ontologie proposée par [Noy and Mcguinness, 2001]. La création de sous-classes a été guidée par le contexte du renseignement militaire et ses éléments d’intérêt. La taxonomie complète des classes de WOOKIE est donnée en annexe A. Pour la classe Equipment, nous nous sommes limités à décrire les différents types de véhicules et d’armes. La classe Person 79 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 4. Modélisation des connaissances du domaine est liée par équivalence au concept Person dans l’ontologie FOAF 108 mais n’a pas nécessité plus de pré- cision. En ce qui concerne le concept Unit, nous avons choisi de distinguer deux sous-classes Group et Organisation afin de différencier les groupements de personnes, point important dans le domaine visé. 
La classe Place a également été sous-typée en prenant en compte les besoins militaires, notamment par l’aspect stratégique des sous-classes du concept Infrastructure. Enfin, la modélisation de la classe Event s’est avérée une tâche essentielle compte tenu de l’importance de ces entités dans le renseignement et la veille militaire. Nous avons pour cela réservé plus de temps à la spécification de la classe MilitaryEvent, c’est-à-dire au choix et à l’organisation des différentes sous-classes en prenant en compte les observations préalables. La taxonomie des événements spécifiques au ROSO est présentée en annexe B. Par la suite, parallèlement aux relations hiérarchiques, nous avons liés les concepts de l’ontologie entre eux par des relations sémantiques (object properties). Il s’agit de propriétés ayant pour co-domaine un concept de l’ontologie. Celles-ci ont également été choisies en fonction des besoins du renseignement militaire et en concertation avec les membres de notre équipe. Plusieurs liens sont modélisés entre les personnes tels que des liens familiaux, de connaissance, des liens hiérarchiques, etc. (isFamilyOf, isSpouseOf, isFriendOf, isColleagueOf, etc.). Nous avons également créé des relations entre personnes et organisations (hasEmployee), entre organisations et équipements (producesEquipment, sellsEquipment) ou encore entre lieux et personnes (bornIn, diedIn), etc. Enfin, la classe Event est en relation avec tous les autres éléments du pentagramme conformément à notre modèle d’événement (voir la section 4.2). Ainsi par exemple, un événement implique des participants de type Person ou Unit (involves) ainsi qu’un instrument appartenant à la classe Equipment (hasEquipment) et se déroule dans un lieu associé à la classe Place (takesPlaceAt). Un événement peut également être relié à d’autres événements par des relations d’antécédence, succession, cause, conséquence, etc. (hasAssociatedEvent, causes, follows, etc.). Une quatrième étape a été d’attribuer à chaque classe un ensemble de propriétés permettant de les définir plus précisément en leur associant une valeur particulière. Cette valeur peut être une chaine de caractères, un nombre, une date, un booléen, etc. Comme nous l’avons déjà précisé plus haut, ces attributs sont héréditaires : ceux de la classe-mère sont automatiquement transmis aux classes-filles. Les 5 classes de plus haut niveau possèdent les attributs picture, pour leur associer une image, et alias, qui associé à la propriété rdfs :label permet d’indiquer leur(s) nom(s) alternatif(s). La classe Person possède un certain nombre d’attributs tels que la nationalité, la profession, l’âge, l’adresse postale, l’adresse électronique, les dates de naissance et de décès, etc. La classe Unit est également caractérisée par des adresses postale et électronique ainsi que des coordonnées téléphoniques. La classe Place n’a pas nécessité d’attributs. Le concept Equipment possède lui des attributs essentiellement liés à des caractéristiques techniques telles que la couleur, les dimensions, la vitesse ou encore à des informations d’identification comme la marque, le modèle, la plaque d’immatriculation, l’année de production, etc. Enfin, pour la classe Event, nous avons précisé les dates de début et fin, la durée ainsi que le nombre de victimes et de décès engendrés. La totalité des attributs de concepts est donnée en annexe D). Pour terminer, nous avons précisé les différents propriétés et attributs définis en spécifiant certaines contraintes et axiomes dans l’ontologie WOOKIE. 
Comme permis par la spécification OWL, des liens entre propriétés de type owl :inverseOf ont été implémentés pour indiquer que telle propriété porte un sens contraire à telle autre propriété. Par exemple, la relation isEmployeeOf (qui lie une personne à l’organisation à laquelle elle appartient) est l’inverse de hasEmployee. Par ailleurs, nous avons utilisé, le cas échéant, les restrictions symmetric/assymetric, irreflexive et transitive des modèles OWL et OWL2. La propriété hasFriend est, par exemple, symétrique et non-réflexive. La relation isSuperiorTo entre 108. Friend Of A Friend, ❤tt♣✿✴✴✇✇✇✳❢♦❛❢✲♣r♦❥❡❝t✳♦r❣✴ 80 Copyright c 2013 - CASSIDIAN - All rights reserved4.4. Conclusions deux membres d’une organisation est quant à elle transitive. Enfin, comme mentionné en section 1.3.1.2, WOOKIE intègre des liens sémantiques avec d’autres ontologies sous forme d’axiomes tels que des équivalences de classes (owl :equivalentClass) ou des relations de subsomption (rdfs :subClassOf). Ainsi, le concept Person de WOOKIE est équivalent au concept Person de l’ontologie FOAF et la classe Place de WOOKIE est un sous-type de Feature dans l’ontologie GeoNames. Par ailleurs, le concept d’événement dans notre ontologie équivaut sémantiquement au concept Event de l’ontologie LODE, par exemple. 4.4 Conclusions L’ontologie que nous venons de décrire constitue le modèle de connaissances qui servira de guide à notre approche d’extraction et de capitalisation des connaissances. Les principaux atouts de cette modélisation sont les suivants : tout d’abord, elle se fonde sur des modèles reconnus (ACE et DUL pour les événements et le modèle TUS et les relations topologiques RCC-8 pour la représentation spatiotemporelle). De plus, notre ontologie a été conçue en accord avec les besoins du ROSO (taxonomie de classes et propriétés) et intègre de nombreux liens sémantiques vers d’autres ontologies afin de maintenir une interopérabilité au sein du Web sémantique. Celle-ci présente toutefois quelques limites et nous envisageons des perspectives d’amélioration telles que l’intégration d’une cinquième dimension que nous appellerons "contextuelle" afin de représenter des éléments du contexte linguistique et extra-linguistique (indices de modalité, confiance, temporalité, propagation spatiale, etc.). Par ailleurs, nous souhaitons approfondir la représentation des rôles au sein de la dimension agentive en étudiant, par exemple, le modèle SEM (voir le chapitre 1.3.1.2). 81 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 4. Modélisation des connaissances du domaine 82 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 5 Extraction automatique des événements Sommaire 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 5.2 La plateforme GATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 5.3 Extraction d’entités nommées . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 5.3.1 Composition de la chaine d’extraction . . . . . . . . . . . . . . . . . . . 87 5.3.2 Développement du module de règles linguistiques . . . . . . . . . . . . . 88 5.4 Extraction d’événements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 5.4.1 Approche symbolique . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 5.4.2 Apprentissage de patrons linguistiques . . . . . . . . . . . . . . . . . . . 97 5.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 83 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 5. 
Extraction automatique des événements 5.1 Introduction Une fois notre modèle de connaissances établi, nous proposons de concevoir une approche permettant de reconnaitre automatiquement dans un ensemble de textes les différentes informations d’intérêt pour peupler la base de connaissances (créer des instances des différentes classes de l’ontologie WOOKIE et les liens existants entre ces instances). Ce chapitre présente notre seconde contribution : un système d’extraction automatique d’événements pour la veille en sources ouvertes. Celui-ci ayant été essentiellement élaboré grâce à la plateforme GATE, nos critères de choix ainsi qu’une présentation générale de cet outil sont exposés dans une première partie. Nous décrivons par la suite le développement d’un extracteur d’entités nommées nécessaire à la reconnaissance des événements tels que définis dans le chapitre 4.2. Nous terminons par une présentation de notre système de reconnaissance des événements fondé sur deux méthodes : une approche à base de règles linguistiques, d’une part, et une méthode par apprentissage de motifs fréquents, d’autre part. Les paramètres choisis pour notre application ainsi que les performances de notre système d’EI seront présentés en section 7.2. 5.2 La plateforme GATE GATE est une plateforme open-source implémentée en Java dédiée à l’ingénierie textuelle au sens large [Cunningham et al., 2002]. Créée il y a une vingtaine d’années par les chercheurs de l’université de Sheffield (Royaume-Uni), GATE est largement utilisée par les experts en TAL et dispose d’une grande communauté d’utilisateurs. Cela lui permet de proposer un ensemble de solutions d’aide et de support (forum, liste de diffusion, foire aux questions, wiki, tutoriels, etc.). Ce point a constitué un critère important afin de choisir notre environnement de développement parmi les différentes plateformes et outils présentés en section 2.3. Par ailleurs, GATE propose une chaine d’extraction d’entités nommées pour l’anglais nommée ANNIE composée de différents modules open source. Cette chaine ayant déjà été utilisée au sein de la plateforme WebLab, elle a constitué une première base pour l’élaboration de notre propre système d’extraction d’information. Fonctionnement général L’environnement GATE repose sur le principe de chaines de traitement composées de différents modules (dits "Processing Resources" PR) appliqués successivement sur un ou plusieurs textes (dits "Language Resources" LR). Les LR peuvent être des textes seuls, fournit dans l’interface par copiercoller ou URL, ou des corpus de textes créés manuellement ou importés d’un dossier existant. Produit d’une communauté d’utilisateurs croissante, l’ensemble des PR disponibles est conséquent : ceux-ci sont organisés au sein de plugins thématiques et permettent des traitements variés pour une quinzaine de langues au total. L’utilisateur peut ainsi sélectionner un ensemble de briques logicielles pertinentes pour sa tâche, les paramétrer et les organiser à sa convenance pour construire une chaine de traitement de texte adaptée (voir l’annexe E pour un exemple de chaine de traitement). La majorité des modules de traitement fournit une analyse des textes à traiter par un système d’annotations exprimées au format XML et selon le modèle de la plateforme. Ce système permet à chaque brique de traitement d’exploiter les annotations fournies par les modules précédents. Cela consiste géné- 84 Copyright c 2013 - CASSIDIAN - All rights reserved5.2. 
ralement à associer un ensemble d’attributs à une zone de texte. Ces attributs se présentent sous la forme propriété = "valeur". Voici un exemple d’annotation créée par un composant de segmentation en mots :
Token {category=NNP, kind=word, length=5, orth=upperInitial, string=Obama}
Il s’agit ici d’une annotation de type Token à laquelle sont associés plusieurs attributs comme sa catégorie grammaticale (NNP), son type (word), sa longueur en nombre de caractères (5), sa casse (upperInitial) et la chaîne de caractères correspondante (Obama).
Le formalisme JAPE
Parallèlement aux différents modules d’analyse, la plateforme GATE propose un formalisme d’expression de grammaires contextuelles nommé JAPE (Java Annotation Patterns Engine). Ce formalisme est employé pour la définition de règles au sein des modules de type transducteurs à états finis. Ce système s’avère très utile en extraction d’information car il permet de définir les contextes d’apparition des éléments à extraire pour ensuite les repérer et les annoter dans un ensemble de textes. Le principe est de combiner différentes annotations fournies par les modules précédant le transducteur (tokens, syntagmes, relations syntaxiques, etc.) pour en créer de nouvelles plus complexes (entités nommées, relations, événements, etc.) : cela revient à l’écriture de règles de production et donc à l’élaboration d’une grammaire régulière. Une grammaire dans GATE se décompose en plusieurs phases exécutées consécutivement et formant une cascade d’automates à états finis. Chaque phase correspond à un fichier .jape et peut être constituée d’une ou plusieurs règle(s) écrite(s) selon le formalisme JAPE. Classiquement, ces règles sont divisées en deux blocs : une partie gauche (Left Hand Side ou LHS) définissant un contexte d’annotations à repérer et une partie droite (Right Hand Side ou RHS) contenant les opérations à effectuer sur le corpus à traiter lorsque le contexte LHS y a été repéré. Le lien entre ces deux parties se fait en attribuant des étiquettes à tout ou partie du contexte défini en LHS afin de cibler les annotations apposées en RHS. Pour plus de clarté, prenons l’exemple d’une règle simple :
1 Rule: OrgAcronym
2 (
3   {Organisation}
4   {Token.string == "("}
5   ({Token.orth == "allCaps"}):org
6   {Token.string == ")"}
7 )
8 -->
9 :org.Organisation = {rule = "OrgAcronymRule", kind = "Acronym"}
FIGURE 5.1 – Exemple de règle d’extraction exprimée dans le formalisme JAPE
L’objectif de celle-ci est d’annoter avec le type Organisation tous les acronymes entre parenthèses positionnés après une première annotation de ce même type. La première ligne de cette règle indique le nom qui lui a été donné par son auteur. Les lignes 2 à 7 définissent le motif à repérer dans le texte : les types des annotations sont encadrés par des accolades (e.g. {Organisation}), l’accès aux attributs des annotations se fait sous la forme Annotation.attribut (Token.string, par exemple, permet d’obtenir la valeur de la propriété string associée aux annotations de type Token fournies par un module précédent), (...):org permet d’étiqueter la partie visée du motif pour y référer en partie RHS de la règle. Puis, la flèche (ligne 8) sert de séparateur entre les parties LHS et RHS.
La ligne 9 permet d’attribuer une annotation de type Organisation au segment étiqueté org en partie gauche. Remarquons également en partie droite l’ajout des propriétés rule et kind à l’annotation produite afin d’indiquer quelle règle en est à l’origine et qu’il s’agit d’un acronyme d’organisation. Ces attributs peuvent être librement définis par le développeur de grammaires. Précisons également qu’un système de macros permet de nommer une séquence d’annotations afin de la réutiliser de façon raccourcie dans les règles définies. Enfin, il est possible de gérer l’ordre d’exécution d’un ensemble de règles en choisissant un des différents modes de contrôle proposés par le formalisme JAPE (all, once, appelt, etc.). Quelques modules utilisés dans nos travaux Dans cette section, nous présentons différents modules qui nous ont été utiles pour mettre en oeuvre notre système d’extraction d’information. Tout d’abord, nous devons parler d’ANNIE (A Nearly-New Information Extraction system), une chaine complète dédiée à la reconnaissance d’entités nommées pour l’anglais. Fournie conjointement à la plateforme, cette chaine comprend différents modules d’analyse : 1. Document Reset : supprime toutes les annotations apposées précédemment sur le document, 2. English Tokenizer : découpe le texte en mots (tokens), 3. Gazetteer : repère les éléments contenus dans une liste (gazetteer) et les annote en tant que Lookup, 4. Sentence Splitter : découpe le texte en phrases, 5. POS Tagger : ajoute à l’annotation Token (mise par le tokenizer) une propriété category indiquant la catégorie morpho-syntaxique du mot en question. Il s’agit, ici, d’une version dérivée de l’étiqueteur Brill [Brill, 1992], 6. NE Transducer : un transducteur JAPE définissant un ensemble de règles afin de repérer des entités nommées (Person, Organization, Location, Date, URL, Phone, Mail, Address, etc.), 7. OrthoMatcher : annote les relations de co-référence entre les entités nommées repérées précédemment. ANNIE étant spécialisée pour le traitement de textes anglais, d’autres modules se sont avérés né- cessaires pour l’extraction d’information en français. Nous avons notamment utilisé les modules de découpage en mots et d’étiquetage morpho-syntaxique adaptés pour la langue française (French tokenizer et TreeTagger). De plus, la phase d’extraction d’événements a nécessité l’utilisation de l’analyseur syntaxique Stanford parser fournissant une analyse syntaxique en dépendance des phrases à traiter. Par ailleurs, nos travaux nécessitant un découpage des textes en groupes syntaxiques (nominaux et verbaux), les modules NP Chunker et VP Chunker ont répondu à ce besoin. Enfin, divers modules d’analyse linguistique nous ont été utiles tels qu’un analyseur morphologique ou encore un repérage lexical paramétrable (dit Flexible Gazetteer). 86 Copyright c 2013 - CASSIDIAN - All rights reserved5.3. Extraction d’entités nommées 5.3 Extraction d’entités nommées La phase d’extraction d’entités nommées consiste à mettre en place un système de détection et de typage des entités d’intérêt pour l’anglais et le français. En effet, pour reconnaitre des événements tels qu’ils sont définis dans WOOKIE, notre premier objectif est de repérer les entités de type Person, Unit, Date et Place. Pour ce faire, comme nous l’avons montré dans l’état de l’art correspondant (voir la section 2.2.1), plusieurs types de méthodes ont été explorées : des approches symboliques, statistiques ou encore hybrides. 
Celles-ci ayant montré des performances comparables pour le problème de la REN, notre choix a été guidé par le contexte applicatif de nos travaux. En effet, il apparait important dans le contexte du ROSO d’éviter l’extraction d’informations erronées (ce que l’on nomme plus communément le bruit). Ce critère nous a donc orienté vers le choix d’une approche à base de règles, pour laquelle la littérature a montré une plus grande précision et qui s’avère également plus facilement adaptable à un domaine d’application donné. Notre système de REN a été élaboré selon un processus ascendant et itératif : nous avons collecté un ensemble de dépêches de presse en anglais et français et repéré manuellement des exemples d’entités nommées à extraire. Partant de ces exemples, nous avons construit (par généralisations successives des contextes d’apparition de ces entités) une première version du système, que nous avons appliqué sur ce même jeu de textes pour vérifier la qualité des règles construites (en termes de précision et rappel). Le système a ensuite été modifié pour atteindre la qualité voulue, puis un nouveau corpus de textes a été constitué pour découvrir de nouvelles formes d’entités et ainsi de suite. De par la proximité des langues anglaise et française, cette méthode a été appliquée pour les deux langues et les systèmes d’extraction développés suivent donc les mêmes principes et présentent une structure commune. Nous présentons ci-dessous les différentes étapes d’extraction ainsi que leur implémentation pour l’anglais et le français grâce à la plateforme GATE. 5.3.1 Composition de la chaine d’extraction La chaine d’extraction d’entités nommées est composée de différents modules d’analyse listés et dé- crits ci-après. L’ordre d’exécution de ces modules a son importance car chaque traitement exploite les annotations créées précédemment. Pour cela, nous nous sommes inspirés de la chaine d’extraction ANNIE présentée ci-dessus tout en l’adaptant à notre problématique et à notre représentation des connaissances. La composition de modules obtenue reste la même pour l’anglais et le français bien que certains des modules soient spécifiques à la langue traitée. Notons également que les quatre premières étapes sont réalisées par des briques (reprises en l’état) proposées dans GATE tandis que les deux derniers modules ont été au centre de nos travaux et ont nécessité notre expertise. 1. Réinitialisation du document Ce premier module permet de nettoyer le document traité de toute annotation existante afin de le préparer à l’exécution d’une nouvelle chaine d’extraction. Il est indépendant de la langue. 2. Découpage en mots Ce second module découpe le texte en mots ou plus précisément en tokens. Ce traitement dépend de la langue traitée : nous avons donc utilisé les tokenizers spécifiques à l’anglais et au français fournis dans GATE. 87 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 5. Extraction automatique des événements 3. Découpage en phrases Cette étape permet d’obtenir une annotation par phrase du texte. L’anglais et le français étant des langues proches syntaxiquement, le même module a été utilisé pour ces deux langues. 4. Étiquetage grammatical Également nommée "étiquetage morpho-syntaxique" ou Part-Of-Speech (POS) tagging en anglais, cette phase donne pour chaque token repéré un ensemble d’informations grammaticales telles que la catégorie grammaticale (nom, verbe, adjectif, etc.), le genre, le nombre, etc. 
Ce module est dépendant de la langue considérée : l’analyseur pour l’anglais est celui présent dans la chaine ANNIE tandis que pour le français nous avons choisi l’outil TreeTagger. 5. Repérage lexical Cette étape consiste à repérer dans les textes à traiter un ensemble de mots clés préalablement définis dans des listes nommées gazetteers. Ces mots clés sont généralement des termes communs dont l’occurrence peut indiquer la présence d’une entité nommée. Les gazetteers sont de nature variée : listes de prénoms pour la reconnaissance des noms de personnes, jours de la semaine et noms des mois pour la détection des dates, etc. A chaque liste est associée une étiquette sémantique qui constituera le type de l’annotation apposée. Même s’il s’agit généralement d’une simple projection de lexique, ce module s’avère très utile pour l’extraction des entités nommées en tant que telle effectuée par le module suivant. En effet, les annotations fournies par ce module entrent dans la définition des différents contextes d’apparition des entités nommées (partie gauche des règles). Un exemple de gazetteer pour la détection des personnes en français est présentée en annexe F. 6. Règles d’extraction Ce dernier module d’annotation constitue le cœur du système de REN et contient les règles linguistiques élaborées manuellement, suivant le formalisme JAPE, pour la détection des entités de type Personne, Organisation, Lieu et Date. Nous avons fait le choix, dans le cadre de nos recherches, de ne pas conserver le module de résolution de co-référence Orthomatcher au sein de notre système d’extraction. En effet, nous avons constaté que la précision de ce module n’étant pas suffisamment bonne pour notre cas d’application et que cela détériorait la qualité de l’extraction d’événements (fortement dépendantes de l’extraction d’entités nommées). 5.3.2 Développement du module de règles linguistiques Le module d’extraction constitue le cœur de notre système. L’ensemble des règles linguistiques dé- veloppées sont contextuelles et partagent la forme suivante : regle : contexte → action Elles sont implémentées selon le formalisme JAPE décrit en section 5.2. Conformément à notre cas d’application, nous avons veillé, lors de leur développement, à privilégier la précision au rappel, c’est-à-dire l’extraction d’informations pertinentes pour mieux répondre à l’attente des opérationnels du domaine. Concrètement, cela s’est traduit par la construction de règles linguistiques dont les résultats sont plus sûrs et la mise à l’écart de règles pouvant entrainer de fausses annotations. 88 Copyright c 2013 - CASSIDIAN - All rights reserved5.3. Extraction d’entités nommées Celles-ci sont organisées en plusieurs ensembles, chacun étant spécifique au type d’entité ciblé. Ces ensembles sont implémentés sous la forme d’automates à états finis et l’exécution des règles au sein d’un même ensemble est régie par un système de priorité. Celui-ci permet de déterminer, lorsqu’il y a conflit entre plusieurs règles, quelle est celle qui doit être privilégiée pour annoter une portion de texte. Les différents ensembles de règles sont eux exécutés successivement selon un ordre fixé à l’avance au sein du système, constituant ce que l’on appelle des phases d’extraction. Pour déterminer le meilleur agencement de ces phases au sein du module de règles, nous nous sommes référés aux travaux de [Mikheev, 1999] et plus particulièrement à la gestion d’éventuelles ambiguïtés entre entités. 
Le principe suggéré est le suivant : afin d’éviter toute ambiguïté, il est conseillé d’exécuter en premier les règles les plus sûres (basées essentiellement sur le contexte linguistique), puis de typer les entités encore inconnues grâce aux gazetteers du module précédent et de lever les ambiguïtés restantes dans une phase finale. Nos propres observations nous ont amenés à choisir, en outre, un ordre de détection entre les quatre entités-cibles : nous typons, tout d’abord, les dates, qui ne sont généralement pas confondues avec les autres entités ; vient ensuite une phase de détection des organisations, suivie par les entités de type Personne et, enfin, les noms de lieux. En effet, nous avons observé que les noms d’organisations peuvent inclure des noms de personnes ou de lieux et doivent donc être repérés en priorité afin d’écarter l’ambiguïté. Par ailleurs, certains prénoms présentant une homonymie avec des noms de lieux, les entités de type Personne doivent être extraites avant celles de type Lieu : les premières présentent des formes moins variables et sont donc plus facilement repérables (grâce notamment aux listes de prénoms et de titres personnels). Pour l’extraction des EN en anglais, nous sommes partis de l’existant (la chaîne ANNIE) en modifiant l’ordre des phases selon le principe explicité ci-dessus et en sélectionnant/améliorant les règles les plus pertinentes pour notre application. Dans le cas du français, nous avons construit le système complet en nous appuyant sur la même méthodologie. Nous présentons, pour conclure cette section, quelques exemples de règles et de gazetteers pour chaque type d’extraction.
Extraction des dates
La référence au temps peut s’exprimer de diverses façons et les expressions temporelles revêtent des formes textuelles variées : littérales ("en janvier"), numériques (10/01/2013) ou mixtes ("le 9 janvier"). La littérature distingue les expressions dites "absolues" de celles dites "relatives" : les premières permettent à elles seules de se repérer sur un axe temporel (par exemple, "le 9 janvier 2002" ou "l’année 2010") alors que les secondes ("hier matin", "le 5 mars", etc.) nécessitent pour cela des informations complémentaires provenant du co-texte ou du contexte extra-linguistique. On constate aussi des différences dans l’expression des dates d’une langue à une autre : dans notre cas, les anglophones n’expriment pas les dates comme les francophones. Toutes ces variations rendent la détection automatique des dates complexe et le développement des règles doit être réalisé en tenant compte des spécificités de l’anglais et du français. Voici pour exemple deux règles JAPE extraites de notre système : la première (figure 5.2) permet d’extraire des dates en français, la seconde (figure 5.3) est l’équivalent pour l’anglais (les dates indiquées en commentaire au-dessus de chaque règle sont des exemples pouvant être détectés par celle-ci). Comme mentionné plus haut, il est fait usage de macros (termes en majuscules) pour faciliter la lecture et la maintenance du jeu de règles. De plus, un ensemble de propriétés est adjoint à l’annotation pour faciliter la normalisation des dates (voir la section 6.2).
// lundi 27 avril 2009
// 27 avril 2009
// 1er avril 2006
Rule: DateComplete
(
  (NOM_JOUR)?
  (NB_JOUR):jour
  ({Token.string == "-"})?
  (NOM_MOIS):mois
  (ANNEE):annee
):date
-->
:date.Date = {rule = "DateComplete", startYear = :annee.Token.string, startMonth = :mois.Lookup.value, startDay = :jour.Lookup.value}
FIGURE 5.2 – Règle d’extraction de dates en français
// Wed 10 July, 2000
// Sun, 21 May 2000
// 10th of July, 2000
Rule: DateNameComplete
Priority: 100
(
  (DAY_NAME (COMMA)?)?
  (ORDINAL | DAY_NUM):day
  (MONTH_NAME):month
  (COMMA)?
  (YEAR):year
):date
-->
:date.Date = {rule = "DateNameComplete", startYear = :year.Token.string, startMonth = :month.Lookup.value, startDay = :day.Token.string}
FIGURE 5.3 – Règle d’extraction de dates en anglais
Extraction des organisations
Le système développé permet d’extraire des noms d’organisations divers tels que : NATO, Communist Party of India-Maoist, Ouattara Party, Nations Unies, AFP, Radio Azzatyk, armée de l’Air, etc. Nous utilisons pour cela des gazetteers de mots couramment associés à des noms d’organisations, qu’ils fassent partie de l’entité nommée ou non (la figure 5.4 est un extrait d’un de ces gazetteers pour l’anglais).
FIGURE 5.4 – Extrait du gazetteer org_key.lst
La règle ci-dessous (figure 5.5) utilise un de ces gazetteers afin de reconnaître des noms d’organisations en anglais précédés d’un titre ou d’une fonction de personne tels que the director of the FBI ou the interim vice president of Comcast Business.
Rule: OrgTitle
Priority: 60
(
  {Lookup.majorType == "jobtitle"}
  {Token.string == "of"}
  ({Token.string == "the"})?
  ((UPPER (POSS)?)[1,4] ORG_KEY):org
)
-->
:org.Organisation = {rule = "OrgTitle"}
FIGURE 5.5 – Règle d’extraction d’organisations en anglais
Extraction des personnes
Concernant la reconnaissance des noms de personne, nous avons assez classiquement utilisé des gazetteers de prénoms. Toutefois, face à l’immense variété des prénoms (même en domaine restreint) et aux ambiguïtés possibles, il est nécessaire d’exploiter d’autres indices contextuels tels que les titres ou fonctions personnelles. Ces derniers peuvent être placés en début ou en fin de l’entité nommée ; la figure 5.6 donne quelques exemples d’indices antéposés en anglais.
FIGURE 5.6 – Extrait du gazetteer person_pre.lst
Enfin, la règle suivante (figure 5.7), par exemple, exploite les annotations posées par différents gazetteers (fonctions et nationalités ici) pour l’extraction des noms de personne.
Rule: PersFonction
(
  FONCTION
  (NATIONALITE)?
  (((NP)[1,3]):last):person
)
-->
:person.Person = {rule = "PersFonction"}
FIGURE 5.7 – Règle d’extraction de personnes en français
Extraction des lieux
L’extraction automatique des noms de lieux est, avec celle des organisations, l’une des plus complexes à mettre en œuvre de par la grande diversité formelle et référentielle de ces entités. En effet, les dépêches de presse relatant généralement des faits de façon précise, nous pouvons y trouver des noms de lieux de granularité variable (noms de pays, villes, villages, régions, quartiers, rues, etc.) et exprimés parfois de façon relative ou imprécise (par exemple, eastern Iraq, Asie centrale, au nord de l’Afghanistan). De même que
Extraction d’entités nommées pour la détection des organisations, de nombreuses ressources géographiques sont disponibles librement (gazetteers de toponymes, bases de données géographiques, etc.) et nous avons pu constituer quelques listes de base pour la détection des lieux communément mentionnés (liste des pays, des continents, des capitales, etc.). Il est nécessaire également de fonder nos règles d’extraction sur des indices contextuels tels que des noms communs déclencheurs d’entités géographiques (voir la figure 5.8 pour des exemples en français). FIGURE 5.8 – Extrait du gazetteer loc_key.lst Pour finir, la figure 5.9 est un exemple de règle JAPE exploitant ces indices de contexte : R ule : L o cK e y I n cl P r i o r i t y : 50 ( { Lookup . mino rType == " l o c _ k e y _ i n c l " } (DE ) ? ( ARTICLE ) ? (UPPER ) [ 1 , 3 ] ( ADJLOC ) ? ) : l o c −−> : l o c . L o c a ti o n = { r u l e = " L o cK e y I n cl " } FIGURE 5.9 – Règle d’extraction de lieux en français 93 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 5. Extraction automatique des événements 5.4 Extraction d’événements Nous détaillons, dans cette section, le cœur de notre première contribution à savoir la conception et l’implémentation d’un système d’extraction automatique d’événements tels que nous les avons défi- nis au chapitre 4. L’objectif est donc d’extraire un ensemble d’événements associés aux participants et circonstants suivants : la date de l’événement, son lieu d’occurrence et les entités de type Personne et Organisation impliquées. Celui-ci est constitué de deux extracteurs suivant deux approches distinctes : une méthode symbolique à base de règles linguistiques élaborées manuellement et une approche par apprentissage de motifs séquentiels fréquents. En effet, l’état de l’art réalisé (voir le chapitre 2.2.3) n’ayant pas révélé d’approche nettement supérieure aux autres, la combinaison de plusieurs méthodes parait être la solution la plus pertinente. Élaborer un système composite permet de tirer le meilleur parti des approches actuelles en exploitant la complémentarité des différents types d’approche (statistique, symbolique, etc.). Nous nous sommes tournés, dans le cadre de cette thèse, vers la combinaison d’un système à base de règles et d’un apprentissage symbolique pour plusieurs raisons. Tout d’abord, de par les besoins du ROSO, il est préférable de privilégier l’extraction d’informations fiables et précises et les approches symboliques répondent bien à ce besoin. Par ailleurs, afin d’améliorer les performances de ces techniques, il est apparu intéressant de les combiner avec un système d’apprentissage, dont le rappel est généralement meilleur. Nous nous sommes, dans un premier temps, orientés vers un système statistique tels que les CRFs, cependant après plusieurs recherches, nous n’avions pas à disposition un corpus d’apprentissage adapté (annoté en événements du domaine militaire/sécurité). Cela nous a donc mené vers le choix d’une méthode d’apprentissage faiblement supervisée telle que l’extraction de motifs fréquents qui ne nécessite pas de données annotées avec les entités-cibles. Nous présentons ci-dessous les principes théoriques de chaque approche choisie ainsi que la façon dont nous les avons mises en œuvre pour le traitement de dépêches de presse en langue anglaise. 
La méthode élaborée se veut indépendante de la langue des textes considérés et a été illustrée, dans le cadre de cette thèse, pour des textes en anglais uniquement en raison d’une plus grande disponibilité pour cette langue des outils et données nécessaires. 5.4.1 Approche symbolique La première méthode employée pour la détection d’événements est fondée sur la définition de règles linguistiques contextuelles (du même type que celles présentées en section 5.3) couplée avec une analyse syntaxique des textes. Celle-ci a été implémentée grâce à la plateforme GATE sous la forme d’une chaîne de traitement composée de différents modules d’analyse linguistique (tokenisation, découpage en phrases, repérage lexical, étiquetage grammatical, analyse syntaxique, etc.) et se déroule en plusieurs étapes détaillée ci-après. Repérage des déclencheurs d’événements La première étape consiste à repérer dans les textes à traiter les termes qui réfèrent potentiellement aux événements ciblés, que nous appellerons des "déclencheurs d’événements" (correspondant aux ancres du modèle ACE). Tout d’abord, nous considérons comme possibles déclencheurs d’événements 94 Copyright c 2013 - CASSIDIAN - All rights reserved5.4. Extraction d’événements les verbes et les noms, éléments porteurs de sens. Le repérage de ces déclencheurs se fait par l’utilisation de gazetteers contenant, d’une part, des lemmes verbaux pour les déclencheurs de type verbe et, d’autre part, des lemmes nominaux pour les déclencheurs de type nom. Nous avons chois de constituer des listes de lemmes afin d’obtenir des listes plus courtes et d’étendre le repérage des déclencheurs à toutes les formes fléchies (grâce au module GATE de type flexible gazetteer). Ces déclencheurs (139 lemmes actuellement) ont été manuellement répartis en différentes listes, chacune étant associée à un type d’événement (c’est-à-dire à une classe de notre ontologie) afin d’être repérés et annotés dans le corpus à analyser. Après une phase de découpage en mots, un analyseur morphologique attribue à chaque token son lemme. Nous comparons ensuite chaque lemme aux listes de déclencheurs et, s’ils correspondent, le mot lemmatisé est annoté comme étant un déclencheur d’événement. De plus, on lui associe la classe d’événement qu’il représente. La figure 5.4.1 présente la liste des lemmes verbaux utilisés comme déclencheurs des événements de type BombingEvent (les informations indiquées après le symbole § seront clarifiées par la suite). FIGURE 5.10 – Gazetteer bombings.lst Analyse des dépendances syntaxiques Une fois les déclencheurs d’événement repérés, il nous faut leur associer les différentes entités impliquées pour former l’événement dans son ensemble. Pour cela, nous effectuons, dans un premier temps, une extraction automatique d’entités nommées grâce au système présenté en section 5.3 ainsi qu’une analyse en constituants syntaxiques (syntagmes nominaux NP, verbaux VP, prépositionnels SP ou adjectivaux SA). Nous devons ensuite repérer les différentes relations entre le déclencheur et les entités de la phrase. Nous employons, dans ce but, un analyseur syntaxique donnant les dépendances entre les différents élé- ments phrastiques. En effet, une analyse syntaxique permet d’obtenir une meilleure précision par rapport à l’utilisation d’une analyse dite "par fenêtre de mots" associant un simple découpage en syntagmes (chunking) et des règles contextuelles. 
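À titre d’illustration, l’esquisse Python ci-dessous (purement indicative et indépendante de notre implémentation GATE) montre le principe de ce repérage : un dictionnaire associe à chaque classe d’événement de l’ontologie un ensemble de lemmes déclencheurs, puis chaque lemme du texte est comparé à ces listes. Les lemmes cités et les noms de fonctions sont hypothétiques.

# Esquisse illustrative (hypothétique) : repérage de déclencheurs d'événements
# par comparaison du lemme de chaque token aux listes de lemmes, chaque liste
# étant associée à une classe d'événement de l'ontologie de domaine.
TRIGGER_LEXICON = {
    "BombingEvent": {"bomb", "explode", "blast"},      # lemmes supposés, à titre d'exemple
    "AttackEvent":  {"attack", "assault", "strike"},
}

def detecter_declencheurs(tokens_lemmatises):
    """tokens_lemmatises : liste de couples (forme, lemme).
    Retourne les déclencheurs repérés avec leur classe d'événement."""
    declencheurs = []
    for position, (forme, lemme) in enumerate(tokens_lemmatises):
        for classe, lemmes in TRIGGER_LEXICON.items():
            if lemme.lower() in lemmes:
                declencheurs.append({"position": position, "forme": forme,
                                     "lemme": lemme, "classe": classe})
    return declencheurs

# Exemple d'utilisation :
phrase = [("bombed", "bomb"), ("the", "the"), ("embassy", "embassy")]
print(detecter_declencheurs(phrase))
# [{'position': 0, 'forme': 'bombed', 'lemme': 'bomb', 'classe': 'BombingEvent'}]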
Après avoir examiné les différentes solutions proposées dans la plateforme GATE, nous avons opté pour l’utilisation du Stanford parser [De Marneffe and Manning, 2008]. Ici, nous faisons le choix de n’utiliser que les dépendances principales à savoir les relations sujet, objet, préposition et modifieur de nom. Les dépendances extraites par le Stanford parser se présentent sous la forme de liens entre les éléments centraux du syntagme-recteur et du syntagme-dépendant, plus communément appelés "têtes de syntagme" (voir la figure 5.11). 95 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 5. Extraction automatique des événements FIGURE 5.11 – Exemple d’analyse syntaxique en dépendance Voix active Voix passive Classe 1 sujet = agent, objet = patient sujet = patient, ct. prep. "by" 109 = agent Classe 2 sujet = agent, objet = instrument sujet = instrument, ct. prep. "by" = agent Classe 3 sujet = instrument, objet = patient sujet = patient, ct. prep. "by" = instrument Classe 4 sujet = agent, objet = locatif sujet = locatif, ct. prep. "by" = agent Classe 5 sujet = patient ∅ TABLE 5.1 – Classes argumentales pour l’attribution des rôles sémantiques Attribution des rôles sémantiques La dernière étape consiste à attribuer un rôle sémantique (agent, patient, instrument, etc.) aux diffé- rents participants de l’événement. Cela est rendu possible par une étude de la structure argumentale du verbe ou du nom déclencheur : à savoir déterminer sa valence (nombre d’arguments) et les rôles sémantiques de ses différents actants. Si nous prenons l’exemple des verbes anglais kill et die, nous remarquons qu’ils ont des valences différentes (2 et 1 respectivement) et que leurs sujets n’ont pas le même rôle sé- mantique : le premier sera agent et le second patient. L’attribution de ces rôles sémantiques nécessite d’étudier, en amont, la construction des lemmes présents dans nos gazetteers [François et al., 2007]. Pour cela, nous avons choisi de constituer 5 classes argumentales, chacune d’elles correspondant à un type de construction verbale ou nominale (voir le tableau 5.1). L’attribution des rôles sémantiques est réalisée par un ensemble de règles linguistiques (implémenté sous la forme d’un transducteur JAPE) exploitant les informations fournies par les phases précédentes. La figure 5.12 résume les différentes étapes de notre approche. FIGURE 5.12 – Extraction des événements : différentes étapes L’ensemble de ces traitements ont été implémentés grâce à des modules fournis dans GATE (voir la figure 5.13) et permettent d’obtenir une annotation positionnée sur l’ancre d’événement indiquant le type 96 Copyright c 2013 - CASSIDIAN - All rights reserved5.4. Extraction d’événements de l’événement (sa classe dans l’ontologie de domaine) ainsi que les différentes entités impliquées (date, lieu et participants). Un exemple d’annotation obtenu est fourni par la figure 5.14. FIGURE 5.13 – Extraction des événements : chaine de traitement GATE pour l’anglais FIGURE 5.14 – Extraction des événements : exemple d’annotation GATE 5.4.2 Apprentissage de patrons linguistiques Dans un second temps, nous nous sommes intéressés à l’extraction d’événements par une technique d’extraction de motifs séquentiels fréquents. Ce type d’approche permet d’apprendre automatiquement des patrons linguistiques compréhensibles et modifiables par un expert linguiste. 
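Pour fixer les idées, voici une esquisse en Python (hypothétique) de cette attribution de rôles à partir du tableau 5.1 : connaissant la classe argumentale du déclencheur et la voix de la phrase, chaque relation de dépendance (sujet, objet, complément prépositionnel en "by") est projetée sur un rôle sémantique. Cette esquisse ne reflète pas le détail du transducteur JAPE réellement mis en œuvre.

# Esquisse illustrative (hypothétique) de l'attribution des rôles sémantiques
# selon les classes argumentales du tableau 5.1.
ROLES = {
    # classe : (voix active, voix passive), chaque entrée : dépendance -> rôle
    1: ({"sujet": "agent",      "objet": "patient"},
        {"sujet": "patient",    "prep_by": "agent"}),
    2: ({"sujet": "agent",      "objet": "instrument"},
        {"sujet": "instrument", "prep_by": "agent"}),
    3: ({"sujet": "instrument", "objet": "patient"},
        {"sujet": "patient",    "prep_by": "instrument"}),
    4: ({"sujet": "agent",      "objet": "locatif"},
        {"sujet": "locatif",    "prep_by": "agent"}),
    5: ({"sujet": "patient"},   {}),
}

def attribuer_roles(classe_argumentale, voix, dependances):
    """dependances : dict {dépendance: entité}, ex. {"sujet": "rebels"}."""
    table = ROLES[classe_argumentale][0 if voix == "active" else 1]
    return {table[dep]: entite for dep, entite in dependances.items() if dep in table}

# Exemple : "rebels killed three soldiers" (classe 1, voix active)
print(attribuer_roles(1, "active", {"sujet": "rebels", "objet": "three soldiers"}))
# {'agent': 'rebels', 'patient': 'three soldiers'}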
Présentation de l’approche La découverte de motifs séquentiels a été introduite par [Agrawal et al., 1993] dans le domaine de la fouille de données et adaptée par [Béchet et al., 2012] à l’extraction d’information dans les textes. 97 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 5. Extraction automatique des événements Ceux-ci s’intéressent en particulier à la découverte de motifs séquentiels d’itemsets. Il s’agit de repérer, dans un ensemble de séquences de texte, des enchaînements d’items ayant une fréquence d’apparition supérieure à un seuil donné (dit support). La recherche de ces motifs s’effectue dans une base de sé- quences ordonnées d’itemsets où chaque séquence correspond à une unité de texte. Un itemset est un ensemble d’items décrivant un élément de cette séquence. Un item correspond à une caractéristique particulière de cet élément. Un certain nombre de paramètres peuvent être adaptés selon l’application visée : la nature de la séquence et des items, le nombre d’items, le support, etc. La fouille sur un ensemble de séquences d’itemsets permet l’extraction de motifs combinant plusieurs types d’items et d’obtenir ainsi des patrons génériques, spécifiques ou mixant les informations (ce qui n’est pas permis par les motifs d’items simples). Par exemple, cette technique permet d’extraire les patrons suivants : < t h r e e C h i l e a n s a r r e s t e d n e a r Buenos A i r e s > < NP a r r e s t e d n e a r L o c a ti o n > < NP VB PRP L o c a ti o n > où NP est un syntagme nominal, VB est un verbe, PRP est une préposition et Location est une entité nommée de type Lieu. La phase d’apprentissage permet d’obtenir un ensemble de motifs séquentiels fréquents qui sont ensuite sélectionnés par un expert pour en retenir les plus pertinents pour la tâche d’extraction visée. Les motifs retenus sont alors appliqués sur un le nouveau corpus à analyser, préalablement annoté pour obtenir les différents types d’items considérés. Contrairement à d’autres approches d’EI (présentées en section 2.2), la découverte de motifs séquentiels fréquents ne nécessite ni corpus annoté avec les entités-cibles, ni analyse syntaxique. Cela constitue un réel avantage car, tout d’abord, l’annotation manuelle de corpus reste un effort important et l’analyse syntaxique est encore une technologie aux performances inégales et peu disponible librement selon les langues. Toutefois, le point faible partagé par les méthodes d’apprentissage symbolique reste le nombre important de motifs extraits. Pour pallier ce problème, [Béchet et al., 2012] propose l’ajout de contraintes pour diminuer la quantité de motifs retournés et l’utilisation de l’outil Camelis [Ferré, 2007] pour ordonner et visualiser les motifs des plus généraux aux plus spécifiques puis filtrer les plus pertinents. Application à l’extraction automatique des événements Dans la lignée de ces travaux, nous avons utilisé un outil d’extraction de motifs séquentiels développé au GREYC110 (selon la méthode de [Béchet et al., 2012]). Le système repris présente plusieurs points forts qui justifient ce choix : il permet d’extraire des motifs dits "fermés" (c’est-à-dire non redondants) et génère ainsi moins de motifs que d’autres systèmes. De plus, ce logiciel s’avère robuste et permet la fouille de séquences d’itemsets, fonctionnalité qui est rarement proposée par les outils de fouille de données existants. 
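L’esquisse Python suivante, volontairement simplifiée et sans lien avec l’outil du GREYC mentionné ci-dessus, illustre les notions de séquence d’itemsets et de support : un motif est contenu dans une séquence si chacun de ses itemsets est inclus dans un itemset de la séquence, en respectant l’ordre ; son support est le nombre de séquences de la base qui le contiennent. Les items utilisés dans l’exemple sont fictifs et repris librement des patrons ci-dessus.

# Esquisse illustrative (hypothétique) : base de séquences d'itemsets et
# calcul du support d'un motif séquentiel d'itemsets.
def motif_inclus(motif, sequence):
    """Vrai si chaque itemset du motif est inclus (sous-ensemble) dans un
    itemset de la séquence, dans le même ordre."""
    i = 0
    for itemset in sequence:
        if i < len(motif) and motif[i] <= itemset:
            i += 1
    return i == len(motif)

def support(motif, base):
    """Nombre de séquences de la base contenant le motif."""
    return sum(motif_inclus(motif, seq) for seq in base)

# Chaque mot est décrit par un itemset d'items (forme, catégorie, classe sémantique...).
base = [
    [{"three"}, {"Chileans"}, {"arrested", "VB"}, {"near", "PRP"}, {"Buenos Aires", "Location"}],
    [{"two"}, {"men"}, {"arrested", "VB"}, {"in", "PRP"}, {"Kabul", "Location"}],
]
motif = [{"arrested"}, {"PRP"}, {"Location"}]
print(support(motif, base))   # 2 : le motif est contenu dans les deux séquences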
Notre contribution a donc été d’adapter la technique de fouille de motifs séquentiels à notre domaine d’application et au traitement de dépêches de presse dans le but de générer automatiquement des patrons linguistiques pour la détection d’événements. Nous avons tout d’abord défini un ensemble de paramètres de base pour l’apprentissage : nous choisissons comme séquence la phrase et comme unité de base le 110. ❤tt♣s✿✴✴s❞♠❝✳❣r❡②❝✳❢r 98 Copyright c 2013 - CASSIDIAN - All rights reserved5.5. Conclusions token ainsi qu’un ensemble d’items de mot tels que sa forme fléchie, son lemme, sa catégorie grammaticale. Pour segmenter le corpus d’apprentissage et obtenir ces différentes informations, nous avons employé l’analyseur morpho-syntaxique TreeTagger 111 [Schmid, 1994]. A cela, nous proposons d’ajouter une reconnaissance des entités nommées (de type Personne, Organisation, Lieu et Date) ainsi qu’un repérage lexical des déclencheurs d’événements. Nous pouvons ainsi découvrir des motifs séquentiels fréquents impliquant un déclencheur d’événement et une ou plusieurs entités d’intérêt constituant les participants/circonstants de l’événement en question. Ces deux traitements sont réalisés grâce à la chaine de REN présentée en section 5.3 et aux gazetteers construits pour le premier système symbolique. Enfin, pour réduire le nombre de motifs retournés et faciliter la sélection manuelle de l’expert, nous introduisons un ensemble de contraintes spécifiques à notre application. D’une part, des contraintes linguistiques d’appartenance permettant de filtrer les motifs selon les items qu’ils contiennent. Pour notre application, la contrainte d’appartenance utilisée est de ne retourner que les motifs contenant au minimum un déclencheur d’événement et une entité nommée d’un type d’intérêt (voir plus haut). D’autre part, nous avons employé une contrainte dite de gap [Dong and Pei, 2007], permettant l’extraction de motifs ne contenant pas nécessairement des itemsets consécutifs (contrairement aux n-grammes dont les éléments sont strictement contigus). Un gap d’une valeur maximale n signifie qu’au maximum n itemsets (mots) sont présents entre chaque itemset du motif retourné dans les séquences correspondantes. Par exemple, si la séquence suivante (1) est présente dans le corpus d’apprentissage et que le gap est fixé à 2, le système pourra retourner le motif (2) : (1) [...] a suspected sectarian attack in Mehmoodabad area of Karachi [...] (2) < DT EventTrigger PRP Location PRP Location > où DT est un déterminant, EventTrigger un déclencheur d’événement, PRP une préposition et Location une entité nommée de type Lieu. La contrainte de gap permet, dans cet exemple, de retourner un motif plus générique en omettant la présence des mots "suspected" et "sectarian" si ceux-ci ne sont pas fréquents. Les contraintes de gap et de support (fréquence minimale d’un motif) sont des paramètres à ajuster lors de la phase d’apprentissage. Ce paramétrage a été réalisé pour nos expérimentations et est présenté en partie 7.2.1. Une fois le nombre de motifs extraits diminué grâce à des contraintes, ceux-ci ont été manuellement sélectionnés en fonction de leur pertinence pour la tâche d’extraction des événements. Pour cela, nous utilisons l’outil Camelis [Ferré, 2007] permettant d’ordonner et visualiser les motifs des plus généraux aux plus spécifiques puis de filtrer les plus pertinents. La figure 5.15 présente un aperçu de cet outil. 
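Pour illustrer la contrainte de gap décrite plus haut, voici une esquisse Python (hypothétique, indépendante de l’outil de fouille utilisé) qui vérifie si un motif s’apparie à une séquence d’itemsets en tolérant au plus « gap » mots entre deux éléments consécutifs du motif ; l’exemple reprend la phrase (1) et le motif (2).

# Esquisse illustrative (hypothétique) de la contrainte de gap.
def correspond_avec_gap(motif, sequence, gap):
    def chercher(i_motif, i_seq):
        if i_motif == len(motif):
            return True
        # fenêtre de positions autorisée par le gap (libre pour le premier élément)
        fin = len(sequence) if i_motif == 0 else min(len(sequence), i_seq + gap + 1)
        for j in range(i_seq, fin):
            if motif[i_motif] <= sequence[j] and chercher(i_motif + 1, j + 1):
                return True
        return False
    return chercher(0, 0)

# Exemple inspiré de la phrase (1) : "... a suspected sectarian attack in
# Mehmoodabad area of Karachi ..." avec le motif (2) et un gap de 2.
sequence = [{"a", "DT"}, {"suspected", "JJ"}, {"sectarian", "JJ"},
            {"attack", "EventTrigger"}, {"in", "PRP"}, {"Mehmoodabad", "Location"},
            {"area", "NN"}, {"of", "PRP"}, {"Karachi", "Location"}]
motif = [{"DT"}, {"EventTrigger"}, {"PRP"}, {"Location"}, {"PRP"}, {"Location"}]
print(correspond_avec_gap(motif, sequence, gap=2))   # True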
Les motifs ainsi sélectionnés sont ensuite appliqués sur un nouveau corpus afin d’en extraire les événements-cibles. Dans un souci de réutilisation des systèmes déjà développés et pour faciliter cette dernière étape, nous choisissons d’exprimer les motifs obtenus grâce au formalisme JAPE et obtenons ainsi un nouveau module intégrable dans une chaine de traitement GATE. 5.5 Conclusions Notre seconde contribution détaillée dans ce chapitre est une approche mixte pour l’extraction automatique des événements fondée sur une méthode symbolique à base de grammaires contextuelles et sur 111. ❤tt♣✿✴✴✇✇✇✳✐♠s✳✉♥✐✲st✉tt❣❛rt✳❞❡✴♣r♦❥❡❦t❡✴❝♦r♣❧❡①✴❚r❡❡❚❛❣❣❡r✴ 99 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 5. Extraction automatique des événements FIGURE 5.15 – Visualisation et sélection des motifs avec l’outil Camelis une seconde technique de fouille de motifs séquentiels fréquents. Dans un premier temps, nous avons implémenté un extracteur d’entités nommées sur le modèle des approches symboliques classiques. Celui-ci permet d’annoter au préalable les différentes entités dites simples nécessaires à la reconnaissance des événements. Les résultats de ce premier extracteur ont montré des performances comparables à l’état de l’art bien qu’il pourrait être amélioré en réalisant notamment une extraction des dates dites relatives ou encore des entités d’intérêt autres que les entités nommées (comme l’extraction des équipements par exemple). Les deux méthodes pour l’extraction des événements ont montré leur efficacité lors de l’état de l’art présenté à la section 2.2.3 et nous renvoyons à la section 7.2 pour une évaluation de leurs performances sur un corpus de test de notre domaine d’application. La méthode à base de règles GATE pourra être améliorée en tenant compte d’autres informations fournies par l’analyse syntaxique telles que la voix (passive ou active) du déclencheur, la polarité de la phrase (négative ou positive), la modalité mais aussi les phénomènes de valence multiple. L’approche à base de motifs séquentiels fréquents pourrait également tirer profit de cette analyse syntaxique en intégrant les relations de dépendance produites en tant que nouveaux items ou sous forme de contraintes. Enfin, concernant les deux approches, leur limite principale (qui est aussi celle de beaucoup des approches de la littérature) est qu’ils réalisent l’extraction au niveau phrastique. Une granularité plus large tel que le paragraphe ou le discours pourrait permettre d’améliorer le rappel de ces approches. 100 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 6 Agrégation sémantique des événements Sommaire 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 6.2 Normalisation des entités . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 6.3 Similarité sémantique entre événements . . . . . . . . . . . . . . . . . . . . . 105 6.3.1 Similarité conceptuelle . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 6.3.2 Similarité temporelle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 6.3.3 Similarité spatiale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 6.3.4 Similarité agentive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109 6.4 Processus d’agrégation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 6.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 101 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 6. 
Agrégation sémantique des événements 6.1 Introduction L’état de l’art réalisé ainsi que le développement de notre propre système d’extraction d’information a montré la difficulté de cette tâche mais également et surtout des pistes d’amélioration possibles. Un premier constat est que les outils d’EI existants fournissent des résultats pouvant être incomplets, redondants, flous, conflictuels, imprécis et parfois totalement erronés. Par ailleurs, comme confirmé par nos premières expérimentations (voir la section 7.2), plusieurs approches actuelles s’avèrent complé- mentaires et une combinaison adaptée de leurs résultats pourrait aboutir à une découverte de l’information plus efficace. Dans le cadre de cette thèse, l’objectif visé est d’améliorer la qualité des fiches de connaissances générées et présentées à l’utilisateur afin de faciliter son processus de découverte des informations d’intérêt. Pour ce faire, nous proposons un processus d’agrégation sémantique des résultats issus de la phase d’extraction des événements. Cette troisième contribution vise à produire un ensemble de connaissances cohérent 112 en regroupant entre elles les mentions d’événements référant à un seul et même événement du monde réel. Ce processus est fondé sur des calculs de similarité sémantique et permet d’agréger à la fois des événements provenant des deux extracteurs développés et/ou de différentes sources d’information (agrégation intra- et inter-documents). Dans ce chapitre, nous présentons tout d’abord les mécanismes mis en place pour la normalisation des entités extraites, étape préalable nécessaire au processus d’agrégation. Dans un second temps, sont exposées les différentes méthodes définies pour estimer la similarité des événements au niveau de chacune de leurs dimensions (conceptuelle, temporelle, spatiale et agentive). Enfin, nous détaillons le processus d’agrégation proposé, fondé sur ces mesures de similarité locales. 6.2 Normalisation des entités Notre processus d’agrégation sémantique des événements nécessite de manipuler non plus de simples portions de texte extraites mais des objets sémantiques, des éléments de connaissance à proprement parler. Pour cela, nous proposons une première phase de normalisation visant à désambiguïser les différentes entités impliquées dans les événements dans le but de les rattacher à un objet du monde réel (c’est-à-dire déterminer leur référence). Nous présentons, dans les sections suivantes, les différentes méthodes mises en place pour la normalisation de ces entités. Normalisation des dates La normalisation des entités temporelles est nécessaire en raison des multiples façons possibles d’exprimer le temps en langage naturel ("04/09/2012", "Mon, 09/April/12", "two days ago", etc.). Celle-ci consiste à les convertir en un format unique facilitant leur comparaison et le calcul de leur similarité (voir la section 6.3.2). Conformément à notre modélisation temporelle de l’événement (voir la section 4.2.2), nous avons choisi pour cela le format TUS, définissant une représentation numérique unifiée des dates. Nous avons effectué cette normalisation au sein-même de notre module d’extraction d’information : les règles JAPE en question récupèrent séparément les différents éléments de la date (année, mois et jour, 112. Nous entendons ici par "cohérence" une absence de contradictions et de redondances. 102 Copyright c 2013 - CASSIDIAN - All rights reserved6.2. 
Normalisation des entités les granules dans le modèle TUS) afin de reconstituer sa forme normalisée et d’ajouter cette information sous forme d’attribut aux annotations de type Date produites. Nous avons fait le choix de nous intéresser exclusivement aux dates absolues car la résolution des dates relatives est un sujet de recherche à part entière [Llorens Martínez, 2011] que nous ne pouvons approfondir dans le cadre de cette thèse. En effet, l’extraction de ces dernières nécessite des traitements plus complexes en vue de leur normalisation tels que la résolution d’anaphores temporelles, l’analyse automatique du contexte linguistique d’occurrence, etc. Le tableau 6.1 ci-dessous présente quelques exemples de dates extraites dans des textes en anglais ainsi que leurs formes normalisées par notre outil. Date extraite Date normalisée 1999-03-12 1999-03-12 1948 1948-01-01/12-31 April 4, 1949 1949-04-04 July 1997 1997-07-01/31 03-12-99 1999-12-03 TABLE 6.1 – Normalisation des dates Normalisation des lieux [Garbin and Mani, 2005] ont montré que 40% des toponymes présents dans un corpus de dépêches de l’AFP sont ambigus. Il existe différents types d’ambiguïté : un nom propre peut référer à des entités de types variés (Paris peut faire référence à une ville ou à une personne), deux lieux géographiques peuvent porter le même nom (London est une ville en Angleterre mais aussi au Canada), certaines entités géographiques peuvent également porter plusieurs noms ou en changer au cours du temps, etc. Notre objectif ici est d’associer à chaque entité géographique extraite un référent unique correspondant à une entité du monde réel bien définie. Nous avons pour cela utilisé un service de désambiguïsation des entités géographiques développé dans notre département. Celui-ci opère en attribuant à chaque entité géographique extraite un identifiant de la base sémantique GeoNames. Le projet GeoNames a notamment pour avantages d’être open-source, de définir de nombreux toponymes dans le monde (l’information n’est pas limitée à certaines régions géographiques) et d’intégrer des relations avec d’autres sources de données (permettant des traitements plus avancés si nécessaire). Par ailleurs, ce composant de normalisation des lieux s’inspire des travaux de [Buscaldi, 2010]. L’approche est fondée sur des calculs de cohérence sur l’ensemble des toponymes possibles pour chaque entité géographique extraite. Cette cohérence est définie à partir de trois types de critères : – les distances géographiques entre toponymes ; – les types géographiques des toponymes (continent, ville, rivière ...) ; – les distances entre les entités géographiques au sein du texte considéré. L’intérêt de cette méthode est de pouvoir dégager des groupes de toponymes cohérents entre eux. De plus, elle permet de donner plus d’importance à la notion de qualité administrative par rapport à celle de proximité géographique lorsqu’une dépêche parle, par exemple, de New York, Paris et Londres. 103 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 6. Agrégation sémantique des événements Le code ci-dessous est un extrait des triplets RDF/XML produits par ce service. Les quatre premières lignes lient l’entité extraite identifiée par l’URI http ://weblab.ow2.org/wookie/instances/Place#india et son entité de référence dans la base GeoNames (ayant pour identifiant http ://sws.geonames.org/1269750/). 
Le reste de l’exemple donne quelques-unes des propriétés de l’entité GeoNames Republic of India : sa classe dans l’ontologie Geonames (ligne 15), ses coordonnées géographiques selon le standard WGS84 (lignes 9 et 11), son nom dans GeoNames (ligne 14), etc.

 1  <rdf:Description
 2      rdf:about="http://weblab.ow2.org/wookie/instances/Place#india">
 3    <geo:hasSpatialThing rdf:resource="http://sws.geonames.org/1269750/"/>
 4  </rdf:Description>
 5
 6  <rdf:Description rdf:about="http://sws.geonames.org/1269750/">
 7    <geo:parentFeature rdf:resource="http://sws.geonames.org/6295630/"/>
 8    <geo:parentFeature rdf:resource="http://sws.geonames.org/6255147/"/>
 9    <wgs84:long
10        rdf:datatype="http://www.w3.org/2001/XMLSchema#double">77.0</wgs84:long>
11    <wgs84:lat
12        rdf:datatype="http://www.w3.org/2001/XMLSchema#double">20.0</wgs84:lat>
13    <rdfs:label>Republic of India</rdfs:label>
14    <geo:name>Republic of India</geo:name>
15    <rdf:type rdf:resource="http://www.geonames.org/ontology#Feature"/>
16  </rdf:Description>

FIGURE 6.1 – Désambiguïsation des entités spatiales : exemple de triplets RDF/XML produits

Normalisation des participants

La normalisation des participants est également une problématique complexe de par la diversité des noms de personnes et d’organisations rencontrés dans les dépêches de presse. Une même entité est souvent mentionnée sous de multiples formes telles que des abréviations (Boston Symphony Orchestra vs. BSO), des formes raccourcies (Osama Bin Laden vs. Bin Laden), des orthographes alternatives (Osama vs. Ussamah vs. Oussama), des alias ou pseudonymes (Osama Bin Laden vs. Sheikh Al-Mujahid). De nombreux travaux se sont intéressés à la désambiguïsation de telles entités et particulièrement depuis l’essor du mouvement Linked Data et la mise à disposition de nombreuses bases de connaissances sémantiques sur le Web (voir le chapitre 3). Bien que cette thèse ne nous ait pas permis d’approfondir cette problématique, nous avons retenu, à titre de perspectives, certaines approches théoriques et outils qui pourront être intégrés au sein de notre processus d’agrégation. Nous pouvons citer, parmi les outils disponibles sur le Web, DBPedia Spotlight 113 et Zemanta 114, permettant l’attribution d’un référent unique (provenant d’une base de connaissances externe) à chaque entité d’intérêt repérée dans un texte. Le premier outil est issu d’un projet open source et se présente sous la forme d’un service Web utilisant DBPedia comme base sémantique de référence. Le système Zemanta est lui propriétaire et initialement conçu pour aider les créateurs de contenu Web (éditeurs, blogueurs, etc.) grâce à des suggestions automatiques diverses (images, liens, mots-clés, etc.). Cet outil suggère notamment des liens sémantiques vers des concepts issus de nombreuses bases de connaissances telles que Wikipédia, IMDB, MusicBrainz, etc. Du côté des travaux théoriques, nous avons retenu deux
113. https://github.com/dbpedia-spotlight/dbpedia-spotlight/wiki
114.
❤tt♣✿✴✴❞❡✈❡❧♦♣❡r✳③❡♠❛♥t❛✳❝♦♠✴ 104 Copyright c 2013 - CASSIDIAN - All rights reserved6.3. Similarité sémantique entre événements approches qui nous paraissent adaptées pour réaliser cette étape de normalisation des participants d’un événement. D’une part, [Cucerzan, 2007] propose de réaliser de façon conjointe la reconnaissance des entités nommées et leur désambiguïsation. Pour cette seconde tâche, cette approche emploie une grande quantité d’informations contextuelles et catégorielles automatiquement extraites de la base Wikipédia. Il est notamment fait usage des nombreux liens existants entre les entités de cette base, des pages de désambiguïsation sémantique (redirections) ainsi que d’un modèle vectoriel créé à partir du contexte textuel des entités nommées repérées. Par ailleurs, [Mann and Yarowsky, 2003] présente un ensemble d’algorithmes visant à déterminer le bon référent pour des noms de personne en utilisant des techniques de regroupement non-supervisé et d’extraction automatique de caractéristiques. Les auteurs montrent notamment comment apprendre et utiliser automatiquement l’information biographique contenue dans les textes (date de naissance, profession, affiliation, etc.) afin d’améliorer les résultats du regroupement. Cette technique a montré de bons résultats sur une tâche de désambiguïsation de pseudonymes. 6.3 Similarité sémantique entre événements Nous proposons d’évaluer la similarité entre événements, c’est-à-dire d’estimer à quel degré deux mentions d’événement peuvent référer à un seul et même événement de la réalité. Cette estimation sera utilisée dans notre application pour aider l’utilisateur final à compléter ses connaissances et à prendre des décisions de fusion d’information. Celui-ci pourra, le cas échéant, décider de fusionner deux fiches de connaissances (une interface d’aide à la fusion de fiches a été développée au sein de la plateforme WebLab) qu’il considère référer au même événement réel. Ci-après, nous détaillons les mesures de similarité mises en œuvre pour chacune des dimensions de l’événement et les exprimons selon une échelle qualitative. En effet, une échelle qualitative s’avère plus pertinente pour élaborer un processus de capitalisation orienté utilisateur : étant donné que ces similarités seront présentées à l’analyste, une estimation symbolique sera mieux comprise qu’une similarité numérique. Nous définissons cette échelle comme composée de quatre niveaux : 1. identité (ID) : les deux dimensions réfèrent à la même entité réelle, 2. proximité (PROX) : les deux dimensions ont des caractéristiques communes et pourraient référer à la même entité du monde réel, 3. différence (DIFF) : les deux dimensions ne réfèrent pas à la même entité, 4. manque d’information (MI) : il y a un manque d’information dans l’une ou les deux dimensions (résultant du document d’origine ou du système d’extraction) qui empêche de savoir si elles ré- fèrent ou non à la même entité réelle. Définition 4. Une fonction de similarité sémantique est une fonction : R : E × E → {ID, P ROX, DIF F, MI} où E est un ensemble d’événements et ID, PROX, DIFF et MI représentent respectivement l’identité, la proximité, la différence et le manque d’information. Ces niveaux sont définis et illustrés dans les sections suivantes, à travers différentes fonctions de similarité spécifiques à chaque dimension de l’événement : Rs, Rt , Rsp et Ra. 105 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 6. 
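À titre purement illustratif, l’échelle qualitative de la définition 4 peut se représenter par une simple énumération ; l’esquisse Python ci-dessous (noms hypothétiques) en donne une forme possible, chaque fonction de similarité retournant l’un des quatre niveaux.

# Esquisse illustrative (hypothétique) : échelle qualitative de similarité.
from enum import Enum

class Niveau(Enum):
    ID = "identité"              # même entité du monde réel
    PROX = "proximité"           # caractéristiques communes, coréférence possible
    DIFF = "différence"          # entités distinctes
    MI = "manque d'information"  # information absente de la source ou non extraite

def similarite(e1, e2) -> Niveau:
    """Signature générique d'une fonction R : E x E -> {ID, PROX, DIFF, MI} ;
    le corps, propre à chaque dimension (Rs, Rt, Rsp, Ra), est défini dans les
    sections suivantes."""
    raise NotImplementedError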
Agrégation sémantique des événements 6.3.1 Similarité conceptuelle Soient e = (S, T, SP, A) et e 0 = (S 0 , T0 , SP0 , A0 ) deux événements et S et S 0 leurs dimensions conceptuelles respectives. Définition 5. Rs est une fonction de similarité conceptuelle : Rs : E × E → {ID, P ROX, DIF F} tel que 1. Rs(e, e0 ) = DIF F ssi S et S 0 sont deux classes différentes de l’ontologie et ne sont pas en relation de subsomption l’une avec l’autre 2. Rs(e, e0 ) = P ROX ssi S est une sous-classe (à tout niveau) de S 0 dans l’ontologie (et inversement) 3. Rs(e, e0 ) = ID ssi S et S 0 correspondent à la même classe de l’ontologie de domaine Notons ici que le niveau Manque d’Information (MI) n’est pas défini car la dimension conceptuelle d’un événement est nécessairement fournie par notre système d’extraction. En effet, celui-ci est conçu de façon telle qu’il n’y a pas d’extraction d’événement sans déclencheur d’événement et association à une classe de l’ontologie. Prenons quelques exemples pour illustrer le calcul de cette similarité conceptuelle. Exemple 2. Soient e1, e2, e3 et e4 des événements et S1, S2, S3 et S4 leurs dimensions conceptuelles respectives tel que S1 = AttackEvent, S2 = BombingEvent, S3 = SearchOperation et S4 = AttackEvent. 1. Rs(e1, e3) = DIF F car les classes S1 et S3 ne sont pas en relation de subsomption dans notre ontologie de domaine (voir l’annexe B). 2. Rs(e1, e2) = P ROX car S2 est une sous-classe de S1 dans l’ontologie (voir l’annexe B). 3. Rs(e1, e4) = ID car e1 et e4 sont des instances de la même classe d’événement dans WOOKIE (S1 = S4 = AttackEvent). 6.3.2 Similarité temporelle Soient T = [g1, g2, g3] et T 0 = [g 0 1 , g0 2 , g0 3 ] deux entités temporelles exprimées au formalisme TUS. Définition 6. Nous définissons ⊕, un opérateur de complétion temporelle, prenant en paramètres T et T 0 où T et/ou T 0 est incomplet (voir la section 4.2.2) et retourne une expression temporelle T 00 = [g 00 1 , g00 2 , g00 3 ] où : 1. si gi = g 0 i alors g 00 i = gi = g 0 i 2. si gi = ∅ alors g 00 i = g 0 i (et inversement) Exemple 3. Si T = [∅, 3, 15] et T 0 = [2012, ∅, ∅] alors T 00 = T ⊕ T 0 = [2012, 3, 15] Exemple 4. Si T = [∅, 2, ∅] et T 0 = [2013, ∅, 29] alors T 00 = T ⊕ T 0 = [2013, 2, 29] 106 Copyright c 2013 - CASSIDIAN - All rights reserved6.3. Similarité sémantique entre événements Dans certains cas, deux événements ayant deux dates d’occurrence strictement différentes selon le format TUS (niveau DIFF défini plus bas) peuvent tout de même référer au même événement dans le monde réel. Par exemple, une attaque peut durer plusieurs heures sur deux jours et ce même événement pourra être reporté avec deux dates différentes. Afin de traiter ces situations particulières, nous définissons Dt(T, T0 ), une fonction générique retournant une distance temporelle entre deux entités et un seuil dt délimitant la différence (DIFF) et la proximité (PROX) temporelle. Cette distance peut être obtenue par des librairies dédiées à la manipulation d’entités temporelles et le seuil dt défini en fonction de l’application. Soient e = (S, T, SP, A) et e 0 = (S 0 , T0 , SP0 , A0 ) deux événements, nous définissons Rt au moyen de ces différents opérateurs et seuils. Définition 7. Rt est une fonction de similarité temporelle : Rt : E × E → {ID, P ROX, DIF F, MI} telle que 1. Rt(e, e0 ) = DIF F ssi l’une des conditions suivantes est vérifiée :  ∃i∈{1,2,3} tel que gi 6= ∅ et g 0 i 6= ∅ et gi 6= g 0 i et Dt(T, T0 ) > dt si T ou T 0 est incomplet et T ⊕ T 0 retourne une date illégale (voir la section 4.2.2) 2. 
Rt(e, e0 ) = P ROX ssi l’une des conditions suivantes est vérifiée :  Rt(e, e0 ) = DIF F et Dt(T, T0 ) ≤ dt T ou T 0 est incomplet et T ⊕ T 0 retourne une date légale (voir la section 4.2.2) 3. Rt(e, e0 ) = ID ssi T et T 0 sont complets (voir la section 4.2.2) et ∀i∈{1,2,3} , gi = g 0 i 4. Rt(e, e0 ) = MI ssi T et/ou T 0 est inconnu (c’est-à-dire que soit l’information n’était pas présente dans la source, soit elle n’a pas été extraite par le système d’extraction) Prenons quelques exemples pour illustrer le calcul de cette similarité temporelle. Exemple 5. Soient e1, e2, e3, e4 et e5 des événements et T1, T2, T3, T4 et T5 leurs dimensions temporelles respectives tel que T1 = [2012, 11, 05], T2 = [2013, 11, 05], T3 = ∅, T4 = [2012, 11, ∅] et T5 = [2013, 11, 05]. 1. Rt(e1, e2) = DIF F 2. Rt(e1, e4) = P ROX 3. Rt(e2, e5) = ID 4. Rt(e2, e3) = MI 6.3.3 Similarité spatiale Pour estimer la similarité entre entités géographiques, nous utilisons les relations topologiques du modèle RCC-8 (présenté en section 4.2.3) ainsi que les différentes relations spatiales existantes dans la base GeoNames. Comme mentionné précédemment, la dimension spatiale peut varier avec le point de 107 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 6. Agrégation sémantique des événements vue duquel un événement est rapporté : par exemple, un événement survenu à Cestas (un petit village à côté de Bordeaux) pourra être précisément situé par une agence de presse française mais pourrait être plus généralement localisé à Bordeaux (la grande ville la plus proche) par une agence étrangère. La similarité peut également être influencée par la nature des deux entités spatiales comparées (ville, pays, quartier, continent, etc.) et par la taille de la région administrative les englobant (par exemple, la distance entre deux villes du Lichtenstein ne sera pas considérée de la même façon que celle entre deux villes de Russie). Pour résumer, l’absence d’identité topologique entre deux lieux peut être due à une différence dans le niveau d’abstraction mais ne signifie pas nécessairement une différence spatiale (niveau DIFF). Pour mieux traiter ces cas, nos introduisons Dsp(SP, SP0 ), une fonction générique retournant une distance spatiale entre deux entités géographiques et un seuil dsp délimitant la différence (DIFF) et la proximité (PROX) spatiale. Cette distance peut être obtenue par des librairies dédiées à la manipulation d’entités spatiales et le seuil dsp défini en fonction de l’application. Soient e = (S, T, SP, A) et e 0 = (S 0 , T0 , SP0 , A0 ) deux événements et r une relation RCC-8 entre deux entités spatiales SP et SP0 (voir la section 1.3.2 et la figure 1.11). Définition 8. Rsp est une fonction de similarité spatiale : Rsp : E × E → {ID, P ROX, DIF F, MI} telle que 1. Rsp(e, e0 ) = DIF F ssi r(SP, SP0 ) = DC et Dsp(SP, SP0 ) > dsp 2. Rsp(e, e0 ) = P ROX ssi l’une des conditions suivantes est vérifiée :  r(SP, SP0 ) ∈ {EC, P O, T P P, NT P P, T P P i, NT P P i} r(SP, SP0 ) = DC et Dsp(SP, SP0 ) < dsp 3. Rsp(e, e0 ) = ID ssi r(SP, SP0 ) = EQ 4. Rsp(e, e0 ) = MI ssi SP et/ou SP0 est inconnu (c’est-à-dire que soit l’information n’était pas présente dans la source, soit elle n’a pas été extraite par le système d’extraction) Ces définitions théoriques sont implémentées en utilisant les relations GeoNames comme suit : Définition 9. 1. Rsp(e, e0 ) = DIF F ssi SP et SP0 ne sont liés par aucune relation topologique dans GeoNames et Dsp(SP, SP0 ) > dsp 2. 
Rsp(e, e0 ) = P ROX ssi (a) SP et SP0 sont liés par une relation "nearby", "neighbour" ou "children" dans la base GeoNames (b) SP et SP0 ne sont liés par aucune relation topologique dans GeoNames et Dsp(SP, SP0 ) < dsp 3. Rsp(e, e0 ) = ID ssi SP et SP0 ont le même identifiant GeoNames (c’est à dire la même URI) 4. Rsp(e, e0 ) = MI ssi SP et/ou SP0 est inconnu (c’est-à-dire que soit l’information n’était pas présente dans la source, soit elle n’a pas été extraite par le système d’extraction) Prenons quelques exemples pour illustrer le calcul de cette similarité spatiale. 108 Copyright c 2013 - CASSIDIAN - All rights reserved6.3. Similarité sémantique entre événements Exemple 6. Soient e1, e2, e3, e4 et e5 des événements et SP1, SP2, SP3, SP4 et SP5 leurs dimensions spatiales respectives tel que SP1 = P aris, SP2 = Singapour, SP3 = ∅, SP4 = F rance et SP5 = Singapour. 1. Rsp(e1, e2) = DIF F 2. Rsp(e1, e4) = P ROX 3. Rsp(e2, e5) = ID 4. Rsp(e2, e3) = MI 6.3.4 Similarité agentive Comme mentionné précédemment, l’ensemble des participants d’un événement est composé d’entités nommées de type Personne ou Organisation et ces participants sont concrètement stockés dans une base de connaissances sous la forme de chaines de caractères. Par conséquent, l’agrégation au niveau de cette dimension implique l’utilisation de mesures de similarité dédiées aux chaines de caractères mais aussi adaptées à la comparaison des entités nommées. La distance de Jaro-Winkler [Winkler et al., 2006] convient bien à notre cadre d’application en raison de son temps d’exécution modéré et sa bonne performance dans le traitement des similarités entre noms de personne notamment. Notre approche visant à rester indépendante des mesures de similarité choisies, il conviendra d’évaluer plusieurs métriques au sein de notre système de capitalisation des connaissances pour guider notre choix final et définir le seuil de distance dstr utilisé ci-dessous. Nous définissons Dstr(P, P0 ), une fonction générique retournant une distance entre deux chaines de caractères tel que Dstr(P, P0 ) = 0 signifie que les chaines de caractères représentant les participants P et P 0 sont égales. De plus, comme nous manipulons des entités nommées et non de simples chaines de caractères, nous avons exploré d’autres techniques dédiées à la résolution de coréférence entre entités nommées telles que [Finin et al., 2009]. Dans un premier temps, nous proposons d’utiliser la base sémantique DBpedia 115 pour agréger tous les noms alternatifs référant à la même entité réelle. Nous définissons alt(P) comme une fonction retournant tous les noms alternatifs donnés dans DBpedia pour une participant P. Définition 10. P et P 0 coréfèrent, noté P ' P 0 , ssi P ∈ alt(P 0 ) (et inversement) ou Dstr(P, P0 ) = 0 Par ailleurs, comme mentionné en section 4.2.4, la dimension A est définie comme un ensemble de participants, par conséquent, l’agrégation des participants d’un événement doit permettre de traiter une similarité entre ensembles. Définition 11. A et A0 coréfèrent, noté A ∼= A0 , ssi |A| = |A0 | et ∀P ∈ A, ∃P 0 ∈ A0 tel que P ' P 0 (et inversement) Soient e = (S, T, SP, A) et e 0 = (S 0 , T0 , SP0 , A0 ) deux événements tels que A = {P1, P2, . . . , Pn} et A0 = {P 0 1 , P0 2 , . . . , P0 n}. 115. ❤tt♣✿✴✴❞❜♣❡❞✐❛✳♦r❣✴ 109 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 6. Agrégation sémantique des événements Définition 12. Ra est une fonction de similarité agentive : Ra : E × E → {ID, P ROX, DIF F, MI} tel que 1. 
Ra(e, e0 ) = DIF F ssi ∀P ∈A et ∀P0∈A0, P 6∈ alt(P 0 ) (et inversement) et Dstr(P, P0 ) > dstr 2. Ra(e, e0 ) = P ROX ssi l’une des conditions suivantes est vérifiée :  ∀P ∈A, ∃P0∈A0 tel que P ' P 0 ∀P ∈A, ∃P0∈A0 tel que Dstr(P, P0 ) ≤ dstr 3. Ra(e, e0 ) = ID ssi A ∼= A0 4. Ra(e, e0 ) = MI ssi |A| = 0 et/ou |A0 | = 0 (c’est-à-dire que soit l’information n’était pas présente dans la source, soit elle n’a pas été extraite par le système d’extraction) Prenons quelques exemples pour illustrer le calcul de cette similarité agentive. Exemple 7. Soient e1, e2, e3, e4 et e5 des événements et A1, A2, A3, A4 et A5 leurs dimensions agentives respectives tel que A1 = {Elmer Eusebio}, A2 = {Police}, A3 = {∅}, A4 = {E. Eusebio} et A5 = {Police}. 1. Ra(e1, e2) = DIF F 2. Ra(e1, e4) = P ROX 3. Ra(e2, e5) = ID 4. Ra(e2, e3) = MI 6.4 Processus d’agrégation Nous proposons un processus d’agrégation fondé sur une représentation en graphe : l’ensemble des similarités calculées est organisé en un graphe non-orienté G = (E, edges), où E est l’ensemble des sommets et chaque sommet e ∈ E est un événement extrait, et où edges est l’ensemble des arêtes et chaque arête edge est un lien entre deux sommets e et e0 défini comme une fonction de similarité multivaluée. Définition 13. La fonction edge a pour domaine de définition : edge : E × E → R où R = Rs × Rt × Rsp × Ra correspondant aux fonctions de similarité définies précédemment. Définition 14. La fonction edge est définie telle que : edge(e, e0) = hRs(e, e0), Rt(e, e0), Rsp(e, e0), Ra(e, e0)i 110 Copyright c 2013 - CASSIDIAN - All rights reserved6.4. Processus d’agrégation Cette représentation est bien adaptée à notre cadre d’application (la plateforme WebLab et les technologies du Web sémantique au sens large), car la connaissance provenant des différents services de traitement de l’information est stockée dans des bases de connaissances sémantiques fondées sur les graphes. Le graphe G est construit selon le principe suivant : chaque événement extrait (avec l’ensemble de ses dimensions) est comparé deux à deux à chacun des autres événements pour déduire un degré de similarité au niveau de chaque dimension (Rs, Rt , Rsp et Ra) (selon les principes théoriques exposés dans les sections précédentes). Nous créons ensuite dans G l’ensemble des sommets E correspondant aux événements extraits ainsi qu’une arête multivaluée de similarité edge entre chaque paire d’événements. Les similarités obtenues pour chacune des dimensions (conceptuelle, temporelle, spatiale et agentive) constituent un ensemble d’indicateurs que nous souhaitons exploiter afin de déterminer si des événements co-réfèrent. Il est communément admis dans les applications de veille en sources ouvertes que l’information manipulée par les analystes est incertaine à plusieurs niveaux : l’information en elle-même (sa véracité, sa fiabilité, etc.), la source, les traitements opérés, etc. Partant de ce constat, nous voulons laisser à l’utilisateur le choix final de fusionner (ou non) deux événements jugés similaires, et cela dans le but d’éviter toute perte d’information pouvant survenir avec une fusion entièrement automatisée. L’objectif de nos travaux est en effet, non pas de concevoir un système de fusion de fiches à proprement parler, mais plutôt d’aider l’analyste dans sa prise de décision grâce à un processus d’agrégation sémantique des événements. 
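L’esquisse Python ci-dessous (hypothétique ; la bibliothèque networkx y est un choix d’implémentation supposé, et non celui retenu dans la plateforme WebLab) illustre la construction de ce graphe : chaque paire d’événements est reliée par une arête portant le quadruplet (Rs, Rt, Rsp, Ra), et l’on peut ensuite, par exemple, isoler le sous-graphe des arêtes dont les quatre similarités valent ID.

# Esquisse illustrative (hypothétique) du graphe de similarités : les sommets
# sont les événements extraits (identifiants hashables) et chaque arête porte
# le quadruplet de similarités (Rs, Rt, Rsp, Ra).
from itertools import combinations
import networkx as nx  # choix d'implémentation supposé

def construire_graphe(evenements, Rs, Rt, Rsp, Ra):
    """Rs, Rt, Rsp, Ra : fonctions retournant "ID", "PROX", "DIFF" ou "MI"."""
    G = nx.Graph()
    G.add_nodes_from(evenements)
    for e1, e2 in combinations(evenements, 2):
        G.add_edge(e1, e2, similarites=(Rs(e1, e2), Rt(e1, e2), Rsp(e1, e2), Ra(e1, e2)))
    return G

def sous_graphe_identites(G):
    """Sous-graphe ne conservant que les arêtes dont les quatre similarités
    valent "ID" (la combinaison la plus stricte)."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from((u, v) for u, v, d in G.edges(data=True)
                     if d["similarites"] == ("ID", "ID", "ID", "ID"))
    return H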
Pour ce faire, nous proposons d’appliquer un regroupement automatique (clustering) sur le graphe de similarités obtenu afin de guider l’analyste au sein de cet ensemble de connaissances. Différentes combinaisons de similarités sont possibles pour réaliser ce regroupement en fonction des besoins de l’utilisateur. Nous proposons, dans un premier temps, de hiérarchiser l’ensemble de ces configurations selon leur degré de similarité global (en commençant par celle qui lie les événements les plus similaires) de la manière suivante : Configuration (C1) {isConceptuallyID, isT emporallyID, isSpatiallyID, isAgentivelyID} Configuration (C2a) {isConceptuallyID, isT emporallyID, isSpatiallyID, isAgentivelyP ROX} Configuration (C2b) {isConceptuallyID, isT emporallyID, isSpatiallyP ROX, isAgentivelyID} Configuration (C2c) {isConceptuallyID, isT emporallyP ROX, isSpatiallyID, isAgentivelyID} Configuration (C2d) {isConceptuallyP ROX, isT emporallyID, isSpatiallyID, isAgentivelyID} Configuration (C3a) {isConceptuallyID, isT emporallyID, isSpatiallyP ROX, isAgentivelyP ROX} Configuration (Cn) etc. Intuitivement, la configuration C1 parait la plus à même de regrouper entre eux les événements qui co-réfèrent. Toutefois, l’incomplétude et l’hétérogénéité des données extraites est telle que cette première configuration est peu fréquemment observée dans une application réelle. De sorte que, si l’analyste souhaite retrouver dans sa base de connaissances des événements similaires, il sera plus pertinent d’explorer d’autres configurations d’agrégation. Pour cela, nous proposons de réaliser un regroupement ascendant et hiérarchique du graphe de similarités fondé sur les différentes configurations de similarités possibles. Le processus de regroupement se déroule ainsi : 111 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 6. Agrégation sémantique des événements 1. Nous appliquons une première passe de regroupement selon la première configuration de notre hiérarchie C1 afin d’obtenir un premier jeu de n agrégats d’événements Ω. Ainsi, chaque sommet e ∈ Ωi est lié à tous les autres sommets de Ωi par une arête edgeC1 satisfaisant la configuration C1. Par ailleurs, chaque agrégat Ωi ainsi formé possède la caractéristique suivante : l’ensemble des arêtes edgeCn (présentes avant le premier regroupement) reliant un sommet e ∈ Ωi à un autre sommet e0 ∈ Ωj sera de même configuration de similarités. 2. Nous pouvons donc fusionner cet ensemble d’arêtes en une nouvelle arête edgeCx où Cx correspond à cette configuration commune et relie l’agrégat Ωi au sommet e ∈ Ωj . 3. Une fois ce nouvel ensemble d’arêtes créé, nous proposons d’effectuer de nouveau un regroupement selon une nouvelle configuration Cx (la suivante dans la hiérarchie) et ainsi de suite. Ce processus de regroupements successifs n’a pas vocation à être réalisé jusqu’à épuisement des configurations possibles. En effet, afin de proposer un environnement d’aide à la décision flexible et adaptable, nous envisageons de laisser l’analyste libre du nombre de regroupements qu’il souhaite effectuer ainsi que de définir ses propres configurations de regroupement. Cela sera réalisé de la façon suivante : les différents agrégats d’événements constitués après chaque phase de regroupement seront rendus accessibles à l’utilisateur au travers de diverses interfaces de la plateforme WebLab (recherche, carte géographique, bandeau temporel et autres vues). 
Celui-ci pourra observer les différents agrégats proposés et décider en fonction de cela du prochain regroupement à effectuer. Par exemple, l’utilisateur pourra demander au système de lui renvoyer tous les événements jugés similaires seulement sur le plan temporel (niveau PROX) et ensuite examiner les fiches d’événements correspondantes pour en déduire de nouvelles connaissances. Ce processus d’agrégation itératif et interactif sera illustré dans la section 7.3 de nos expérimentations. 6.5 Conclusions Ce chapitre nous a permis de présenter un processus d’agrégation sémantique des événements fondé sur une échelle de similarité qualitative et sur un ensemble de mesures spécifiques à chaque type de dimension. Nous avons tout d’abord proposé des mécanismes de normalisation des entités, adaptés à leurs natures, afin d’harmoniser formellement les différentes informations extraites. Concernant ce premier aspect, nous envisageons des améliorations futures telles que la désambiguïsation des dates relatives, par exemple, ou encore l’intégration au sein de notre système d’un outil de désambiguïsation des participants (tels que ceux mentionnés en section 6.2). Nous avons ensuite proposé un une échelle de similarité qualitative orientée utilisateur et un ensemble de calculs de similarité intégrant à la fois un modèle théorique adapté à chaque dimension et une implémentation technique employant les technologies du Web sémantique (ontologies et bases de connaissances externes). Nous souhaitons poursuivre ce travail en élargissant le panel de similarités employées comme en intégrant des mesures de proximité ontologique plus sophistiquées ainsi des outils de distance temporelle et spatiale (fondés sur les coordonnées géographiques par exemple) et pour la similarité agentive, une distance dédiée aux ensembles telle que SoftJaccard [Largeron et al., 2009]. L’ensemble des similarités calculées pourraient également provenir d’une fonction d’équivalence apprise automatiquement à partir de données annotées manuellement. Enfin, un processus d’agrégation fondé sur les graphes a été proposé afin de regrouper les mentions d’événements similaires et de permettre à l’analyste de découvrir de nouvelles connaissances. Ce type d’agrégation possède l’avantage principal d’être intrinsèquement adapté aux traitement des bases de connaissances et 112 Copyright c 2013 - CASSIDIAN - All rights reserved6.5. Conclusions ainsi aisément généralisable à d’autres silos du Web de données. Cette agrégation pourrait également être réalisée par calcul d’une similarité globale qui combinerait les différentes similarités locales. Le point délicat pour cette méthode sera alors d’estimer le poids de chaque dimension dans cette combinaison. Enfin, la possibilité donnée à l’utilisateur de fusionner des fiches grâce aux suggestions d’agrégats soulèvera d’autres problématiques à explorer telles que la mise à jour et le maintien de cohérence de la base de connaissance en fonction des actions de l’analyste. 113 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 6. Agrégation sémantique des événements 114 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 7 Expérimentations et résultats Sommaire 7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 7.2 Évaluation du système d’extraction . . . . . . . . . . . . . . . . . . . . . . . . 116 7.2.1 Protocole d’évaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 7.2.2 Analyse des résultats . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . 119 7.2.3 Bilan de l’évaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 7.3 Premières expérimentions sur l’agrégation sémantique . . . . . . . . . . . . . 122 7.3.1 Implémentation d’un prototype . . . . . . . . . . . . . . . . . . . . . . . 122 7.3.2 Jeu de données . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 7.3.3 Exemples d’observations . . . . . . . . . . . . . . . . . . . . . . . . . . 125 7.3.4 Bilan de l’expérimentation . . . . . . . . . . . . . . . . . . . . . . . . . 128 7.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 115 Copyright c 2013 - CASSIDIAN - All rights reservedChapitre 7. Expérimentations et résultats 7.1 Introduction Ce dernier chapitre présente les expérimentations réalisées durant cette thèse afin d’évaluer l’apport scientifique et technique de nos contributions. Nous commençons par une évaluation du système d’extraction d’événements conçu et implémenté tel que décrit dans le chapitre 5. Est ensuite détaillée notre seconde expérimentation appliquée cette fois-ci à l’évaluation du processus d’agrégation sémantique des événements (voir le chapitre 6). Pour chacune des évaluations, nous décrivons, tout d’abord, le système évalué et les différents corpus et paramètres utilisés pour l’expérimentation. Puis, nous présentons une analyse qualitative et/ou quantitative (par différentes métriques) des résultats obtenus. Enfin, nous exposons les différentes limites et perspectives de ces évaluations. 7.2 Évaluation du système d’extraction La première expérimentation vise à évaluer l’approche d’extraction des événements élaborée et dé- taillée en section 5.4. Nos objectifs sont, d’une part, d’estimer de façon quantitative les performances de chacune des méthodes d’extraction d’événements conçue et implémentée (l’extracteur à base de règles et celui fondé sur un apprentissage de motifs) et, d’autre part, de montrer leur complémentarité et l’utilité de les combiner. 7.2.1 Protocole d’évaluation Pour réaliser ces objectifs nous proposons d’extraire automatiquement l’ensemble des événements d’intérêt pour notre domaine d’application au sein d’une collection de textes de type journalistique en anglais et dont le thème est d’intérêt pour le ROSO. Nous nous focalisons donc sur la vingtaine de types d’événement listée en section 4.2.1 et sur leurs dimensions : temporelle (la date de l’événement), spatiale (son lieu d’occurrence) et agentive (les personnes et organisations impliquées) conformément au modèle d’événement présenté au chapitre 4.2. Nous présentons par la suite les données et les paramètres d’apprentissage ayant servi à mettre en place l’approche à base de motifs séquentiels. Précisons que la première approche symbolique n’a pas nécessité de paramétrage particulier. Données d’apprentissage Le premier ensemble de données nécessaire est un corpus d’apprentissage afin de mettre en œuvre notre système d’extraction par motifs séquentiels fréquents. Conformément aux paramètres définis en section 5.4.2, la découverte de motifs pertinents pour notre tâche d’extraction des événements nécessite un corpus de textes présentant les caractéristiques suivantes : – découpage en phrases ; – découpage en tokens; – lemmatisation ; – étiquetage grammatical ; – annotation des entités nommées (dates, lieux, personnes et organisations) ; – repérage des déclencheurs d’événements. 116 Copyright c 2013 - CASSIDIAN - All rights reserved7.2. 
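À titre d’exemple, l’esquisse Python ci-dessous (hypothétique) montre comment ces différentes couches d’annotation peuvent être combinées pour représenter une phrase sous la forme d’une séquence d’itemsets, représentation attendue par l’outil de fouille ; les noms de champs et les valeurs sont fictifs.

# Esquisse illustrative (hypothétique) : constitution d'une séquence d'itemsets
# à partir des couches d'annotation listées ci-dessus (forme, lemme, catégorie
# grammaticale, entité nommée ou déclencheur d'événement).
def en_sequence_itemsets(tokens):
    """tokens : liste de dictionnaires {forme, lemme, pos, classe} où `classe`
    vaut par exemple "Person", "Place", "Date", "Unit", "LookupEvent" ou None."""
    sequence = []
    for t in tokens:
        itemset = {t["forme"], t["lemme"], t["pos"]}
        if t.get("classe"):
            itemset.add(t["classe"])
        sequence.append(itemset)
    return sequence

# Exemple (annotations fictives) :
phrase = [
    {"forme": "soldiers", "lemme": "soldier", "pos": "NNS", "classe": None},
    {"forme": "were",     "lemme": "be",      "pos": "VBD", "classe": None},
    {"forme": "attacked", "lemme": "attack",  "pos": "VBN", "classe": "LookupEvent"},
    {"forme": "in",       "lemme": "in",      "pos": "IN",  "classe": None},
    {"forme": "Kabul",    "lemme": "Kabul",   "pos": "NP",  "classe": "Place"},
]
print(en_sequence_itemsets(phrase))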
Nous avons constitué ce corpus de manière semi-automatique à partir de 400 dépêches de presse sur l’engagement du Canada en Afghanistan collectées sur le Web 116 et de 700 dépêches parues entre 2003 et 2009 sur le site de l’ISAF 117.
116. http://www.afghanistan.gc.ca/canada-afghanistan, consulté le 21/03/2012
117. International Security Assistance Force, http://www.nato.int/isaf/docu/pressreleases, consulté le 21/03/2012
Dans un second temps, nous avons traité automatiquement ce corpus pour obtenir l’ensemble des annotations nécessaires : les découpages en phrases et mots ainsi que la lemmatisation ont été réalisés par des modules fournis dans GATE, l’étiquetage grammatical par l’outil TreeTagger et enfin l’annotation en entités nommées et déclencheurs d’événements grâce à notre système d’extraction d’EN et nos gazetteers d’événements. Puis, nous avons révisé manuellement ces deux derniers types d’annotation afin de corriger les éventuelles erreurs et ainsi garantir une meilleure qualité des données d’apprentissage. Enfin, un ensemble de pré-traitements spécifiques est réalisé pour transformer l’ensemble de ce corpus au format supporté par l’outil d’extraction de motifs (notamment la constitution des itemsets). L’annexe K présente, à titre d’exemples, plusieurs phrases extraites du corpus d’apprentissage.
Données de test
Le second jeu de données nécessaire est un corpus de test (ou référence) permettant de comparer notre extraction d’événements par rapport à une vérité-terrain. Pour cela, nous avons choisi d’utiliser un corpus fourni dans la campagne d’évaluation MUC-4 et constitué de 100 dépêches de presse relatant des faits terroristes en Amérique du Sud. Il est fourni avec ce corpus un ensemble de fiches de référence, une seule fiche correspondant à une seule dépêche et décrivant l’événement principal relaté par celle-ci ainsi qu’un ensemble de propriétés. Ces fiches d’événements ne correspondant pas exactement aux besoins de cette évaluation, nous n’avons pu les réutiliser comme référence. En effet, la granularité (une seule fiche d’événement est proposée pour la totalité d’une dépêche) et le type des événements (uniquement de type ATTACK et BOMBING) ne correspondent pas à notre modélisation et ne permettent pas d’évaluer la totalité de notre approche. Par conséquent, notre évaluation porte sur une partie de ce corpus qui a été annotée manuellement, soit environ 210 mentions d’événements et près de 240 de leurs dimensions (55 dimensions temporelles, 65 dimensions spatiales et 120 dimensions agentives). L’annotation manuelle a été facilitée notamment par l’utilisation de la plateforme Glozz 118 dédiée à l’annotation et à l’exploration de corpus. L’annexe L présente, à titre d’exemple, une dépêche du corpus de test annotée avec les événements de référence.
118. http://www.glozz.org/
Phase d’apprentissage
Nous avons, tout d’abord, opéré un apprentissage de motifs séquentiels fréquents sur le premier corpus en considérant quatre types d’item : la forme fléchie du mot, sa catégorie grammaticale, son lemme et sa classe sémantique (LookupEvent, Date, Place, Unit ou Person). Nous avons choisi de réaliser une tâche d’apprentissage par type d’entité impliquée en utilisant le système des contraintes d’appartenance. Nous avons donc défini et employé les quatre contraintes suivantes :
– Le motif retourné doit contenir un déclencheur d’événement et au minimum une entité de type Date ;
– Le motif retourné doit contenir un déclencheur d’événement et au minimum une entité de type Place ;
– Le motif retourné doit contenir un déclencheur d’événement et au minimum une entité de type Unit ;
– Le motif retourné doit contenir un déclencheur d’événement et au minimum une entité de type Person.
Nous obtiendrons donc après apprentissage quatre ensembles de motifs de type LookupEvent-Date, LookupEvent-Place, LookupEvent-Person et LookupEvent-Unit. Au préalable, il est nécessaire de fixer les deux paramètres de support et de gap (voir la section 5.4.2) : le premier donnant le nombre d’occurrences minimal du motif dans l’ensemble des séquences et le second permettant de "relâcher" la composition des motifs en autorisant un nombre maximal de mots possibles entre chaque élément du motif. Ce paramétrage a été effectué de façon itérative : nous avons fixé une première valeur de support et de gap pour chaque apprentissage, effectué une première extraction de motifs, estimé leur qualité (en observant leur nombre, leur généricité/spécificité, leur couverture/précision), puis ajusté ces paramètres en fonction de nos observations. La figure 7.2 présente le nombre de motifs retournés par type de motif en fonction des paramètres choisis.
LookupEvent-Date | LookupEvent-Place | LookupEvent-Person | LookupEvent-Unit
Sup3 Gap0 : - | - | - | 1381
Sup4 Gap0 : 113 | - | 67748 | -
Sup4 Gap2 : - | - | - | 18278
Sup6 Gap0 : - | 1000 | - | -
Sup6 Gap2 : 1046 | - | - | 2039
Sup8 Gap0 : 30 | 317 | - | -
Sup8 Gap2 : 699 | - | - | 725
Sup8 Gap3 : 1108 | - | - | -
Sup10 Gap2 : - | 8693 | 1730 | 344
Sup10 Gap3 : 681 | 6596 | 9540 | 47
Sup12 Gap3 : - | - | 4614 | -
FIGURE 7.2 – Nombre de motifs retournés en fonction des paramètres choisis
Au regard de ces tests, nous avons choisi de fixer un gap maximal de 3 (correspondant à 3 mots possibles entre chaque élément du motif) et un support absolu relativement bas (environ 6% des séquences) afin d’obtenir des motifs intéressants mais en nombre raisonnable pour une exploration et une validation manuelles (environ 12000 motifs au total). Une fois l’ensemble des motifs raffinés par les contraintes d’appartenance et de gap, nous avons sélectionné manuellement les plus pertinents grâce à l’outil Camelis. La figure 7.1 suivante présente certains des motifs retenus 119, nous en avons conservé au total une cinquantaine.
1 {Person}{VBD}{DT}{NN}{IN}{DT}{Event}
2 {,}{DT}{NN}{Event}{JJ}{IN}{Place}
3 {JJ}{Event}{NP}{NP}{Date}
4 {serve VBG}{IN as}{a}{NN}{IN}{the}{JJ}{NP Unit}{Event}
FIGURE 7.1 – Exemples de motifs séquentiels fréquents sélectionnés
119. Les items des motifs correspondent aux catégories grammaticales et aux annotations sémantiques telles que VBD : verbe au passé, DT : déterminant, NN : nom commun au singulier, IN : préposition, JJ : adjectif, NP : nom propre au singulier, VBG : verbe au participe présent, Person : entité nommée de type personne, Event : déclencheur d’événement équivalant à l’étiquette LookupEvent ci-dessus, Place : entité nommée de type lieu, Date : entité nommée de type date, Unit : entité nommée de type organisation.
Pour finir, le jeu de motifs final a été converti en un ensemble de règles JAPE, nous obtenons donc un module indépendant facilement intégrable au sein de notre chaine globale d’extraction d’événements.
Évaluation réalisée
Une fois cette phase d’apprentissage accomplie, nous avons constitué trois chaines d’extraction GATE distinctes :
Chaine 1 Approche à base de règles linguistiques (voir la section 5.4.1) ;
Chaine 2 Approche par apprentissage de motifs (voir la section 5.4.2) ;
Chaine 3 Union des deux approches : elle contient l’ensemble des pré-traitements nécessaires à chaque approche (listés en section 5.4) ainsi que les deux modules de règles développés exécutés successivement.
Ces trois chaines ont été exécutées sur le corpus de test et nous avons comparé manuellement et par document la qualité des événements extraits aux événements de référence correspondants. Cela nous a permis de calculer des scores de précision, rappel et F1-mesure (voir la section 2.5.1 pour la définition de ces métriques) pour chaque chaine. Par ailleurs, nous avons souhaité évaluer l’influence de l’analyse syntaxique en dépendance et de la qualité de la REN sur nos systèmes d’extraction d’événements. Le tableau 7.1 ci-dessous résume les différentes variantes des chaines évaluées et dont les résultats sont présentés dans la section suivante.
Règles manuelles / Motifs séquentiels / Union des résultats
Avec analyse syntaxique : x x
Sans analyse syntaxique : x x
REN automatique : x x x
REN manuelle : x x x
TABLE 7.1 – Chaines d’extraction d’événements : variantes évaluées
7.2.2 Analyse des résultats
Le tableau 7.2 présente les scores de précision, rappel et F1-mesure obtenus : il s’agit des résultats d’évaluation des 3 chaines présentées plus haut par type de dimension et toutes dimensions confondues. Précisons qu’il s’agit des métriques calculées avec une annotation manuelle des entités nommées pour les 3 chaines et avec analyse syntaxique en dépendance pour la chaine 1.
Approche à base de règles manuelles / Apprentissage de motifs / Union des résultats (Précision, Rappel, F1-mesure) :
Date : 0,93 0,25 0,39 | 0,90 0,64 0,75 | 0,90 0,68 0,78
Lieu : 0,92 0,37 0,53 | 0,86 0,49 0,63 | 0,81 0,60 0,69
Participants : 0,97 0,49 0,42 | 0,93 0,32 0,47 | 0,92 0,51 0,66
Toutes dimensions : 0,94 0,37 0,45 | 0,90 0,48 0,62 | 0,88 0,60 0,71
TABLE 7.2 – Extraction d’événements : précision, rappel et F1-mesure
Nous pouvons tout d’abord constater que l’approche à base de règles et l’apprentissage de motifs obtiennent tous deux une très bonne précision globale et que, comme attendu, le rappel est meilleur pour cette dernière approche. Par ailleurs, nous avons été assez surpris par la bonne précision de la méthode par apprentissage, que nous expliquons par une sélection manuelle restrictive et précise des motifs. Quant aux taux de rappel peu élevés, ce n’est pas rare pour les approches à base de règles construites manuellement et, dans le cas de l’apprentissage de motifs, cela peut être dû à un "gap" maximal trop restreint qui ne permet pas d’extraire les relations distantes.
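Pour mémoire, les scores de précision, de rappel et de F1-mesure présentés ci-dessus se calculent à partir des décomptes de vrais positifs, faux positifs et faux négatifs ; l’esquisse Java ci-dessous est purement illustrative (les valeurs numériques y sont arbitraires et ne proviennent pas de notre évaluation) :

/** Esquisse : calcul de la précision, du rappel et de la F1-mesure
 *  à partir des vrais positifs (vp), faux positifs (fp) et faux négatifs (fn). */
public class MesuresEvaluation {

    static double precision(int vp, int fp) {
        return vp + fp == 0 ? 0.0 : (double) vp / (vp + fp);
    }

    static double rappel(int vp, int fn) {
        return vp + fn == 0 ? 0.0 : (double) vp / (vp + fn);
    }

    static double f1(double precision, double rappel) {
        return precision + rappel == 0.0 ? 0.0 : 2 * precision * rappel / (precision + rappel);
    }

    public static void main(String[] args) {
        // Valeurs arbitraires, uniquement pour montrer l'usage des trois fonctions.
        double p = precision(45, 5);
        double r = rappel(45, 30);
        System.out.printf("P=%.2f R=%.2f F1=%.2f%n", p, r, f1(p, r));
    }
}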
Par ailleurs, une analyse des résultats par type de dimension permet d’en apprendre davantage sur les points forts et les faiblesses de chacune des approches développées. Nous pouvons, par exemple, voir que le système à base de motifs séquentiels est nettement plus performant pour la reconnaissance des dates des événements. Concernant la dimension spatiale, bien que l’approche par apprentissage de motifs soit meilleure en termes de F1-mesure, l’approche symbolique s’avère plus précise. Enfin, tandis que pour ces deux premières dimensions (temporelle et spatiale) l’union des résultats apporte peu par rapport à la meilleure des deux approches, celle-ci s’avère particulièrement utile pour la dimension agentive. Plus globalement, nous pouvons retenir de cette expérimentation que l’union des résultats obtient une F1-mesure nettement supérieure (près de 10 points par rapport à la meilleure des deux approches), ce qui dénote une amélioration globale de la qualité d’extraction pour tout type de dimension. De plus, nous remarquons que l’apprentissage de patrons complète avec succès notre approche symbolique en augmentant sensiblement le taux global de rappel. Nous constatons tout de même une légère perte de précision de l’union par rapport aux deux approches seules : celle-ci résulte du fait que réaliser une simple union des résultats, même si elle permet d’additionner les vrais positifs des deux approches, entraîne aussi une addition des faux positifs. Toutes approches confondues, les résultats obtenus sont satisfaisants comparés à l’état de l’art même s’il convient de prendre des précautions lorsque l’on compare des systèmes évalués selon différents protocoles.
Apport de l’analyse syntaxique
Parallèlement à ces résultats, nous nous sommes intéressés à l’apport de l’analyse syntaxique au sein de notre approche symbolique. Le tableau 7.3 présente les performances de notre système avec et sans analyse syntaxique en dépendance.
Sans analyse syntaxique / Avec analyse syntaxique (Précision, Rappel, F1-mesure) :
Date : 0,32 0,45 0,38 | 0,93 0,25 0,39
Lieu : 0,86 0,18 0,30 | 0,92 0,37 0,53
Participants : 0,90 0,31 0,46 | 0,97 0,49 0,42
Toutes dimensions : 0,69 0,31 0,38 | 0,94 0,37 0,45
TABLE 7.3 – Extraction d’événements : apport de l’analyse syntaxique
Nous pouvons constater que celle-ci permet d’augmenter sensiblement la qualité des extractions pour tous les types de dimension. Bien que les outils d’analyse syntaxique soient inégalement disponibles selon les langues, cette observation confirme l’intérêt de cette technique pour l’extraction des événements. Une perspective intéressante serait d’exploiter les résultats de l’analyse syntaxique au sein de la seconde approche en tant que caractéristique supplémentaire à prendre en compte lors de l’apprentissage des motifs séquentiels fréquents.
Influence de la reconnaissance automatique des entités nommées
Pour compléter les résultats précédents fondés sur une annotation manuelle des entités nommées, nous avons évalué les 3 chaines avec une annotation automatique des entités (voir la section 5.3). En effet, les entités nommées étant repérées de façon automatique dans les applications réelles, il est important d’estimer quelle sera l’influence de cette automatisation sur l’extraction des événements. Le tableau 7.4 ci-dessous compare les performances de la 3ème chaine d’extraction (c’est-à-dire l’union des deux approches) avec une REN réalisée manuellement ou automatiquement. Nous observons une baisse générale de la qualité des extractions qui s’explique par le fait que notre système d’extraction des événements dépend de ces entités nommées pour la reconnaissance des différentes dimensions.
Par conséquent, si une entité nommée impliquée dans un événement n’a pas été reconnue, notre système ne pourra pas la reconnaître en tant que dimension de cet événement (ce qui fait diminuer le rappel de notre approche). Par ailleurs, si une entité a bien été reconnue mais mal catégorisée ou mal délimitée, cela entraînera une perte de précision pour notre extracteur d’événements. Malgré cette influence négative sur les résultats globaux, les scores obtenus par notre approche restent à la hauteur de l’état de l’art.
REN manuelle / REN automatique (Précision, Rappel, F1-mesure) :
Date : 0,90 0,68 0,78 | 0,86 0,64 0,73
Lieu : 0,81 0,60 0,69 | 0,94 0,46 0,62
Participants : 0,92 0,51 0,66 | 0,94 0,39 0,55
Toutes dimensions : 0,88 0,60 0,71 | 0,91 0,50 0,64
TABLE 7.4 – Extraction d’événements : influence de la REN
7.2.3 Bilan de l’évaluation
Cette première évaluation a permis les observations suivantes :
1. les deux approches proposées pour l’extraction des événements obtiennent des résultats équivalents voire supérieurs aux systèmes existants (voir la section 2.5.2) ;
2. l’analyse syntaxique en dépendance des phrases améliore significativement la qualité des extractions réalisées par l’approche symbolique ;
3. les performances globales de notre approche sont impactées à un niveau acceptable par la détection automatique des entités nommées ;
4. les deux techniques mises en œuvre pour l’extraction des événements présentent des forces complémentaires ;
5. une union de leurs résultats (peu coûteuse à réaliser) permet d’améliorer sensiblement la qualité des événements extraits.
Nous dressons donc un bilan positif de cette première évaluation. Toutefois, celle-ci présente des limites qui donnent lieu à plusieurs perspectives d’amélioration. Tout d’abord, la conclusion 1, bien que vraie si l’on compare les scores obtenus, doit être nuancée car il s’avère toujours difficile de comparer des systèmes d’extraction ayant été évalués selon des protocoles différents. L’influence de la REN automatique révèle une problématique plus largement présente dans le contexte du Media Mining : il s’agit de l’interdépendance des modules de traitement et du suivi de qualité tout au long de la chaine de traitement. Dans notre cas de figure, il serait intéressant de mettre en place une réelle collaboration (et non plus une simple juxtaposition) entre les modules de REN et d’extraction d’événements en explorant, par exemple, les travaux sur la qualité des annotations. Par ailleurs, suite à l’observation 2, nous souhaiterions améliorer notre approche d’extraction de motifs séquentiels avec de nouvelles caractéristiques issues d’une analyse syntaxique. Celles-ci pourraient notamment servir en tant que contraintes supplémentaires afin de limiter la quantité de motifs retournés. Enfin, évaluer les résultats obtenus par une simple union des deux approches a mis en exergue leurs complémentarités et de nouvelles pistes d’exploration pour élaborer une combinaison plus adaptée. Dans le cadre de nos travaux, nous avons choisi de réaliser l’extraction selon les deux approches successivement et de gérer cette problématique selon le même procédé que les mentions d’événements provenant de différents documents.
7.3 Premières expérimentations sur l’agrégation sémantique
Cette seconde expérimentation vise à évaluer le prototype d’agrégation sémantique des événements que nous avons développé selon l’approche présentée au chapitre 6.
Nous présentons, dans un premier temps, les principales caractéristiques techniques de ce prototype puis le jeu de données employé pour cette expérimentation. Par la suite, nous présentons nos premiers résultats et concluons par un bilan de cette expérimentation ainsi que les perspectives envisagées.
7.3.1 Implémentation d’un prototype
L’approche proposée au chapitre 6 a été implémentée au sein de plusieurs services (en langage Java) permettant de traiter un ensemble de documents au format WebLab issus du service d’extraction d’information développé (voir la section 5.4). Un exemple de document contenant des événements extraits par notre système et représentés en RDF/XML selon le schéma de l’ontologie WOOKIE est proposé en annexe I. L’ensemble des connaissances créé et modifié par ces services est stocké et géré au sein de bases de connaissances sémantiques grâce au triplestore Jena Fuseki 120.
120. http://jena.apache.org/documentation/serving_data/
Dans un premier temps, les événements extraits ainsi que leurs dimensions provenant du système d’extraction sont stockés dans une première base de connaissances A régie par notre ontologie de domaine (voir la section 4.3). Les différents calculs de similarité ont été implémentés au sein d’un premier service qui présente les fonctionnalités suivantes (le reste des fonctions restant comme perspectives à nos travaux) :
– similarité conceptuelle : implémentation par des tests de subsomption ontologique (telle que présentée en section 6.3.1) ;
– similarité temporelle : implémentation telle que proposée en section 6.3.2 mais sans la fonction de distance temporelle ;
– similarité spatiale : implémentation du service de désambiguïsation spatiale par GeoNames (voir la section 6.3.3) mais sans la fonction de distance spatiale ;
– similarité agentive : implémentation par distance de Jaro-Winkler comme proposé en section 6.3.4, la désambiguïsation avec DBPedia reste à mettre en place.
Les calculs de similarité sont combinés grâce à la librairie Apache Jena 121 et son mécanisme de règles d’inférence (voir l’annexe J pour un exemple de règle). Le moteur d’inférence développé est appliqué à la base A qui se trouve ainsi augmentée de l’ensemble des liens de similarité. Un second service réalise le processus d’agrégation sémantique : celui-ci permet de définir une configuration et d’appliquer une phase de regroupement au graphe de similarité entre événements (chargé dans la base A). Une fois le regroupement réalisé, c’est-à-dire lorsque le premier graphe a été enrichi avec les agrégats d’événements similaires, celui-ci est stocké dans une seconde base de connaissances B. Cette base sera alors disponible et interrogeable par les services de la plateforme WebLab pour présenter les agrégats à l’utilisateur final par divers modes de visualisation. Cette implémentation constitue une première preuve de concept pour montrer la faisabilité de notre processus d’agrégation sémantique. Toutefois, le système que nous avons conçu n’est, à l’heure actuelle, pas apte à passer à l’échelle pour obtenir des résultats significatifs sur un corpus d’évaluation à taille réelle. Ce passage à l’échelle constitue la principale perspective à court terme de nos travaux. Nous présentons dans les sections suivantes les observations que nous avons pu réaliser sur un jeu de données réelles plus réduit.
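Pour illustrer le mécanisme de règles d’inférence évoqué ci-dessus, l’esquisse Java suivante montre comment un modèle RDF peut être augmenté par une règle Jena : le préfixe sim: et les noms de propriétés y sont hypothétiques (la règle réellement utilisée figure en annexe J) et les noms de paquetages peuvent différer selon la version de Jena employée.

import java.util.List;

import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;

/** Esquisse : enrichissement de la base A par une règle d'inférence Jena. */
public class CombinaisonSimilarites {

    public static void main(String[] args) {
        // Base A : graphe RDF contenant les événements extraits (chargement non détaillé ici).
        Model baseA = ModelFactory.createDefaultModel();

        // Règle purement illustrative : si deux événements sont conceptuellement et
        // spatialement proches, on matérialise un lien "candidat à l'agrégation".
        String regles =
            "@prefix sim: <http://example.org/similarite#>. "
          + "[candidat: (?e1 sim:isConceptuallyPROX ?e2) (?e1 sim:isSpatiallyPROX ?e2) "
          + "           -> (?e1 sim:candidatAgregation ?e2)]";

        List<Rule> listeRegles = Rule.parseRules(regles);
        GenericRuleReasoner moteur = new GenericRuleReasoner(listeRegles);

        // Le modèle inféré contient la base A augmentée des triplets déduits par la règle.
        InfModel baseAugmentee = ModelFactory.createInfModel(moteur, baseA);
        baseAugmentee.write(System.out, "TURTLE");
    }
}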
7.3.2 Jeu de données
Pour cette expérimentation, nous nous appuyons sur une base de données nommée Global Terrorism Database (GTD). Il s’agit d’une base open-source contenant plus de 104 000 événements terroristes recensés manuellement de 1970 à 2011 [LaFree and Dugan, 2007]. Cette collection est gérée par le consortium américain START 122 étudiant les faits terroristes survenus dans le monde. Celle-ci est constituée à la fois d’événements d’échelle internationale et nationale, principalement collectés à partir de sources d’actualité sur le Web mais aussi provenant de bases de données existantes, livres, journaux et documents légaux. Les événements dans la base GTD sont catégorisés selon les neuf types suivants :
1. Assassination
2. Hijacking
3. Kidnapping
4. Barricade Incident
5. Bombing/Explosion
6. Unknown
7. Armed Assault
8. Unarmed Assault
9. Facility/Infrastructure Attack
121. https://jena.apache.org/
122. Study of Terrorism and Responses to Terrorism
En fonction de son type, un événement peut présenter entre 45 et 120 propriétés/attributs tels que les suivants :
– type de l’événement ;
– date de l’événement (année, mois, jour) ;
– lieu de l’événement (région, pays, état/province, ville, latitude/longitude) ;
– auteur(s) (personne ou groupe) ;
– nature de la cible ;
– type de l’arme utilisée ;
– dommages matériels ;
– nombre de décès ;
– nombre de blessés ;
– résumé ;
– indice de certitude ;
– sources (1 à 3) : nom de la source, extrait (titre), date de l’article, URL ;
– etc.
L’ensemble de ces données est représenté sous forme de couples "champ-valeur" et téléchargeable au format CSV. Nous disposons d’ores et déjà d’une partie de cette base (environ 4800 fiches d’événements survenus en 2010) convertie en base de connaissance sémantique (graphe RDF). Cette base de données est bien adaptée à l’évaluation de notre processus global de capitalisation des connaissances car elle constitue une collection de fiches d’événements manuellement agrégées à partir de plusieurs sources d’information. Les sources (articles et dépêches principalement) à l’origine d’une fiche sont accessibles via l’attribut scite qui spécifie leurs URLs. Une collecte automatique de ces sources (grâce à un service WebLab) permet donc facilement de constituer un corpus d’évaluation composé de dépêches de presse et des fiches d’événements correspondantes. Il faut noter également que cette base est fondée sur une modélisation différente de celle définie dans le cadre de nos recherches (l’ontologie WOOKIE). Il nous faut donc trouver le meilleur alignement de classes et attributs afin de pouvoir évaluer les résultats de notre approche par rapport aux fiches de référence de la base GTD. Le modèle d’événement employé par cette base étant sensiblement similaire au nôtre, l’alignement des attributs d’intérêt (date, lieu et participants) n’a pas soulevé de difficultés. Concernant la taxonomie des événements, 4 classes (sur les 9 du modèle GTD) ont pu être alignées avec le modèle WOOKIE car ayant la même sémantique (voir le tableau 7.5). Nos expérimentations sont donc limitées par ce point et ne concernent que des événements de ces 4 types.
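À titre d’illustration de la conversion évoquée ci-dessus des enregistrements GTD « champ-valeur » en graphe RDF, voici une esquisse Java reposant sur l’API Jena ; l’espace de noms gtd: et les noms de propriétés sont hypothétiques et les valeurs reprennent simplement l’exemple du tableau 7.8, qui suit cette esquisse et le tableau 7.5 :

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

/** Esquisse : conversion d'un enregistrement GTD "champ-valeur" en triplets RDF. */
public class ConversionGtdRdf {

    // Espace de noms purement illustratif.
    static final String GTD = "http://example.org/gtd#";

    public static void main(String[] args) {
        Model modele = ModelFactory.createDefaultModel();
        modele.setNsPrefix("gtd", GTD);

        // Quelques champs repris d'une fiche de référence citée plus loin dans ce chapitre.
        Resource evenement = modele.createResource(GTD + "event_201009260002");
        ajouter(modele, evenement, "eventID", "201009260002");
        ajouter(modele, evenement, "country", "Afghanistan");
        ajouter(modele, evenement, "perpetrator", "Taliban");

        modele.write(System.out, "TURTLE");
    }

    /** Ajoute un couple champ-valeur sous forme de propriété littérale. */
    static void ajouter(Model m, Resource sujet, String champ, String valeur) {
        Property p = m.createProperty(GTD, champ);
        sujet.addProperty(p, valeur);
    }
}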
Modèle GTD | Ontologie WOOKIE
Facility/Infrastructure Attack | DamageEvent
Bombing/Explosion | BombingEvent
Armed Assault | AttackEvent
Hostage Taking | KidnappingEvent
TABLE 7.5 – Alignement des types d’événement entre le modèle GTD et l’ontologie WOOKIE
7.3.3 Exemples d’observations
Calculs de similarité
Nous présentons ici un exemple d’application à ce jeu de données de la similarité sémantique entre événements : nous collectons et analysons trois dépêches de presse (que nous nommerons source1, source2 et source3) à l’origine d’une même fiche d’événement issue de la base GTD 123 et résumée par la figure 7.3.
123. National Consortium for the Study of Terrorism and Responses to Terrorism (START). (2012). Global Terrorism Database [Data file]. Retrieved from http://www.start.umd.edu/gtd
FIGURE 7.3 – Un exemple d’événement issu de la base GTD
Le tableau 7.6 présente quatre des événements extraits automatiquement par notre système à partir de ces sources, ainsi que leurs dimensions.
Event1 | Event2 | Event3 | Event4
rdfs:label : explosions | explosions | bomb blasts | attacked
rdf:type : BombingEvent | BombingEvent | BombingEvent | AttackEvent
wookie:date : - | [∅,10,31] | - | -
wookie:takesPlaceAt : Atirau city | - | - | Kazakstan
wookie:involves : - | KNB department | Kazakh Police | -
source : scite1 | scite2 | scite3 | scite2
TABLE 7.6 – Événements extraits et leurs dimensions
Nous constatons dans cet exemple que, sans post-traitement adapté, ces quatre événements seraient présentés à l’utilisateur final de façon distincte et sans aucun lien entre eux (quatre fiches de connaissance différentes seraient créées). Toutefois, appliquer notre modèle de similarité sémantique entre événements permet de compléter cet ensemble de connaissances avec des relations de similarité entre ces quatre fiches de connaissance. La figure 7.4 constitue un extrait des différentes similarités obtenues représentées en RDF/XML. Tout d’abord, une proximité conceptuelle a été détectée entre les événements Event2 et Event4 (ligne 9) de par le fait que le type de l’événement Event2 est une sous-classe de celui de l’événement Event4 dans l’ontologie de domaine. De plus, le service de désambiguïsation géographique ayant assigné des identifiants GeoNames aux entités Atirau city et Kazakstan, nous pouvons appliquer le calcul de similarité spatiale et déduire que les événements Event1 et Event4 sont spatialement proches (ligne 5). Enfin, la fonction de similarité agentive utilisant la distance de Jaro-Winkler a permis d’estimer que les participants des événements Event2 et Event3 ne co-réfèrent pas (ligne 10).
1 <rdf:Description rdf:about="#Event1">
  [...]
6 </rdf:Description>
7 <rdf:Description rdf:about="#Event2">
  [...]
11 </rdf:Description>
12 <rdf:Description rdf:about="#Event3">
  [...]
14 </rdf:Description>
FIGURE 7.4 – Similarités entre événements : extrait représenté en RDF/XML
Processus d’agrégation
Dans un second temps, notre processus global d’agrégation a été testé sur un sous-ensemble du corpus GTD : l’ensemble des traitements (extraction, calculs de similarité et agrégation sémantique) a été appliqué sur 60 dépêches de presse collectées du Web.
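Une fois matérialisés dans la base A (servie par Fuseki), les liens de similarité peuvent par exemple être dénombrés par une requête SPARQL ; l’esquisse Java ci-dessous est purement illustrative, l’URL du point d’accès et le préfixe sim: y étant supposés :

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

/** Esquisse : dénombrement des liens de similarité stockés dans la base A. */
public class ComptageSimilarites {

    public static void main(String[] args) {
        // URL du point d'accès SPARQL et espace de noms hypothétiques.
        String endpoint = "http://localhost:3030/baseA/query";
        String requete =
            "PREFIX sim: <http://example.org/similarite#> "
          + "SELECT ?relation (COUNT(*) AS ?nb) WHERE { "
          + "  ?e1 ?relation ?e2 . "
          + "  FILTER(STRSTARTS(STR(?relation), STR(sim:))) "
          + "} GROUP BY ?relation";

        try (QueryExecution exec = QueryExecutionFactory.sparqlService(endpoint, requete)) {
            ResultSet resultats = exec.execSelect();
            while (resultats.hasNext()) {
                QuerySolution ligne = resultats.next();
                System.out.println(ligne.get("relation") + " : " + ligne.get("nb"));
            }
        }
    }
}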
Nous pouvons déjà faire quelques observations chiffrées sur ce corpus de test et les résultats obtenus :
(a) 92 fiches d’événements de référence correspondent à ces 60 dépêches ;
(b) la majorité des dépêches (40) est associée à une seule fiche de référence, 15 dépêches relatent chacune 2 événements de référence et les 5 restantes renvoient chacune à 3 événements ;
(c) 223 mentions d’événements ont été repérées par notre approche d’extraction (tous types confondus) ;
(d) 44 de ces événements extraits font partie des 4 types de l’alignement (voir le tableau 7.5) et ont donc pu être évalués ;
(e) ces 44 mentions d’événements comprennent 20 AttackEvent, 14 BombingEvent, 2 DamageEvent et 8 KidnappingEvent ;
(f) parmi ces 44 mentions, 3 possèdent une date, 7 présentent un lieu extrait et 5 impliquent des participants.
Nous nous sommes ensuite intéressés aux différentes relations de similarité de type ID et PROX (les plus pertinentes pour rapprocher des événements similaires) créées entre les 44 événements extraits :
– 20 relations de type isConceptuallyID ont été détectées ;
– 17 relations de type isConceptuallyPROX ;
– 4 relations de type isAgentivelyID ;
– 3 relations de type isSpatiallyPROX.
Nous pouvons constater que peu de relations de ce type ont été détectées. Cela est, tout d’abord, à corréler avec l’observation (f) faite plus haut, montrant que, parmi les 44 événements extraits et analysables, peu de dimensions ont été extraites. Ce manque est dû notamment à une limite de notre corpus d’évaluation mise en avant par l’observation (b) : chaque événement du jeu de données est rapporté au maximum par 3 sources, ce qui ne reflète pas les conditions réelles d’un processus de veille, où de nombreuses sources et articles rapportant le même événement sont quotidiennement collectés et analysés. Dans un second temps, nous avons appliqué un premier regroupement avec pour condition de regrouper entre eux les événements partageant au minimum une similarité de niveau ID et une similarité de niveau PROX. Au vu des relations de similarité ci-dessus, seules deux configurations ont permis d’obtenir des agrégats :
– {isConceptuallyID, isTemporallyLI, isSpatiallyLI, isAgentivelyID} produit 3 agrégats d’événements ;
– {isConceptuallyPROX, isTemporallyLI, isSpatiallyPROX, isAgentivelyLI} produit 1 agrégat d’événements.
Appliqué sur les 3 agrégats produits par la première configuration, un second regroupement par la configuration {isConceptuallyDIFF, isTemporallyLI, isSpatiallyLI, isAgentivelyID} permet d’obtenir un agrégat contenant les trois événements présentés dans le tableau 7.7. Deux des événements proviennent d’une même source (Event2 et Event3) et le troisième (Event1) d’une source différente, celles-ci sont reportées en annexes M et N.
Event1 | Event2 | Event3
gtd:source : s3 | s12 | s12
rdfs:label : kidnapping | kidnapped | bomb
wookie:involves : Taliban | Taliban | -
TABLE 7.7 – Exemple de 3 événements agrégés automatiquement
Afin d’évaluer l’apport de cette agrégation, nous avons examiné les fiches de référence associées aux deux sources dont ces événements ont été extraits : la source s3 renvoie à un seul événement (que nous nommerons Reference1) et la source s12 renvoie à deux événements (que nous nommerons Reference2 et Reference3). Le tableau 7.8 ci-dessous présente ces trois fiches de référence ainsi que quelques propriétés d’intérêt pour cette expérimentation.
Reference1 | Reference2 | Reference3
eventID : 201009260002 | 201001230002 | 201001230007
source : s3 | s12 | s12
year : 2010 | 2010 | 2010
month : 9 | 1 | 1
day : 26 | 23 | 23
country : Afghanistan | Afghanistan | Afghanistan
region : South Asia | South Asia | South Asia
province/state : Konar | Konar | Paktika
city : Chawkay | Shigal | Unknown
attackType1 : Hostage Taking (Kidnapping) | Hostage Taking (Kidnapping) | Armed Assault
attackType2 : Armed Assault | - | -
targetType1 : NGO | Police | Transportation
corp1 : Development Alternatives Inc | Sheigal Law Enforcement | -
target1 : Linda Norgrove | police chief of Sheigal district and two other police officers | Civilians
targetType2 : NGO | - | -
corp2 : Development Alternatives Inc | - | -
target2 : Three Afghan aid employees | - | -
perpetrator : Taliban | Taliban | Taliban
TABLE 7.8 – Fiches d’événements de référence
Nous pouvons, à partir de cet exemple, faire les observations suivantes :
– Les trois extractions d’événements réalisées correspondent bien aux fiches de référence associées à leurs sources respectives ;
– Les deux regroupements successifs ont permis d’agréger 3 événements perpétrés par le même agent ("Taliban") ;
– Une géolocalisation des 3 événements de référence (voir la figure 7.5) montre que les 3 événements agrégés se sont produits dans la même zone géographique. Bien que les lieux des 3 événements n’aient pas été repérés automatiquement, l’agrégation permettrait à l’analyste de découvrir cette proximité (en remontant par exemple aux sources des trois événements) ;
– Dans l’hypothèse où, avec une plus grande quantité de sources analysées, les dimensions manquantes (date et lieu) auraient pu être extraites, l’analyste aurait pu analyser la répartition spatiale et temporelle de ces 3 événements afin d’en déduire de nouvelles connaissances ou de nouvelles prédictions.
FIGURE 7.5 – Visualisation des 3 événements extraits sur une carte géographique
7.3.4 Bilan de l’expérimentation
Cette première expérimentation a montré des résultats prometteurs à la fois pour notre approche de similarité entre événements et pour le processus d’agrégation sémantique en tenant compte de ces similarités. En effet, nous avons montré que notre approche globale peut être mise en œuvre sur des données réelles sans nécessiter d’effort d’adaptation important et en maintenant le niveau de qualité escompté. Toutefois, celle-ci présente un certain nombre de limites qui constituent autant de perspectives à explorer à l’avenir. Tout d’abord, nous souhaitons améliorer notre prototype en implémentant la totalité des mesures de similarité proposées en section 6.3 (distances temporelles et spatiales ainsi que la désambiguïsation agentive grâce à la base DBPedia). De plus, nous compléterons le service de regroupement en y intégrant le procédé de regroupement hiérarchique par plusieurs passes (une seule passe est réalisée pour le moment). Enfin, des problèmes techniques empêchent, à l’heure actuelle, d’obtenir des résultats significatifs et quantitatifs sur un plus grand ensemble de données. Ce passage à l’échelle constitue notre plus proche perspective future et permettra d’évaluer notre approche de façon exhaustive et avec des métriques adaptées (issues par exemple de l’évaluation des méthodes de clustering) sur l’ensemble du jeu de données présenté en section 7.3.2 : soit environ 5000 événements de référence.
Cette évaluation devra principalement permettre de répondre aux deux questions suivantes :
– est-ce que les événements extraits correspondant à une seule et même fiche de référence sont bien rapprochés ?
– est-ce que les événements extraits ne correspondant pas à la même fiche de référence sont bien différenciés ?
7.4 Conclusions
Dans ce dernier chapitre, nous avons présenté deux expérimentations destinées à évaluer deux de nos contributions. Tout d’abord, une première évaluation a porté sur notre approche mixte d’extraction automatique des événements : celle-ci a été testée et comparée à un corpus de référence annoté manuellement pour les besoins de l’expérimentation. Pour ce faire, nous avons effectué une évaluation en termes de précision/rappel/F1-mesure et cela selon différentes configurations : chaque approche a été évaluée séparément puis comparée à l’union de leurs résultats. Nous avons également fait varier différents paramètres tels que la présence de l’analyse syntaxique ou encore l’automatisation de la REN. Une analyse détaillée des résultats a montré de bonnes performances en comparaison avec l’état de l’art et ceci pour l’ensemble des configurations d’évaluation testées. Cette première évaluation a également pointé quelques limites de notre approche telles que l’impact de l’extraction d’entités nommées sur les performances des extracteurs d’événements. De plus, celle-ci pourrait être améliorée en exploitant davantage les informations fournies par l’analyse syntaxique en dépendance. La seconde expérimentation a constitué une première analyse qualitative dans le but d’illustrer les résultats de notre processus d’agrégation sémantique sur un jeu de données réelles. Nous avons présenté, dans un premier temps, l’implémentation d’un prototype fonctionnel couvrant la chaine complète d’agrégation des événements. Puis, la base de données Global Terrorism Database a été introduite ainsi qu’un sous-ensemble de celle-ci servant de corpus d’évaluation pour cette expérimentation. Les tests réalisés se sont avérés prometteurs à la fois pour ce qui est du calcul de similarité entre événements et du processus d’agrégation proposé. Le premier traitement a rapproché efficacement des mentions d’événements qui co-réfèrent et permettra ainsi de réduire la tâche de l’analyste du ROSO en lui proposant l’ensemble des liens de similarité en tant que critères supplémentaires de recherche. Puis, nous avons appliqué notre processus d’agrégation par regroupements successifs sur ce jeu de test et ceci pour 3 types de configuration. Malgré le peu d’agrégats formés (en raison de la taille réduite du corpus), nous avons montré par un exemple l’utilité de cette agrégation du point de vue utilisateur. Cette seconde contribution pourra être améliorée en intégrant l’ensemble des fonctions de similarité proposées à notre prototype d’agrégation et en optimisant celui-ci pour permettre son passage à l’échelle d’un processus de veille réel.
Conclusion et perspectives
Sommaire 1 Synthèse des contributions . . . . . . . . . . . . . . . . . . . . 132 1.1 État de l’art . . . . . . . . . . . . . . . . . . . . 132 1.2 Un modèle de connaissances pour le ROSO . . . . . . . . . . . . . . . . . . . . 133 1.3 Une approche mixte pour l’extraction automatique des événements . . . .
134 1.4 Un processus d’agrégation sémantique des événements . . . . . . . . . . 134 1.5 Évaluation du travail de recherche . . . . . . . . . . . . . . . . . . . . . . 135 2 Perspectives de recherche . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 131 Copyright c 2013 - CASSIDIAN - All rights reservedConclusion et perspectives 1 Synthèse des contributions La problématique étudiée durant cette thèse est la capitalisation des connaissances à partir de sources ouvertes. Nous nous sommes plus particulièrement intéressés à l’extraction et à l’agrégation automatique des événements dans le domaine du Renseignement d’Origine Sources Ouvertes. Ce sujet de recherche nous a amenés à explorer les principaux axes de recherche suivants : – La représentation et la modélisation des connaissances ; – L’extraction automatique d’information ; – La capitalisation des connaissances. Pour répondre à cette problématique, nous avons, dans une première phase de nos recherches, réalisé un état de l’art approfondi de ces trois axes scientifiques. 1.1 État de l’art Nous avons, tout d’abord, exploré les principes théoriques et les recherches actuelles dans les domaines de la représentation des connaissances, du Web sémantique et de la modélisation des événements. Ce premier état de l’art a rappelé la distinction fondamentale entre les concepts de donnée, information et connaissance. Il a également confirmé l’importance croissante des technologies du Web sémantique qui sont partie intégrante d’une majorité de travaux actuels en fouille de documents. Les ontologies, plus particulièrement, se positionnement comme un mode de représentation des connaissances en adéquation avec les nouvelles problématiques du traitement de l’information. Combinées aux bases de connaissances sémantiques et moteurs d’inférence, cela constitue un socle de technologies particulièrement adapté à la capitalisation des connaissances à partir de sources ouvertes. Dans un second temps, nous nous sommes focalisés sur la place des événements en représentation des connaissances et avons étudié les différentes théories, modèles et ontologies proposés jusqu’alors. L’événement apparait comme un concept complexe, dont la définition et la modélisation sont encore aujourd’hui des sujets de discussions et débats. Beaucoup de travaux s’accordent sur un objet multi-dimensionnel impliquant une situation spatio-temporelle et un ensemble d’acteurs et facteurs. Nous avons également exploré les spécifications utilisées par les acteurs du renseignement afin d’adapter notre proposition à ce cadre d’application. La seconde revue de littérature a été centrée sur l’extraction automatique d’information dans les textes. Celle-ci a révélé un domaine de recherche très étudié bien que relativement jeune : nous avons pu recenser un nombre important d’approches, d’applications possibles, de logiciels et plateformes développés ainsi que de campagnes et projets d’évaluation menés jusqu’à nos jours. Les méthodes développées sont historiquement réparties en deux catégories : les symboliques et les statistiques. Les premières, dé- veloppées manuellement par des experts de la langue, s’avèrent globalement plus précises, tandis que les secondes réalisent un apprentissage sur une grande quantité de données et présentent généralement un fort taux de rappel. 
Parallèlement à cela, nous avons constaté une certaine complémentarité des approches existantes, non seulement en termes de précision et de rappel mais également du point de vue des types d’entités ciblées, du genre textuel, du domaine d’application, etc. Il apparaît, par conséquent, pertinent de combiner les approches existantes afin de tirer parti de leurs atouts respectifs. Pour ce faire, les approches hybrides constituent des alternatives intéressantes car elles s’avèrent faiblement supervisées et plus flexibles que d’autres approches statistiques. Enfin, ce tour d’horizon nous a permis de comparer différents outils et logiciels pour la mise en œuvre de notre approche ainsi que différents jeux de données potentiellement adaptés à l’évaluation de nos travaux. Le dernier état de l’art autour de la capitalisation des connaissances a mis en avant une suite logique à nos travaux en extraction automatique d’information : la conception d’une approche globale permettant la transition du texte vers la connaissance proprement dite. Cette problématique a donné lieu à diverses recherches au sein de plusieurs communautés de l’IA, chacune d’elles manipulant sa propre terminologie adaptée à ses propres besoins. Ces divergences de vocabulaire n’empêchent pas d’observer la place importante réservée à la capitalisation des connaissances au sein des recherches actuelles, que ce soit en réconciliation de données, extraction d’information ou Web sémantique. La majorité des approches proposées en réconciliation de données trouve ses origines dans les bases de données. Il s’agit dans ce cadre d’assurer le maintien de cohérence des bases en détectant les entrées et champs dupliqués. Pour cela, beaucoup de travaux sont fondés sur des calculs de similarité : les plus simples opèrent une similarité entre chaînes de caractères tandis que les plus avancés exploitent le contexte linguistique et extra-linguistique des données à comparer. Ces derniers se distinguent ensuite par le type et la méthode employée pour obtenir ces caractéristiques de contexte et par leur mode de combinaison. Nous avons également exploré les techniques de capitalisation au sein du Web sémantique. À l’heure actuelle, cela est réalisé principalement par ce que l’on nomme la Wikification : il s’agit de désambiguïser sémantiquement les mentions textuelles d’intérêt afin de les rattacher à une entité du monde référencée dans une base de connaissances externe (dans le LOD essentiellement). Enfin, la capitalisation des connaissances sur les événements extraits est l’objet d’un intérêt grandissant. Également nommée co-référence entre événements, cette problématique est traitée principalement par des méthodes statistiques supervisées ou non, d’une part, et des approches fondées sur les graphes et les calculs de similarité d’autre part. Suite à la réalisation de ces états de l’art, nous avons proposé trois contributions relatives aux trois domaines scientifiques explorés. Celles-ci s’articulent au sein de notre processus global de capitalisation des connaissances. Au regard des conclusions de l’état de l’art, l’objectif directeur de nos recherches est de concevoir un système global de reconnaissance et d’agrégation des événements le plus générique possible et intégrant des méthodes et outils de la littérature.
1.2 Un modèle de connaissances pour le ROSO Notre première contribution est l’élaboration d’un modèle de connaissances qui servira de guide à notre approche de capitalisation des connaissances. Nous avons, tout d’abord, défini une modélisation des événements fondée sur des modèles reconnus en ingénierie des connaissances et en extraction d’information. Un événement est représenté par quatre dimensions : une dimension conceptuelle (correspondant au type de l’événement), une dimension temporelle (la date/ période d’occurrence de l’événement), une dimension spatiale (le lieu d’occurrence de l’événement) et une dimension agentive (dédiée aux différents acteurs impliqués dans l’événement). Pour la définition de ses dimensions, nous avons privilégié la gé- néricité du modèle ainsi que sa reconnaissances par la communauté scientifique concernée (par exemple, le modèle TUS et la représentation spatiale en aires géographiques). Par ailleurs, afin de modéliser l’ensemble des informations d’intérêt pour les analystes du ROSO, une ontologie de domaine a été proposée. Celle-ci comprend en tant que classes de plus haut niveau les cinq entités principales de ce domaine : les organisations, les lieux, les personnes, les équipements et les événements. De plus, notre ontologie intègre de nombreux liens sémantiques vers d’autres modélisations existantes afin de maintenir une interopérabilité au sein du Web sémantique. Cette contribution présente quelques limites et nous envisageons 133 Copyright c 2013 - CASSIDIAN - All rights reservedConclusion et perspectives des perspectives d’amélioration telles que l’intégration d’une cinquième dimension contextuelle afin de représenter des éléments du contexte linguistique et extra-linguistique, mais également une définition des rôles au sein de la dimension agentive. 1.3 Une approche mixte pour l’extraction automatique des événements Notre seconde contribution est une approche mixte pour l’extraction automatique des événements fondée sur une méthode symbolique à base de grammaires contextuelles, d’une part, et sur une technique de fouille de motifs séquentiels fréquents, d’autre part. La méthode symbolique comporte un ensemble de règles d’extraction développées manuellement et exploite notamment la sortie d’une analyse syntaxique en dépendance combinée avec un ensemble de classes argumentales d’événement. La seconde technique permet d’obtenir de manière faiblement supervisée un ensemble de patrons d’extraction grâce à la fouille de motifs séquentiels fréquents dans un ensemble de textes. Nous avons également conçu et implémenté un système de reconnaissance d’entités nommées sur le modèle des approches symboliques classiques. Celui-ci permet d’annoter au préalable les différentes entités dites simples nécessaires à la reconnaissance des événements. Les deux méthodes pour l’extraction des événements ont montré leur efficacité lors de l’état de l’art réalisé et leurs performances ont été évaluées sur un corpus de test de notre domaine d’application (voir la section Évaluation du travail de recherche ci-dessous). La méthode à base de règles pourra être améliorée en tenant compte d’autres informations fournies par l’analyse syntaxique telles que la voix (passive ou active) du déclencheur, la polarité de la phrase (négative ou positive), la modalité mais aussi les phénomènes de valence multiple. 
L’approche à base de motifs séquentiels fréquents pourrait également tirer profit de cette analyse syntaxique en intégrant les relations de dépendance produites en tant que nouveaux items ou sous forme de contraintes. Enfin, concernant les deux approches, leur limite principale (qui est aussi celle d’autres approches de la littérature) est qu’elles réalisent l’extraction au niveau phrastique. Une granularité plus large telle que le paragraphe ou le discours pourrait permettre d’améliorer les performances de ces approches.
1.4 Un processus d’agrégation sémantique des événements
Notre contribution suivante est un processus d’agrégation sémantique des événements fondé sur une échelle de similarité qualitative et sur un ensemble de mesures spécifiques à chaque type de dimension. Nous avons tout d’abord proposé des mécanismes de normalisation des entités, adaptés à leurs natures, afin d’harmoniser formellement les différentes informations extraites. Concernant ce premier aspect, nous envisageons des améliorations telles que la désambiguïsation des dates relatives, par exemple, ou encore l’intégration au sein de notre système d’un outil de désambiguïsation sémantique des participants. Nous avons ensuite proposé une échelle de similarité qualitative orientée utilisateur et un ensemble de calculs de similarité intégrant à la fois un modèle théorique adapté à chaque dimension et une implémentation technique employant les technologies du Web sémantique (ontologies et bases de connaissances sémantiques). Nous souhaitons poursuivre ce travail en élargissant le panel de similarités employées : notamment, des mesures de proximité ontologique plus sophistiquées ainsi que des outils de distance temporelle et spatiale et, pour la similarité agentive, une distance dédiée aux ensembles d’entités. Les similarités entre événements pourraient également provenir d’une fonction d’équivalence apprise automatiquement à partir de données annotées manuellement. Enfin, un processus d’agrégation fondé sur les graphes a été proposé afin de regrouper les mentions d’événements similaires et de permettre à l’analyste
De plus, celle-ci pourrait être améliorée en exploitant davantage les informations fournies par l’analyse syntaxique en dépendance. La seconde expérimentation a constitué une première analyse qualitative dans le but d’illustrer les résultats de notre processus d’agrégation sémantique sur un jeu de données réelles. Nous avons présenté, dans un premier temps, l’implémentation d’un prototype fonctionnel couvrant la chaine complète d’agrégation des événements. Puis, la base de données Global Terrorism Database a été introduite ainsi qu’un sous-ensemble de celle-ci servant de corpus d’évaluation pour cette expérimentation. Les tests réalisés se sont avérés prometteurs à la fois pour ce qui est du calcul de similarité entre événements et du processus d’agrégation proposé. Le premier traitement a rapproché efficacement des mentions d’événements qui co-réfèrent et permettra ainsi de réduire la tâche de l’analyste du ROSO en lui proposant l’ensemble des liens de similarité en tant que critères supplémentaires de recherche. Puis, nous avons appliqué notre processus d’agrégation par regroupements successifs sur ce jeu de test et nous avons montré par un exemple l’utilité de cette agrégation du point de vue utilisateur. Cette seconde contribution pourra être améliorée en intégrant l’ensemble des fonctions de similarité proposées à notre prototype d’agrégation et en optimisant celui-ci pour permettre son passage à l’échelle d’un processus de veille réel. 135 Copyright c 2013 - CASSIDIAN - All rights reservedConclusion et perspectives 2 Perspectives de recherche Les travaux de recherche menés durant cette thèse ont permis de mettre en avant des perspectives d’amélioration de notre processus de capitalisation des connaissances. La première suite à donner à nos travaux sera de mettre en place une évaluation quantitative et sur un jeu de données significatif (la base GTD par exemple) de notre processus global de capitalisation de connaissances. Pour cela, nous envisageons d’optimiser l’implémentation de notre processus d’agrégation sémantique pour passer à l’échelle de notre corpus d’évaluation. L’extraction des événements pourra être améliorée par différentes méthodes exploitant la connaissance disponible dans le Web de données. L’ensemble des silos sémantiques liés entre eux par des équivalences et des relations ontologiques peut être exploité par divers moyens. Une autre piste d’amélioration est l’intégration d’un système d’estimation de la confiance ou qualité des extractions afin de guider soit les systèmes suivants soit l’utilisateur. De plus, il pourra être intéressant d’y ajouter d’autres méthodes performantes de l’EI comme par exemple une extraction par apprentissage statistique sous réserve de disposer d’un corpus annoté adéquat. Concernant notre approche à base de motifs séquentiels, nous pourrons diversifier le corpus d’apprentissage utilisé afin d’obtenir des patrons plus génériques et performants pour d’autres domaines ou genres de texte. Les récentes recherches visant à évaluer la qualité des extractions telles que [Habib and van Keulen, 2011], nous paraissent également à prendre en compte afin d’améliorer les performances de nos extracteurs mais aussi dans le but de réaliser une hybridation intelligente de ces différentes méthodes d’extraction. Par ailleurs, comme dit précédemment le travail présenté ici est centré sur la définition d’un système global de reconnaissance et d’agrégation d’événements le plus générique possible. 
Cela implique que les différentes mesures de similarité présentées sont facilement interchangeables avec d’autres mesures plus avancées sans remettre en cause notre approche globale. La similarité agentive pourra, par exemple, être améliorée en y intégrant des techniques de résolution de co-référence entre entités. Nous pourrons également prendre en compte certaines dépendances entre les dimensions d’événement : à titre d’exemple, la dimension sémantique peut influencer l’agrégation d’entités temporelles dans le cas d’événements duratifs comme des épidémies ou des guerres où deux mentions d’événement peuvent avoir deux dates différentes mais tout de même référer au même événement du monde réel. En effet, nous n’étudions pour le moment que les événements dits ponctuels mais il sera intéressant de poursuivre ce travail en étudiant les événements duratifs et plus particulièrement les liens temporels entre événements grâce aux relations d’Allen [Allen, 1981] et à l’opérateur convexify défini dans le modèle TUS. Une autre perspective pourra être d’étudier tous les types de dépendance entre dimensions, notamment en explorant certaines techniques de résolution collective d’entités. Concernant le processus d’agrégation proposé, nous envisageons d’étudier les travaux existants autour de la cotation de l’information et plus particulièrement de la fiabilité des sources et des informations. La qualité des connaissances capitalisées peut être grandement améliorée dès lors que le processus d’agrégation des informations tient compte de ces indices (voir par exemple les travaux de [Cholvy, 2007]). Pour finir, l’interaction avec l’utilisateur constitue également une piste de recherche intéressante. [Noël, 2008] met en avant que les technologies du Web sémantique ont souvent été critiquées pour le fait que les aspects utilisateur y ont souvent été négligés. Ceux-ci proposent donc l’application des techniques de recherche exploratoire au Web sémantique et, plus particulièrement, à l’accès aux bases 136 Copyright c 2013 - CASSIDIAN - All rights reservedde connaissances par les utilisateurs. De plus, la possibilité donnée à l’utilisateur de fusionner des fiches grâce aux suggestions d’agrégats soulèvera d’autres problématiques à explorer telles que la mise à jour et le maintien de cohérence de la base de connaissance en fonction des actions de l’analyste. 137 Copyright c 2013 - CASSIDIAN - All rights reservedConclusion et perspectives 138 Copyright c 2013 - CASSIDIAN - All rights reservedAnnexes 139 Copyright c 2013 - CASSIDIAN - All rights reservedAnnexe A WOOKIE : taxonomie des concepts 141 Copyright c 2013 - CASSIDIAN - All rights reservedAnnexe A. WOOKIE : taxonomie des concepts 142 Copyright c 2013 - CASSIDIAN - All rights reservedAnnexe B WOOKIE : événements spécifiques au ROSO 143 Copyright c 2013 - CASSIDIAN - All rights reservedAnnexe B. WOOKIE : événements spécifiques au ROSO 144 Copyright c 2013 - CASSIDIAN - All rights reservedAnnexe C WOOKIE : relations entre concepts 145 Copyright c 2013 - CASSIDIAN - All rights reservedAnnexe C. WOOKIE : relations entre concepts 146 Copyright c 2013 - CASSIDIAN - All rights reservedAnnexe D WOOKIE : attributs des concepts 147 Copyright c 2013 - CASSIDIAN - All rights reservedAnnexe D. WOOKIE : attributs des concepts 148 Copyright c 2013 - CASSIDIAN - All rights reservedAnnexe E GATE : exemple de chaine de traitement 149 Copyright c 2013 - CASSIDIAN - All rights reservedAnnexe E. 
Appendices

Appendix A  WOOKIE: taxonomy of concepts

Appendix B  WOOKIE: events specific to ROSO

Appendix C  WOOKIE: relations between concepts

Appendix D  WOOKIE: concept attributes

Appendix E  GATE: example of a processing chain

Appendix F  Gazetteer for the detection of persons in French

Appendix G  The pizza.owl sample ontology

Appendix H  Extract from the pizza.owl ontology in OWL format

<!-- OWL Classes -->

<!-- Class: http://www.co-ode.org/ontologies/pizza/pizza.owl#American -->

<owl:Class rdf:about="#American">
    <rdfs:label xml:lang="pt">Americana</rdfs:label>
    <rdfs:subClassOf>
        <owl:Restriction>
            <owl:onProperty rdf:resource="#hasTopping"/>
            <owl:someValuesFrom rdf:resource="#TomatoTopping"/>
        </owl:Restriction>
    </rdfs:subClassOf>
    <rdfs:subClassOf>
        <owl:Restriction>
            <owl:onProperty rdf:resource="#hasTopping"/>
            <owl:someValuesFrom rdf:resource="#PeperoniSausageTopping"/>
        </owl:Restriction>
    </rdfs:subClassOf>
    <rdfs:subClassOf>
        <owl:Restriction>
            <owl:onProperty rdf:resource="#hasCountryOfOrigin"/>
            <owl:hasValue rdf:resource="#America"/>
        </owl:Restriction>
    </rdfs:subClassOf>
    <rdfs:subClassOf>
        <owl:Restriction>
            <owl:onProperty rdf:resource="#hasTopping"/>
            <owl:someValuesFrom rdf:resource="#MozzarellaTopping"/>
        </owl:Restriction>
    </rdfs:subClassOf>
    <rdfs:subClassOf>
        <owl:Class rdf:about="#NamedPizza"/>
    </rdfs:subClassOf>
    <rdfs:subClassOf>
        <owl:Restriction>
            <owl:onProperty rdf:resource="#hasTopping"/>
            <owl:allValuesFrom>
                <owl:Class>
                    <owl:unionOf rdf:parseType="Collection">
                        <owl:Class rdf:about="#MozzarellaTopping"/>
                        <owl:Class rdf:about="#PeperoniSausageTopping"/>
                        <owl:Class rdf:about="#TomatoTopping"/>
                    </owl:unionOf>
                </owl:Class>
            </owl:allValuesFrom>
        </owl:Restriction>
    </rdfs:subClassOf>
</owl:Class>

<!-- OWL Object Properties -->

<!-- Object property: http://www.co-ode.org/ontologies/pizza/pizza.owl#hasBase -->

<owl:ObjectProperty rdf:about="#hasBase">
    <rdf:type rdf:resource="&owl;FunctionalProperty"/>
    <rdf:type rdf:resource="&owl;InverseFunctionalProperty"/>
    <owl:inverseOf>
        <owl:ObjectProperty rdf:about="#isBaseOf"/>
    </owl:inverseOf>
    <rdfs:domain>
        <owl:Class rdf:about="#Pizza"/>
    </rdfs:domain>
    <rdfs:range>
        <owl:Class rdf:about="#PizzaBase"/>
    </rdfs:range>
</owl:ObjectProperty>

<!-- OWL Individuals -->

<!-- Individual: http://www.co-ode.org/ontologies/pizza/pizza.owl#America -->

<owl:Thing rdf:about="#America">
    <rdf:type rdf:resource="#Country"/>
</owl:Thing>

<!-- OWL Axioms -->

<owl:Class rdf:about="#LaReine">
    <owl:disjointWith>
        <owl:Class rdf:about="#Mushroom"/>
    </owl:disjointWith>
</owl:Class>

<owl:Class rdf:about="#Mushroom">
    <owl:disjointWith>
        <owl:Class rdf:about="#LaReine"/>
    </owl:disjointWith>
</owl:Class>

<owl:AllDifferent>
    <owl:distinctMembers rdf:parseType="Collection">
        <owl:Thing rdf:about="#America"/>
        <owl:Thing rdf:about="#Italy"/>
        <owl:Thing rdf:about="#Germany"/>
        <owl:Thing rdf:about="#France"/>
        <owl:Thing rdf:about="#England"/>
    </owl:distinctMembers>
</owl:AllDifferent>

</rdf:RDF>

Appendix I  Example of a WebLab document containing events

<resource xsi:type="ns3:Document" uri="weblab://SmallEnglishTest/1"
          xmlns:ns3="http://weblab.ow2.org/core/1.2/model#"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <annotation uri="weblab://SmallEnglishTest/1#0-a2">
    <data xmlns:ns2="http://weblab.ow2.org/1.2/model#">
      <rdf:RDF xmlns:dc="http://purl.org/dc/elements/1.1/"
               xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
        <rdf:Description rdf:about="weblab://SmallEnglishTest/1">
          <dc:language>en</dc:language>
        </rdf:Description>
      </rdf:RDF>
    </data>
  </annotation>
  <mediaUnit xsi:type="ns3:Text" uri="source://xp_s78">
    <annotation uri="source://xp_s78#a0">
      <data>
        <rdf:RDF xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
                 xmlns:dct="http://purl.org/dc/terms/"
                 xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                 xmlns:wlr="http://weblab.ow2.org/core/1.2/ontology/retrieval#"
                 xmlns:wlp="http://weblab.ow2.org/core/1.2/ontology/processing#"
                 xmlns:wookie="http://weblab.ow2.org/wookie#">
          <rdf:Description rdf:about="source://xp_s78#4">
            <wlp:refersTo rdf:resource="http://weblab.ow2.org/wookie/instances/SearchOperation#40d72785-2171-4372-8e61-5808b41122c3"/>
            <wlp:refersTo rdf:resource="http://weblab.ow2.org/wookie/instances/SearchOperation#832b376f-c7dd-4d23-a8ea-4441027c4115"/>
          </rdf:Description>
          <rdf:Description rdf:about="source://xp_s78#5">
            <wlp:refersTo rdf:resource="http://weblab.ow2.org/wookie/instances/CrashEvent#81b9be1e-afa9-4a84-9066-414b8021468c"/>
            <wlp:refersTo rdf:resource="http://weblab.ow2.org/wookie/instances/CrashEvent#5bbe4888-5445-4896-adf0-0a20b76eed27"/>
          </rdf:Description>
          <rdf:Description rdf:about="source://xp_s78#a0">
            <wlp:isProducedBy rdf:resource="http://weblab.ow2.org/webservices/gateservice"/>
          </rdf:Description>
          <rdf:Description rdf:about="http://weblab.ow2.org/wookie/instances/CrashEvent#5bbe4888-5445-4896-adf0-0a20b76eed27">
            <wookie:source>source://xp_s78</wookie:source>
            <rdfs:label>incident</rdfs:label>
            <rdf:type rdf:resource="http://weblab.ow2.org/wookie#CrashEvent"/>
          </rdf:Description>
          <rdf:Description rdf:about="http://weblab.ow2.org/wookie/instances/SearchOperation#832b376f-c7dd-4d23-a8ea-4441027c4115">
            <wookie:involves rdf:resource="http://weblab.ow2.org/wookie/instances/Unit#police"/>
            <wookie:source>source://xp_s78</wookie:source>
            <rdfs:label>investigating</rdfs:label>
            <rdf:type rdf:resource="http://weblab.ow2.org/wookie#SearchOperation"/>
          </rdf:Description>
          <rdf:Description rdf:about="http://weblab.ow2.org/wookie/instances/SearchOperation#40d72785-2171-4372-8e61-5808b41122c3">
            <wookie:source>source://xp_s78</wookie:source>
            <rdfs:label>investigating</rdfs:label>
            <rdf:type rdf:resource="http://weblab.ow2.org/wookie#SearchOperation"/>
          </rdf:Description>
          <rdf:Description rdf:about="http://weblab.ow2.org/wookie/instances/CrashEvent#81b9be1e-afa9-4a84-9066-414b8021468c">
            <wookie:involves rdf:resource="http://weblab.ow2.org/wookie/instances/Unit#police"/>
            <wookie:source>source://xp_s78</wookie:source>
            <rdfs:label>incident</rdfs:label>
            <rdf:type rdf:resource="http://weblab.ow2.org/wookie#CrashEvent"/>
          </rdf:Description>
          <rdf:Description rdf:about="http://weblab.ow2.org/wookie/instances/Person#muhammad_khan_sasoli">
            <wlp:isCandidate>true</wlp:isCandidate>
            <rdfs:label>Muhammad Khan Sasoli</rdfs:label>
            <rdf:type rdf:resource="http://weblab.ow2.org/wookie#Person"/>
          </rdf:Description>
          <rdf:Description rdf:about="http://weblab.ow2.org/wookie/instances/Unit#khuzdar_press_club">
            <wlp:isCandidate>true</wlp:isCandidate>
            <rdfs:label>Khuzdar Press Club</rdfs:label>
            <rdf:type rdf:resource="http://weblab.ow2.org/wookie#Unit"/>
          </rdf:Description>
          <rdf:Description rdf:about="source://xp_s78#2">
            <wlp:refersTo rdf:resource="http://weblab.ow2.org/wookie/instances/Person#muhammad_khan_sasoli"/>
          </rdf:Description>
          <rdf:Description rdf:about="source://xp_s78#3">
            <wlp:refersTo rdf:resource="http://weblab.ow2.org/wookie/instances/Unit#police"/>
          </rdf:Description>
          <rdf:Description rdf:about="http://weblab.ow2.org/wookie/instances/Unit#police">
            <wlp:isCandidate>true</wlp:isCandidate>
            <rdfs:label>Police</rdfs:label>
            <rdf:type rdf:resource="http://weblab.ow2.org/wookie#Unit"/>
          </rdf:Description>
          <rdf:Description rdf:about="source://xp_s78#1">
            <wlp:refersTo rdf:resource="http://weblab.ow2.org/wookie/instances/Unit#khuzdar_press_club"/>
          </rdf:Description>
        </rdf:RDF>
      </data>
    </annotation>
    <segment xsi:type="ns3:LinearSegment" start="303" end="321" uri="source://xp_s78#1"/>
    <segment xsi:type="ns3:LinearSegment" start="386" end="406" uri="source://xp_s78#2"/>
    <segment xsi:type="ns3:LinearSegment" start="523" end="529" uri="source://xp_s78#3"/>
    <segment xsi:type="ns3:LinearSegment" start="533" end="546" uri="source://xp_s78#4"/>
    <segment xsi:type="ns3:LinearSegment" start="551" end="559" uri="source://xp_s78#5"/>
    <content>Wednesday, December 15, 2010 E-Mail this article to a friend Printer Friendly Version More Sharing Services Share | Share on facebook Share on twitter Share on linkedin Share on stumbleupon Share on email Share on print | Journalist gunned down in Khuzdar KALAT: Unidentified armed men gunned down Khuzdar Press Club president in Khuzdar on Tuesday. According to the local police, Muhammad Khan Sasoli was on his way home when the unidentified men gunned him down in Labour Colony. The assailants fled from the scene. Police is investigating the incident. app</content>
  </mediaUnit>
</resource>

Appendix J  Example of an inference rule in the Jena formalism

@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#>.
@prefix owl:    <http://www.w3.org/2002/07/owl#>.
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#>.
@prefix xsd:    <http://www.w3.org/2001/XMLSchema#>.
@prefix wookie: <http://weblab.ow2.org/wookie#>.

[Initialization:
    -> print("////////////// Events semantic similarity rules ////////////////////")
]

/////////////// Semantic similarity //////////////////////////////////

[SemSIM_1:
    (?e1 wookie:semanticallySIM ?e2)
    <-
    (?e1 rdf:type ?c1), (?e2 rdf:type ?c2),
    isRelevant(?c1), isRelevant(?c2),
    noValue(?e1 wookie:semanticallySIM ?e2),
    notEqual(?e1, ?e2), notEqual(?c1, ?c2),
    hasSubClass(?c1, ?c2)
]
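As an indicative illustration, a rule file in this formalism can be parsed and attached to a Jena GenericRuleReasoner before being applied to an RDF model. The minimal sketch below assumes Jena 2.x package names, uses hypothetical file names (events.rdf, rules.txt), and omits the registration of the custom builtins used above (isRelevant, hasSubClass), which would have to be registered with Jena's BuiltinRegistry.

import java.util.List;

import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
import com.hp.hpl.jena.reasoner.rulesys.Rule;

public class RuleLoadingSketch {
    public static void main(String[] args) {
        // Load the RDF graph describing the extracted events (file name is hypothetical).
        Model events = ModelFactory.createDefaultModel();
        events.read("file:events.rdf");

        // Parse the similarity rules (e.g. a rule such as SemSIM_1 above) from a rule file.
        // Custom builtins such as isRelevant or hasSubClass would first have to be
        // registered through BuiltinRegistry.theRegistry.register(...).
        List<Rule> rules = Rule.rulesFromURL("file:rules.txt");

        // Build a generic rule reasoner and bind it to the event model.
        GenericRuleReasoner reasoner = new GenericRuleReasoner(rules);
        InfModel inf = ModelFactory.createInfModel(reasoner, events);

        // Inferred wookie:semanticallySIM statements are now visible in the model.
        inf.write(System.out, "TURTLE");
    }
}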
Appendix K  Extract from a document of the training corpus

{However however RB} {, , ,} {Canada Canada NP Place} {continues continue VBZ} {to to TO} {make make VB} {progress progress NN} {on on IN} {the the DT} {six six CD} {priorities priority NNS} {we we PP} {have have VBP} {identified identify VBN} {that that WDT} {will will MD} {help help VB LookupEvent} {build build VB} {the the DT} {foundation foundation NN} {for for IN} {a a DT} {more more RBR} {stable stable JJ} {Afghanistan Afghanistan NP Place} {. . SENT}

{The the DT} {fifth fifth JJ} {quarterly quarterly JJ} {report report NN} {highlights highlight VBZ} {Canadian Canadian JJ} {activity activity NN} {in in IN} {several several JJ} {areas area NNS} {: : :} {- - :} {Under under IN} {a a DT} {Canadian-supported Canadian-supported JJ} {project project NN} {to to TO} {clear clear JJ} {landmines landmine NNS} {and and CC} {other other JJ} {explosives explosive NNS} {, , ,} {training training NN LookupEvent} {began begin VBD} {for for IN} {80 @card@ CD} {locally locally RB} {recruited recruit VBN} {deminers deminers NNS} {in in IN} {Kandahar Kandahar NP Place} {, , ,} {and and CC} {an an DT} {additional additional JJ} {270,000 @card@ CD} {square square JJ} {metres metre NNS} {of of IN} {land land NN} {were be VBD} {cleared clear VBN} {. . SENT}

{Canada Canada NP Place} {continues continue VBZ} {to to TO} {pursue pursue VB} {its its PP$} {efforts effort NNS} {to to TO} {protect protect VB LookupEvent} {its its PP$} {security security NN} {by by IN} {helping help VBG LookupEvent} {the the DT} {Afghan Afghan JJ Unit} {government government NN Unit} {to to TO} {prevent prevent VB} {Afghanistan Afghanistan NP Place} {from from IN} {again again RB} {becoming become VBG} {a a DT} {base base NN} {for for IN} {terrorism terrorism NN} {directed direct VBN} {against against IN} {Canada Canada NP Place} {or or CC} {its its PP$} {allies ally NNS} {. . SENT}

Appendix L  Extract from a document of the test corpus

TST4-MUC4-0001 <Place>Concepcion</Place>, <Date>23 Aug 88</Date> (<Unit>Santiago Domestic Service</Unit>) -- [Report] [<Person>Miguel Angel Valdebenito</Person>] [Text] <Unit>Police</Unit> sources have reported that unidentified individuals planted a <LookupEvent>bomb</LookupEvent> in front of a <Place>Mormon Church</Place> in <Place>Talcahuano District</Place>. The bomb, which <LookupEvent>exploded</LookupEvent> and caused property <LookupEvent>damage</LookupEvent> worth 50,000 pesos, was placed at a chapel of the <Place>Church of Jesus Christ of Latter-Day Saints</Place> located at <Place>No 3856 Gomez Carreno Street</Place>. The shock wave destroyed a wall, the roof, and the windows of the church, but did not cause any injuries. <Unit>Carabineros bomb squad</Unit> personnel immediately <LookupEvent>went</LookupEvent> to the location and discovered that the bomb was made of 50 grams of an-fo [ammonium nitrate-fuel oil blasting agents] and a slow fuse. <Unit>Carabineros special forces</Unit> soon <LookupEvent>raided</LookupEvent> a large area to try to <LookupEvent>arrest</LookupEvent> those responsible for the attack, but they were unsuccessful. The <Unit>police</Unit> have already informed the appropriate authorities, that is, the national prosecutor and the <Unit>Talcahuano criminal court</Unit>, of this attack.

Appendix M  Source s12: press dispatch at the origin of events Event1 and Event2

January 24, 2010

Two U.S. Soldiers Are Among 17 Afghan Deaths

By ROD NORDLAND and SANGAR RAHIMI

KABUL, Afghanistan - At least 17 people died in four separate episodes in Afghanistan on Saturday, while a police chief was kidnapped and a provincial governor narrowly escaped assassination. Three women and a young boy were killed when a taxi crammed with at least eight passengers tried to run an illegal Taliban checkpoint in Paktika Province, in the east, and the militants riddled the car with bullets. Four Afghan soldiers guarding the governor of Wardak Province, just west of Kabul, were killed when the Taliban set off a hidden bomb as he traveled to a school building inspection; the governor was unharmed. Two American soldiers were killed by an improvised explosive device in southern Afghanistan, according to a press release from the international military command here. And seven Afghans were killed in the remote village of Qulum Balaq in Faryab Province, in northern Afghanistan, when they tried to excavate an old bomb dropped by an aircraft many years ago, according to a statement from the Interior Ministry. One person was wounded.

In addition, the police chief of Sheigal district in Kunar Province, Jamatullah Khan, and two of his officers were kidnapped while patrolling just after midnight on Saturday close to the border with Pakistan. Gen. Khalilullah Ziayee, the provincial police chief, said they were abducted "by the enemies of peace and stability in the country," the government's catch-all term for insurgents. "We don't have any information about him yet," General Ziayee said, speaking of the police chief. He added that a search was under way. The Taliban and common criminals often kidnap officials for ransom. The taxicab shooting occurred as the driver was trying to take his passengers to get medical care at a nearby military base run by international forces. In addition to the three women and a boy of 5 or 6 who were killed, three other passengers were wounded, according to Mukhles Afghan, a spokesman for the provincial governor in Paktika. The attempted assassination of the governor of Wardak, Mohammad Halim Fediyee, occurred during a trip that had been announced, leaving his convoy vulnerable. "We were aware of the planned attack and we had already defused two bombs planted on our way," said Shahedullah Shahed, a spokesman for the governor who was traveling with him. He said a Taliban local commander named Ahmadullah and another fighter had planted a new bomb just before the convoy crossed a culvert, detonating it under the first armored vehicle in the convoy. The blast killed four soldiers in the vehicle. Mr. Shahed said other soldiers managed to capture the two Taliban members as they tried to flee. "This trip was an announced trip, and everybody was waiting for the governor to help them solve their problems," Mr. Shahed said. "Hundreds of tribal elders and local people were waiting to see the governor." An Afghan employee of The New York Times in Khost Province contributed reporting.

Appendix N  Source s3: press dispatch at the origin of event Event3

Four kidnapped in east Afghanistan

Sun Sep 26, 2010 3:7PM

Militants have kidnapped a British woman along with three locals in eastern Afghanistan as security continues to deteriorate in the war-ravaged country.
Those abducted in the province of Kunar are reportedly employees of an American company. Local officials have blamed the kidnapping on the Taliban but the militants have not yet claimed responsibility. Kidnappings have recently been on the rise in Afghanistan as the security situation deteriorates to its worst levels since the 2001 US-led invasion there. The Taliban have abducted over a dozen people across Afghanistan during the recent parliamentary elections. This is while some 150,000 US-led foreign troops are responsible for security in the war-torn nation. JR/AKM/MMN

Bibliography

[Agrawal et al., 1993] Agrawal, R., Imielinski, T., and Swami, A. (1993). Mining association rules between sets of items in large databases. In Proceedings of the 1993 ACM SIGMOD international conference on Management of data, SIGMOD '93, pages 207–216, New York. ACM. 97

[Ahn, 2006] Ahn, D. (2006). The stages of event extraction. Proceedings of the Workshop on Annotating and Reasoning about Time and Events - ARTE '06, pages 1–8. 27, 48

[Alatrish, 2012] Alatrish, E. S. (2012). Comparison of ontology editors. eRAF Journal on Computing, 4 :23–38. 25

[Allen, 1981] Allen, J. F. (1981). An interval-based representation of temporal knowledge. In Proceedings of the 7th international joint conference on Artificial intelligence - Volume 1, pages 221–226, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. 136

[Allen, 1983] Allen, J. F. (1983). Maintaining knowledge about temporal intervals. Commun. ACM, 26(11) :832–843. 33

[Allen, 1991] Allen, J. F. (1991). Time and time again : The many ways to represent time. Journal of Intelligent Systems, 6(4) :341–355. 33

[Allen and Ferguson, 1994] Allen, J. F. and Ferguson, G. (1994). Actions and events in interval temporal logic. Journal of Logic and Computation, 4 :531–579. 29

[Aone et al., 1998] Aone, C., Halverson, L., Hampton, T., and Ramos-Santacruz, M. (1998). SRA : Description of the IE2 system used for MUC-7. In Proceedings Seventh Message Understanding Conference (MUC-7), Fairfax, VA. 48

[Aone and Ramos-Santacruz, 2000] Aone, C. and Ramos-Santacruz, M. (2000). REES : A large-scale relation and event extraction system. In ANLP, pages 76–83. 48

[Appelt, 1999] Appelt, D. E. (1999). Introduction to information extraction. AI Commun., 12(3) :161–172. 56

[Appelt et al., 1995] Appelt, D. E., Hobbs, J. R., Bear, J., Israel, D., Kameyama, M., Martin, D., Myers, K., and Tyson, M. (1995). SRI International FASTUS system : MUC-6 test results and analysis. In Proceedings of the 6th conference on Message understanding, MUC6 '95, pages 237–248, Stroudsburg, PA, USA. Association for Computational Linguistics. 48

[Appelt and Onyshkevych, 1998] Appelt, D. E. and Onyshkevych, B. (1998). The common pattern specification language. In Proceedings of a workshop on held at Baltimore, Maryland : October 13-15, 1998, TIPSTER '98, pages 23–30, Stroudsburg, PA, USA. Association for Computational Linguistics.
44 173 Copyright c 2013 - CASSIDIAN - All rights reservedBibliographie [Augenstein et al., 2012] Augenstein, I., Padó, S., and Rudolph, S. (2012). Lodifier : generating linked data from unstructured text. In Proceedings of the 9th international conference on The Semantic Web : research and applications, ESWC’12, pages 210–224, Berlin, Heidelberg. Springer-Verlag. 65 [Bachmair and Ganzinger, 2001] Bachmair, L. and Ganzinger, H. (2001). Resolution theorem proving. In Handbook of Automated Reasoning, pages 19–99. Elsevier and MIT Press. 22 [Bagga and Baldwin, 1999] Bagga, A. and Baldwin, B. (1999). Cross-document event coreference : Annotations, experiments, and observations. In In Proc. ACL-99 Workshop on Coreference and Its Applications, pages 1–8. 64, 66 [Balmisse, 2002] Balmisse, G. (2002). Gestion des connaissances. Outils et applications du knowledge management. Vuibert. 18 [Baumgartner and Retschitzegger, 2006] Baumgartner, N. and Retschitzegger, W. (2006). A survey of upper ontologies for situation awareness. Proc. of the 4th IASTED International Conference on Knowledge Sharing and Collaborative Engineering, St. Thomas, US VI, pages 1–9+. 36 [Béchet et al., 2012] Béchet, N., Cellier, P., Charnois, T., and Crémilleux, B. (2012). Discovering linguistic patterns using sequence mining. In CICLing (1), pages 154–165. 97, 98 [Benjelloun et al., 2006] Benjelloun, O., Garcia-Molina, H., Kawai, H., Larson, T. E., Menestrina, D., Su, Q., Thavisomboon, S., and Widom, J. (2006). Generic entity resolution in the serf project. IEEE Data Eng. Bull., 29(2) :13–20. 63 [Berners-Lee et al., 2001] Berners-Lee, T., Hendler, J., and Lassila, O. (2001). The Semantic Web. Scientific American, 284(5) :34–43. 19 [Besançon et al., 2010] Besançon, R., de Chalendar, G., Ferret, O., Gara, F., Mesnard, O., Laïb, M., and Semmar, N. (2010). LIMA : A multilingual framework for linguistic analysis and linguistic resources development and evaluation. In Chair), N. C. C., Choukri, K., Maegaard, B., Mariani, J., Odijk, J., Piperidis, S., Rosner, M., and Tapias, D., editors, Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), Valletta, Malta. European Language Resources Association (ELRA). 49 [Besançon et al., 2011] Besançon, R., Ferret, O., and Jean-Louis, L. (2011). Construire et évaluer une application de veille pour l’information sur les événements sismiques. In CORIA, pages 287–294. 53 [Best and Cumming, 2007] Best, R. and Cumming, A. (2007). Open source intelligence (osint) : Issues for congress. Rl 34270, Congressional Research Service. 3 [Bhattacharya and Getoor, 2007] Bhattacharya, I. and Getoor, L. (2007). Collective entity resolution in relational data. ACM Transactions on Knowledge Discovery from Data, 1(1) :5–es. 63 [Bilenko et al., 2003] Bilenko, M., Mooney, R. J., Cohen, W. W., Ravikumar, P., and Fienberg, S. E. (2003). Adaptive name matching in information integration. IEEE Intelligent Systems, 18(5) :16–23. 65, 66 [Bloch, 2005] Bloch, I. (2005). Fusion d’informations numériques : panorama méthodologique. In Journées Nationales de la Recherche en Robotique 2005, pages 79–88, Guidel, France. 62 [Bond et al., 2003] Bond, D., Bond, J., Oh, C., Jenkins, J. C., and Taylor, C. L. (2003). Integrated Data for Events Analysis (IDEA) : An Event Typology for Automated Events Data Development. Journal of Peace Research, 40(6) :733–745. 36 [Bontcheva et al., 2002] Bontcheva, K., Dimitrov, M., Maynard, D., Tablan, V., and Cunningham, H. (2002). 
Shallow Methods for Named Entity Coreference Resolution. In TALN 2002. 46 174 Copyright c 2013 - CASSIDIAN - All rights reserved[Borsje et al., 2010] Borsje, J., Hogenboom, F., and Frasincar, F. (2010). Semi-automatic financial events discovery based on lexico-semantic patterns. Int. J. Web Eng. Technol., 6(2) :115–140. 53 [Boury-Brisset, 2003] Boury-Brisset, A.-C. (2003). Ontological approach to military knowledge modeling and management. In NATO RTO Information Systems Technology Symposium (RTO MP IST 040), Prague. 36 [Bowman et al., 2001] Bowman, M., Lopez, A. M., and Tecuci, G. (2001). Ontology development for military applications. In Proceedings of the Thirty-ninth Annual ACM Southeast Conference. ACM Press. 36 [Brill, 1992] Brill, E. (1992). A simple rule-based part of speech tagger. In Proceedings of the third conference on Applied natural language processing, ANLC ’92, pages 152–155, Stroudsburg, PA, USA. Association for Computational Linguistics. 86 [Bundschus et al., 2008] Bundschus, M., Dejori, M., Stetter, M., Tresp, V., and Kriegel, H.-P. (2008). Extraction of semantic biomedical relations from text using conditional random fields. BMC Bioinformatics, 9(1) :1–14. 53 [Buscaldi, 2010] Buscaldi, D. (2010). Toponym Disambiguation in Information Retrieval. PhD thesis, Universidad Politecnica de Valencia. 103 [Califf and Mooney, 2003] Califf, M. E. and Mooney, R. J. (2003). Bottom-Up Relational Learning of Pattern Matching Rules for Information Extraction. J. Mach. Learn. Res., 4 :177–210. 43, 54 [Capet et al., 2011] Capet, P., Delavallade, T., Généreux, M., Poibeau, T., Sándor, Á., and Voyatzi, S. (2011). Un système de détection de crise basé sur l’extraction automatique d’événements. In et P. Hoogstoel, M. C., editor, Sémantique et multimodalité en analyse de l’information, pages 293– 313. Lavoisier. 42 [Capet et al., 2008] Capet, P., Delavallade, T., Nakamura, T., Sandor, A., Tarsitano, C., and Voyatzi, S. (2008). A risk assessment system with automatic extraction of event types. In Shi, Z., MercierLaurent, E., and Leake, D., editors, Intelligent Information Processing IV, volume 288 of IFIP – The International Federation for Information Processing, pages 220–229. Springer US. 53 [Caron et al., 2012] Caron, C., Guillaumont, J., Saval, A., and Serrano, L. (2012). Weblab : une plateforme collaborative dédiée à la capitalisation de connaissances. In Extraction et gestion des connaissances (EGC’2012), Bordeaux, France. 77 [Casati and Varzi, 1997] Casati, R. and Varzi, A. (1997). Fifty years of events : an annotated bibliography 1947 to 1997. ❤tt♣✿✴✴✇✇✇✳♣❞❝♥❡t✳♦r❣✴♣❛❣❡s✴Pr♦❞✉❝ts✴❡❧❡❝tr♦♥✐❝✴❡✈❡♥ts❜✐❜✳❤t♠. 26 [Cellier and Charnois, 2010] Cellier, P. and Charnois, T. (2010). Fouille de données séquentielle d’itemsets pour l’apprentissage de patrons linguistiques. In Traitement Automatique des Langues Naturelles (short paper). 58 [Cellier et al., 2010] Cellier, P., Charnois, T., and Plantevit, M. (2010). Sequential patterns to discover and characterise biological relations. In Gelbukh, A., editor, Computational Linguistics and Intelligent Text Processing, volume 6008 of Lecture Notes in Computer Science, pages 537–548. Springer Berlin Heidelberg. 47 [Ceri et al., 1989] Ceri, S., Gottlob, G., and Tanca, L. (1989). What you always wanted to know about datalog (and never dared to ask). IEEE Transactions on Knowledge and Data Engineering, 1(1) :146– 166. 44 [Charlet et al., 2004] Charlet, J., Bachimont, B., and Troncy, R. (2004). Ontologies pour le Web sémantique. 
In Revue I3, numéro Hors Série «Web sémantique». Cépaduès. 21, 25 175 Copyright c 2013 - CASSIDIAN - All rights reservedBibliographie [Charlot and Lancini, 2002] Charlot, J.-M. and Lancini, A. (2002). De la connaissance aux systèmes d’information supports. In Rowe, F., editor, Faire de la recherche en systèmes d’information, pages 139–145. Vuibert FNEGE. 18 [Charnois et al., 2009] Charnois, T., Plantevit, M., Rigotti, C., and Cremilleux, B. (2009). Fouille de données séquentielles pour l’extraction d’information dans les textes. Revue Traitement Automatique des Langues (TAL), 50(3) :59–87. 45 [Charton et al., 2011] Charton, E., Gagnon, M., and Ozell, B. (2011). Génération automatique de motifs de détection d’entités nommées en utilisant des contenus encyclopédiques. In 18e Conférence sur le Traitement Automatique des Langues Naturelles (TALN 2011), Montpellier. Association pour le Traitement Automatique des Langues (ATALA). 45 [Chasin, 2010] Chasin, R. (2010). Event and temporal information extraction towards timelines of wikipedia articles. In UCCS REU 2010, pages 1–9. Massachusetts Institute of Technology. 54 [Chau and Xu, 2012] Chau, M. and Xu, J. (2012). Business intelligence in blogs : understanding consumer interactions and communities. MIS Q., 36(4) :1189–1216. 53 [Chau et al., 2002] Chau, M., Xu, J. J., and Chen, H. (2002). Extracting meaningful entities from police narrative reports. In Proceedings of the 2002 annual national conference on Digital government research, dg.o ’02, pages 1–5. Digital Government Society of North America. 54 [Chaudet, 2004] Chaudet, H. (2004). Steel : A spatio-temporal extended event language for tracking epidemic spread from outbreak reports. In In U. Hahn (Ed.), Proceedings of KR-MED 2004, First International Workshop on Formal Biomedical Knowledge Representation. 53 [Chen and Ji, 2009] Chen, Z. and Ji, H. (2009). Graph-based event coreference resolution. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing, TextGraphs- 4, pages 54–57, Stroudsburg, PA, USA. Association for Computational Linguistics. 66, 67 [Chieu, 2003] Chieu, H. L. (2003). Closing the gap : Learning-based information extraction rivaling knowledge-engineering methods. In In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 216–223. 48 [Chisholm, 1970] Chisholm, R. (1970). Events and propositions. Noûs, 4(1) :15–24. 26 [Chiticariu et al., 2010] Chiticariu, L., Krishnamurthy, R., Li, Y., Reiss, F., and Vaithyanathan, S. (2010). Domain adaptation of rule-based annotators for named-entity recognition tasks. In In EMNLP (To appear. 58 [Cholvy, 2007] Cholvy, L. (2007). Modelling information evaluation in fusion. In FUSION, pages 1–6. 136 [Ciravegna, 2001] Ciravegna, F. (2001). Adaptive information extraction from text by rule induction and generalisation. In Proceedings of the 17th international joint conference on Artificial intelligence - Volume 2, IJCAI’01, pages 1251–1256, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. 45, 54 [Crié, 2003] Crié, D. (2003). De l’extraction des connaissances au Knowledge Management. Revue française de gestion, 29(146) :59–79. 18 [Cucerzan, 2007] Cucerzan, S. (2007). Large-scale named entity disambiguation based on Wikipedia data. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 708–716, Prague, Czech Republic. Association for Computational Linguistics. 
105 176 Copyright c 2013 - CASSIDIAN - All rights reserved[Culotta et al., 2006] Culotta, A., Kristjansson, T., McCallum, A., and Viola, P. (2006). Corrective feedback and persistent learning for information extraction. Artificial Intelligence, 170(14–15) :1101 – 1122. 58 [Cunningham et al., 2002] Cunningham, H., Maynard, D., Bontcheva, K., and Tablan, V. (2002). GATE : A framework and graphical development environment for robust nlp tools and applications. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, PA, USA. 51, 84 [Cunningham et al., 2000] Cunningham, H., Maynard, D., and Tablan, V. (2000). JAPE : a Java Annotation Patterns Engine (Second Edition). Technical Report Technical Report CS–00–10, of Sheffield, Department of Computer Science. 44 [Daille et al., 2000] Daille, B., Fourour, N., and Morin, E. (2000). Catégorisation des noms propres : une étude en corpus. In Cahiers de Grammaire - Sémantique et Corpus, volume 25, pages 115–129. Université de Toulouse-le-Mirail. 43 [Daumé et al., 2010] Daumé, III, H., Kumar, A., and Saha, A. (2010). Frustratingly easy semisupervised domain adaptation. In Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing, DANLP 2010, pages 53–59, Stroudsburg, PA, USA. Association for Computational Linguistics. 58 [Davidson, 1967] Davidson, D. (1967). The logical form of action sentences. In Rescher, N., editor, The Logic of Decision and Action. University of Pittsburgh Press, Pittsburgh. 26 [Davidson, 1969] Davidson, D. (1969). The individuation of events. In Rescher, N., editor, Essays in honor of Carl G. Hempel, pages 216–234. D. Reidel, Dordrecht. reprinted in Davidson, Essays on Actions and Events. 27 [De Marneffe and Manning, 2008] De Marneffe, M.-C. and Manning, C. D. (2008). Stanford typed dependencies manual. Technical report, Stanford University. 95 [Desclés, 1990] Desclés, J.-P. (1990). "State, event, process, and topology". General Linguistics, 29(3) :159–200. 26 [Desodt-Lebrun, 1996] Desodt-Lebrun, A.-M. (1996). Fusion de données. In Techniques de l’ingénieur Automatique avancée, number 12 in 96, pages 1–9. Editions Techniques de l’Ingénieur. 62 [Dey et al., 1998] Dey, D., Sarkar, S., and De, P. (1998). A probabilistic decision model for entity matching in heterogeneous databases. Management Science, 44(10) :1379–1395. 63 [Dong and Pei, 2007] Dong, G. and Pei, J. (2007). Sequence Data Mining, volume 33 of Advances in Database Systems. Kluwer. 99 [Dong et al., 2005] Dong, X., Halevy, A., and Madhavan, J. (2005). Reference reconciliation in complex information spaces. Proceedings of the 2005 ACM SIGMOD international conference on Management of data - SIGMOD ’05, page 85. 63 [Dredze et al., 2010] Dredze, M., McNamee, P., Rao, D., Gerber, A., and Finin, T. (2010). Entity disambiguation for knowledge base population. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING ’10, pages 277–285, Stroudsburg, PA, USA. Association for Computational Linguistics. 64, 65 [Drozdzynski et al., 2004] Drozdzynski, W., Krieger, H.-U., Piskorski, J., Schäfer, U., and Xu, F. (2004). Shallow processing with unification and typed feature structures — foundations and applications. Künstliche Intelligenz, 1 :17–23. 49 177 Copyright c 2013 - CASSIDIAN - All rights reservedBibliographie [Elmagarmid et al., 2007] Elmagarmid, A. K., Ipeirotis, P. G., and Verykios, V. S. (2007). Duplicate record detection : A survey. IEEE Trans. on Knowl. 
and Data Eng., 19(1) :1–16. 63, 65, 66 [Enjalbert, 2008] Enjalbert, P. (2008). « Préface ». In Plate-formes pour le traitement automatique des langues, volume 49 of Revue interntaionale Traitement Automatique des Langues, chapter 2, pages 7–10. ATALA. 50 [Etzioni et al., 2005] Etzioni, O., Cafarella, M., Downey, D., Popescu, A.-M., Shaked, T., Soderland, S., Weld, D. S., and Yates, A. (2005). Unsupervised named-entity extraction from the web : An experimental study. Artif. Intell., 165(1) :91–134. 43 [Fellegi and Sunter, 1969] Fellegi, I. P. and Sunter, A. B. (1969). A theory for record linkage. Journal of the American Statistical Association, 64 :1183–1210. 63 [Fensel et al., 2001] Fensel, D., van Harmelen, F., Horrocks, I., McGuinness, D. L., and Patel-Schneider, P. F. (2001). OIL : An ontology infrastructure for the semantic web. IEEE Intelligent Systems, 16(2) :38–45. 22 [Ferré, 2007] Ferré, S. (2007). CAMELIS : Organizing and Browsing a Personal Photo Collection with a Logical Information System. In Diatta, J., Eklund, P., and Liquière, M., editors, Int. Conf. Concept Lattices and Their Applications, volume 331, pages 112–123, Montpellier, France. 98, 99 [Fialho et al., 2010] Fialho, A., Troncy, R., Hardman, L., Saathoff, C., and Scherp, A. (2010). What’s on this evening ? Designing user support for event-based annotation and exploration of media. In EVENTS 2010, 1st International Workshop on EVENTS - Recognising and tracking events on the Web and in real life, May 4, 2010, Athens, Greece, Athens, GRÈCE. 29 [Finin et al., 2009] Finin, T., Syed, Z., Mayfield, J., McNamee, P., and Piatko, C. (2009). Using Wikitology for Cross-Document Entity Coreference Resolution. In Proceedings of the AAAI Spring Symposium on Learning by Reading and Learning to Read. AAAI Press. 46, 109 [Fisher et al., 2005] Fisher, M., Gabbay, D., and Vila, L. (2005). Handbook of Temporal Reasoning in Artificial Intelligence. Foundations of Artificial Intelligence. Elsevier Science. 76 [Fleischman and Hovy, 2002] Fleischman, M. and Hovy, E. (2002). Fine grained classification of named entities. Proceedings of the 19th international conference on Computational linguistics -, 1 :1–7. 58 [Fourour, 2002] Fourour, N. (2002). Nemesis, un système de reconnaissance incrémentielle des entités nommées pour le français. Actes de la 9ème Conférence Nationale sur le Traitement Automatique des Langues Naturelles (TALN 2001), 1 :265–274. 45 [François et al., 2007] François, J., Le Pesant, D., and Leeman, D. (2007). Présentation de la classification des verbes français de jean dubois et françoise dubois-charlier. Langue Française, 153(153) :3– 32. 96 [Friburger, 2006] Friburger, N. (2006). « Linguistique et reconnaissance automatique des noms propres ». Meta : journal des traducteurs / Meta : Translators’ Journal, 51(4) :637–650. 44 [Fundel et al., 2007] Fundel, K., Küffner, R., Zimmer, R., and Miyano, S. (2007). Relex–relation extraction using dependency parse trees. Bioinformatics, 23. 46 [Garbin and Mani, 2005] Garbin, E. and Mani, I. (2005). Disambiguating toponyms in news. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing. 103 [Genesereth, 1991] Genesereth, M. R. (1991). Knowledge interchange format. In KR, pages 599–600. 22 178 Copyright c 2013 - CASSIDIAN - All rights reserved[Giroux et al., 2008] Giroux, P., Brunessaux, S., Brunessaux, S., Doucy, J., Dupont, G., Grilheres, B., Mombrun, Y., and Saval, A. (2008). 
Weblab : An integration infrastructure to ease the development of multimedia processing applications. ICSSEA. 6 [Goujon, 2002] Goujon, B. (2002). Annotation d’événements dans les textes pour la veille stratégique. Event (London). 53 [Grishman et al., 2002a] Grishman, R., Huttunen, S., and Yangarber, R. (2002a). Information extraction for enhanced access to disease outbreak reports. Journal of biomedical informatics, 35(4) :236–46. 53 [Grishman et al., 2002b] Grishman, R., Huttunen, S., and Yangarber, R. (2002b). Real-time event extraction for infectious disease outbreaks. Proceedings of the second international conference on Human Language Technology Research -, pages 366–369. 48 [Grishman and Sundheim, 1996] Grishman, R. and Sundheim, B. (1996). Message understanding conference-6 : a brief history. In Proceedings of the 16th conference on Computational linguistics - Volume 1, pages 466–471, Morristown, NJ, USA. Association for Computational Linguistics. 41 [Gruber, 1993] Gruber, T. R. (1993). A translation approach to portable ontology specifications. Knowl. Acquis., 5(2) :199–220. 21 [Guarino, 1998] Guarino, N. (1998). Formal ontology and information systems. In Proceedings of Formal Ontology in Information System, pages 3–15. IOS Press. 22 [Haase et al., 2008] Haase, P., Lewen, H., Studer, R., Tran, D. T., Erdmann, M., d’Aquin, M., and Motta, E. (2008). The NeOn Ontology Engineering Toolkit. In WWW 2008 Developers Track. 25 [Habib and van Keulen, 2011] Habib, M. B. and van Keulen, M. (2011). Improving named entity disambiguation by iteratively enhancing certainty of extraction. Technical Report TR-CTIT-11-29, Centre for Telematics and Information Technology University of Twente, Enschede. 136 [Hall et al., 2009] Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., and Witten, I. H. (2009). The WEKA data mining software : an update. SIGKDD Explor. Newsl., 11(1) :10–18. 51 [Hasegawa et al., 2004] Hasegawa, T., Sekine, S., and Grishman, R. (2004). Discovering relations among named entities from large corpora. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, ACL ’04, Stroudsburg, PA, USA. Association for Computational Linguistics. 47 [Hayes, 1995] Hayes, P. (1995). A catalog of temporal theories. Technical report, University of Illinois. Tech report UIUC-BI-AI-96-01. 33 [Hecking, 2003] Hecking, M. (2003). Information extraction from battlefield reports. Proceedings of the 8thInternational Command and Control Research and Technology Symposium (ICCRTS). 53 [Higginbotham et al., 2000] Higginbotham, J., Pianesi, F., and Varzi, A. (2000). Speaking of Events. Oxford University Press. 26 [Hobbs et al., 1997] Hobbs, J. R., Appelt, D. E., Bear, J., Israel, D. J., Kameyama, M., Stickel, M. E., and Tyson, M. (1997). FASTUS : A Cascaded Finite-State Transducer for Extracting Information from Natural-Language Text. CoRR, cmp-lg/9705013. 44 [Hobbs and Riloff, 2010] Hobbs, J. R. and Riloff, E. (2010). Information extraction. In Handbook of Natural Language Processing, Second Edition. CRC Press, Taylor and Francis Group, Boca Raton, FL. 40, 43 179 Copyright c 2013 - CASSIDIAN - All rights reservedBibliographie [Hogenboom et al., 2011] Hogenboom, F., Frasincar, F., Kaymak, U., and de Jong, F. (2011). An Overview of Event Extraction from Text. In van Erp, M., van Hage, W. 
Normalisation et Apprentissage de Transductions d'Arbres en Mots

Grégoire Laurence. Normalisation et Apprentissage de Transductions d'Arbres en Mots. Databases. Université des Sciences et Technologie de Lille - Lille I, 2014. French. HAL Id : tel-01053084, https://tel.archives-ouvertes.fr/tel-01053084, Submitted on 29 Jul 2014.

Université des Sciences et Technologies de Lille – Lille 1
Département de formation doctorale en informatique, École doctorale SPI Lille, UFR IEEA

Normalisation et Apprentissage de Transductions d'Arbres en Mots

THÈSE présentée et soutenue publiquement le 4 juin 2014 pour l'obtention du Doctorat de l'Université des Sciences et Technologies de Lille (spécialité informatique) par Grégoire Laurence.

Composition du jury
Président : Olivier Carton (Olivier.Carton@liafa.univ-paris-diderot.fr)
Rapporteurs : Olivier Carton (Olivier.Carton@liafa.univ-paris-diderot.fr), Marie-Pierre Béal (beal@univ-mlv.fr)
Directeur de thèse : Joachim Niehren (joachim.niehren@lifl.fr)
Co-Encadreur de thèse : Aurélien Lemay (aurelien.lemay@univ-lille3.fr)

Laboratoire d'Informatique Fondamentale de Lille — UMR USTL/CNRS 8022, INRIA Lille - Nord Europe

Résumé

Le stockage et la gestion de données sont des questions centrales en informatique. La structuration sous forme d'arbres est devenue la norme (XML, JSON). Pour en assurer la pérennité et l'échange efficace des données, il est nécessaire d'identifier de nouveaux mécanismes de transformations automatisables. Nous nous concentrons sur l'étude de transformations d'arbres en mots représentées par des machines à états finis. Nous définissons les transducteurs séquentiels d'arbres en mots ne pouvant utiliser qu'une et unique fois chaque nœud de l'arbre d'entrée pour décider de la production.
En réduisant le problème d'équivalence des transducteurs séquentiels à celui des morphismes appliqués à des grammaires algébriques (Plandowski, 95), nous prouvons qu'il est décidable en temps polynomial. Cette thèse introduit la notion de transducteur travailleur, forme normalisée des transducteurs séquentiels, cherchant à produire la sortie le « plus tôt possible » dans la transduction. À l'aide d'un algorithme de normalisation et de minimisation, nous prouvons qu'il existe un représentant canonique, unique transducteur travailleur minimal, pour chaque transduction de notre classe. La décision de l'existence d'un transducteur séquentiel représentant un échantillon, i.e. des paires d'entrées et de sorties d'une transformation, est prouvée NP-difficile. Nous proposons un algorithme d'apprentissage produisant à partir d'un échantillon le transducteur canonique le représentant, ou échouant, le tout en restant polynomial. Cet algorithme se base sur des techniques d'inférence grammaticale et sur l'adaptation du théorème de Myhill-Nerode.

Titre : Normalisation et Apprentissage de Transductions d'Arbres en Mots

Abstract

Storage, management and sharing of data are central issues in computer science. Structuring data in trees has become a standard (XML, JSON). To ensure preservation and quick exchange of data, one must identify new mechanisms to automatize such transformations. We focus on the study of tree-to-words transformations represented by finite state machines. We define sequential tree-to-words transducers, that use each node of the input tree exactly once to produce an output. Using a reduction to the equivalence problem of morphisms applied to context-free grammars (Plandowski, 95), we prove that equivalence of sequential transducers is decidable in polynomial time. We introduce the concept of earliest transducer, a normal form of sequential transducers, which aims to produce output "as soon as possible" during the transduction. Using normalization and minimization algorithms, we prove the existence of a canonical transducer, unique, minimal and earliest, for each transduction of our class. Deciding the existence of a transducer representing a sample, i.e. pairs of input and output of a transformation, is proved NP-hard. Thus, we propose a learning algorithm that generates a canonical transducer from a sample, or fails, while remaining polynomial. This algorithm is based on grammatical inference techniques and the adaptation of a Myhill-Nerode theorem.

Title : Normalization and Learning of Tree to Words Transductions

Remerciements

Cet espace me permettant de remercier toutes les personnes m'ayant aidé à effectuer, rédiger, soutenir et fêter cette thèse, je tiens tout d'abord à saluer le travail effectué par Marie-Pierre Béal et Olivier Carton, qui ont dû rapporter et être jury de cette thèse. Je les remercie d'avoir eu le courage de relire l'intégralité de ce manuscrit, des retours qu'ils m'en ont faits, et de l'intérêt qu'ils ont porté à mon travail. Cette thèse n'aurait jamais eu lieu sans la présence de mes encadrants, tout particulièrement Joachim Niehren qui m'a permis d'intégrer cette équipe dès mon stage de recherche, et a lancé les travaux qui mènent à ce que vous tenez maintenant entre vos mains. Merci à Joachim, Aurélien Lemay, Slawek Staworko et Marc Tommasi de m'avoir suivi pendant cette (longue) épreuve, de m'avoir guidé, soutenu, et aidé à mener cette thèse pour arriver à ce résultat qui, je l'espère, vous fait autant plaisir qu'à moi.
Tout n’a pas toujours été facile, je n’ai pas toujours été aussi investi qu’il aurait fallut, mais vous avez toujours réussi à me remettre sur la route, et permis d’arriver à cette étape. Je ne peut remercier mes encadrants sans penser à l’intégralité des membres de l’équipe Mostrare (links, magnet, et même avant) qui m’ont permis de passer ces quelques années dans un parfait environnement de travail (mais pas que). Ayant plus ou moins intégré cette équipe depuis mon stage de maitrise (grâce à Isabelle Tellier et Marc), il me serait difficile d’énumérer ici l’intégralité des membres que j’y ai croisé, et ce que chacun m’a apporté. Je m’adonne quand même à ce petit exercice pour l’ensemble des doctorants qui ont partagé cette position de thésard dans l’équipe : Jérôme, Olivier, Edouard, Benoît, Antoine, Tom, Jean, Adrien, Radu, et Guillaume (même si il n’était pas doctorant). Ces années ont également été occupées par mes différents postes dans l’enseignements à Lille 3 ou encore Lille 1 et je remercie l’ensemble des enseignants qui m’ont aidés et accompagné durant cette tâche, tout particulièrement Alain Taquet avec qui j’ai partagé mes premiers enseignements. Même si il n’ont pas à proprement contribué à ce que contient cette thèse (enfin ça dépend lesquels), malgré eux ils y sont pour beaucoup, je parle bien entendu de mes amis et ma famille. Je n’en ferait pas non plus la liste ici, n’en déplaise à certains, mais je remercie tout particulièrement mes parents, ma soeur pour m’avoir supporter et permis d’en arriver là, mon parrain et ma marraine pour leur présence et l’intérêt tout particulier qu’ils ont portés à ma thèse. Merci à toute ma famille et mes amis d’avoir été présents avant, pendant, et je l’espère encore longtemps après cette thèse, d’avoir réussi à me lafaire oubliée parfois. Une pensée toute particulière à Sébatien Lemaguer qui a eu la lourde tâche de relire l’intégralité de cette thèse (sauf ces remerciements) à la recherche de fautes (trop nombreuses). Je ne pouvait pas finir ces remerciement sans parler de mon amie, amour, Manon, a qui cette thèse appartient au moins autant qu’a moi, support inconditionnel dans l’ombre sans qui je n’aurai surement pas réussi à terminer cette thèse. Elle a vécue au moins autant que moi toute cette épreuve, jusqu’à son dernier moment, et m’a permis de malgré tout mes périodes de doutes d’aboutir à ce résultat. Merci à toi, et merci à vous tous.Table des matières Introduction 1 1 Automates 11 1.1 Mots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 1.1.1 Mots et langages . . . . . . . . . . . . . . . . . . . . . . . . 11 1.1.2 Grammaires . . . . . . . . . . . . . . . . . . . . . . . . . . 14 1.1.3 Automates . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 1.2 Arbres d’arité bornée . . . . . . . . . . . . . . . . . . . . . . . . . 18 1.2.1 Définition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 1.2.2 Automates d’arbres . . . . . . . . . . . . . . . . . . . . . . 21 1.3 Arbres d’arité non-bornée . . . . . . . . . . . . . . . . . . . . . . . 25 1.3.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 1.3.2 Comment définir les automates ? . . . . . . . . . . . . . . 25 1.3.3 Codage binaire curryfié (ascendant) . . . . . . . . . . . . 26 1.3.4 Codage binaire frère-fils (descendant) . . . . . . . . . . . 29 1.4 Mots imbriqués . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 1.4.1 Définition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
1.4.2 Linéarisation d'arbres d'arité non-bornée
1.4.3 Automates de mots imbriqués
1.4.4 Automate descendant
2 Transducteurs
2.1 Transducteurs de mots
2.1.1 Transducteurs rationnels
2.1.2 Transducteurs déterministes
2.1.3 Transducteurs déterministes avec anticipation
2.2 Transducteurs d'arbres d'arité bornée
2.2.1 Transducteurs descendants
2.2.2 Transducteurs ascendants
2.2.3 Transducteurs avec anticipation
2.2.4 Macro-transducteurs descendants
2.3 Transducteurs d'arbres en mots
2.3.1 Transducteurs descendants
2.3.2 Transducteurs ascendants
2.4 Transducteurs d'arbres d'arité non bornée
2.4.1 Transducteurs de mots imbriqués en mots
2.4.2 Transducteurs de mots imbriqués en mots imbriqués
3 Transformations XML
3.1 XSLT
3.1.1 XSLT en Pratique
3.2 Transducteurs et xslt
3.2.1 dST2W
3.2.2 Macro-Transducteurs et xpath
4 Équivalence de transducteurs séquentiels d'arbres en mots
4.1 Relation avec l'équivalence de morphismes sur CFGs
4.1.1 Exécution d'un dNW2W
4.1.2 Arbre syntaxique étendu
4.2 Relation entre dB2W et dT2W
5 Normalisation et minimisation des transducteurs descendants d'arbres en mots
5.1 dST2W travailleur
5.2 Caractérisation sémantique
5.2.1 Approche naïve
5.2.2 Décompositions et résiduels
5.2.3 Transducteur canonique
5.3 Minimisation
5.3.1 Minimisation des edST2Ws
5.3.2 Minimisation de dST2Ws arbitraires
5.4 Normalisation
5.4.1 Réduction de langages
5.4.2 Faire traverser un mot dans un langage
5.4.3 Déplacement de gauche à droite
5.4.4 Algorithme de normalisation
5.4.5 Bornes exponentielles
6 Apprentissage de transducteurs descendants d'arbres en mots
6.1 Théorème de Myhill-Nerode
6.2 Consistence des dST2Ws
6.3 Modèle d'apprentissage
6.4 Algorithme d'apprentissage
6.5 Décompositions, résiduels et équivalence
6.6 Echantillon caractéristique
7 Conclusion
7.1 Perspectives
Bibliographie

Introduction

De tout temps, le stockage, la gestion et l'échange de données sont des questions centrales en informatique, plus encore de nos jours, la quantité de données devenant de plus en plus importante et partagée entre différents services. La structuration des données participe énormément à ces défis. Elle permet par exemple d'intégrer une certaine sémantique absolument nécessaire pour l'échange, et d'effectuer plus efficacement des traitements comme des requêtes ou des transformations. Les arbres de données sont une telle représentation structurée des informations. Mon travail de thèse est une contribution à l'étude de ces arbres de données et particulièrement de leurs transformations. Une question centrale que j'aborde consiste en la définition d'algorithmes d'apprentissage de ces programmes de transformations, que je représente sous la forme de machines à états finis : les transducteurs.

Arbres de données

Les arbres de données permettent d'apporter une structure à des données textuelles. Ces arbres sont composés de nœuds étiquetés par des symboles issus d'un alphabet fini. Les valeurs textuelles se retrouvent au niveau du feuillage de l'arbre ou dans ses nœuds internes sous forme d'attributs. Elles sont composées à partir d'un alphabet fini, comme par exemple celui de l'ASCII ou de l'Unicode, mais ne sont pas bornées en taille. Certains modèles existants, tels que le xml (recommandation du W3C) ou JSON (JavaScript Object Notation), cherchent à homogénéiser le format des données des documents structurés sous forme d'arbre en proposant une syntaxe permettant de les représenter. Que ce soit sous forme d'arbres d'arité non bornée, ordonnés, associés à des champs textuels et des enregistrements pour le xml, ou d'arbres d'arité bornée et d'ensembles d'enregistrements non bornés ni ordonnés pour ce qui est de JSON, le but reste de spécifier une syntaxe en limitant le moins possible la sémantique que l'utilisateur souhaite associer à la structure. Certains formats de données qui en découlent limitent le choix des étiquettes de nœuds utilisables dans la structure. Ces étiquettes (les balises) doivent appartenir à un ensemble fini et fixé et portent chacune une sémantique forte. C'est le cas des fichiers html ou des méta-données contenues dans certains documents issus de logiciels de traitement de texte. Les arbres de données peuvent se trouver également dans les bases de données, ou encore dans le « Cloud computing », où, même s'ils partagent parfois des syntaxes proches, ils peuvent être utilisés dans des domaines totalement différents avec des sémantiques qui leur sont propres. Que ce soit pour l'échange, la mise en relation ou la sélection de données, il devient de plus en plus nécessaire, au-delà des modifications automatiques de texte, comme pouvaient le permettre des programmes tels que Perl, d'identifier de nouveaux mécanismes de transformation basés sur la structure. Il est surtout intéressant de voir jusqu'où les méta-données contenues dans la structure même jouent un rôle dans le choix des transformations à effectuer.

Transformations

Pour cela, il faut dans un premier temps se demander quel type de transformation nous intéresse.
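Pour fixer la notion d'arbre de données évoquée ci-dessus avant d'aborder les transformations, voici une esquisse purement illustrative (exemple et bibliothèques standard Python, hors thèse) qui sérialise une même information structurée en arbre sous les deux syntaxes mentionnées, xml et JSON :

import json
import xml.etree.ElementTree as ET

# Nœuds internes étiquetés sur un alphabet fini, valeurs textuelles aux feuilles.
contact = ET.Element("contact")
identite = ET.SubElement(contact, "identite")
ET.SubElement(identite, "nom").text = "Laurence"
ET.SubElement(identite, "prenom").text = "Grégoire"

print(ET.tostring(contact, encoding="unicode"))
# <contact><identite><nom>Laurence</nom><prenom>Grégoire</prenom></identite></contact>

print(json.dumps({"contact": {"identite": {"nom": "Laurence", "prenom": "Grégoire"}}},
                 ensure_ascii=False))
# {"contact": {"identite": {"nom": "Laurence", "prenom": "Grégoire"}}}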
Il existe déjà plusieurs moyens de transformer des arbres de données, que ce soit pour une tâche spécifique ou, comme le permet le langage xslt, une représentation des transformations dans le sens le plus large du terme. Le langage de programmation xslt (Clark, 1999), pour « eXtensible Stylesheet Language Transformations », est un langage défini au sein de la recommandation xsl du consortium W3C. Il permet de transformer des arbres de données, représentés par un ou plusieurs xml, en données de différents types, xml, textuels ou même binaires. Il est lui-même représenté sous la forme d'un arbre de données dont certaines étiquettes de nœuds spécifient les fonctions de transformation à appliquer. Cette application passe par la sélection de nœuds à transformer à l'aide de requêtes xpath (Clark et DeRose, 1999). Ces requêtes permettent la sélection de nœuds dans un arbre de données à l'aide d'opérations primitives de déplacement dans la filiation de l'arbre. Une grande expressivité se faisant souvent au détriment de la complexité, xslt est un langage Turing-complet (Kepser, 2004; Onder et Bayram, 2006). Pour pouvoir l'étudier théoriquement et se diriger vers un possible apprentissage, il est nécessaire d'en introduire des sous-classes, moins expressives. L'intérêt d'un arbre de données étant de structurer des informations pour en simplifier la recherche et l'utilisation, il n'est pas surprenant qu'une des fonctions les plus utilisées dans le langage xslt soit la fonction « xsl:value-of », retournant l'ensemble des valeurs textuelles contenues dans les feuilles des sous-arbres traités. Par exemple, comme l'illustre la figure 1, cherchons à récupérer le nom et le prénom du premier contact contenu dans un carnet d'adresses sous format xml, pour un futur affichage dans un document html. Cela donne en xslt l'appel d'une balise « xsl:value-of » (une esquisse en est donnée plus bas).

(Figure 1 – Exemple de transformation xslt basique : un arbre xml contacts → contact → identite → { nom : Laurence ; prenom : Grégoire }, adresse, ⋯, transformé en la chaîne « Laurence Grégoire ».)

Seule la structure de l'arbre importe ici pour décider du traitement à effectuer sur les feuilles. De ce fait, les valeurs textuelles pourraient être abstraites sous la forme de balises dédiées ne servant qu'à les identifier, et permettre leur réutilisation dans la sortie. Toutefois, la structure interne n'est pas toujours suffisante pour décider du traitement à effectuer. Les valeurs textuelles sont parfois des éléments centraux d'une transformation. Les opérations telles que les jointures utilisent ces valeurs textuelles comme repères pour regrouper certaines informations. Dès lors elles font partie intégrante de la sélection des nœuds à manipuler et ne peuvent plus être ignorées. Qu'en est-il de la sortie ? Il n'est pas toujours intéressant de garder la structure de l'entrée. Comme l'illustre la transformation représentée dans la figure 1, il est parfois nécessaire de se concentrer sur la concaténation de champs textuels sélectionnés. Même si une structure est parfois nécessaire, il reste tout à fait possible de la représenter sous format xml, une chaîne composée de balises ouvrantes et fermantes représentant un arbre de données. Cette liberté dans la représentation de la sortie se fait au détriment du contrôle de la structure de sortie, qui n'est plus reconnue en tant qu'arbre de données, empêchant ainsi son utilisation directe en entrée d'une autre transformation. Il n'est plus possible dès lors de composer plusieurs transformations sans pré-traitements sur les données intermédiaires.
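L'esquisse suivante illustre l'appel « xsl:value-of » évoqué ci-dessus pour la figure 1 (reconstruction purement illustrative, ne reprenant pas la feuille de style originale de la thèse ; la bibliothèque Python lxml est ici supposée disponible) :

from lxml import etree

carnet = etree.XML("""
<contacts>
  <contact>
    <identite><nom>Laurence</nom><prenom>Grégoire</prenom></identite>
    <adresse>...</adresse>
  </contact>
</contacts>""")

feuille = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <!-- concaténation des valeurs textuelles sélectionnées par xpath -->
    <xsl:value-of select="contacts/contact[1]/identite/nom"/>
    <xsl:text> </xsl:text>
    <xsl:value-of select="contacts/contact[1]/identite/prenom"/>
  </xsl:template>
</xsl:stylesheet>""")

print(str(etree.XSLT(feuille)(carnet)))   # affiche : Laurence Grégoire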
Nous décidons dans cette thèse de nous concentrer sur les transformations d'arbres en mots qui permettent la concaténation dans la sortie. Cela représente une simplification des transformations d'arbres de données à arbres de données. Il n'est dès lors plus possible d'exprimer les opérations de jointures. Nous perdons également les propriétés de composition, la structure d'entrée n'étant plus présente dans la production. Cela nous permet de nous concentrer sur la possibilité de manipuler les chaînes de sortie, ce qui nous intéresse ici.

Motivation générale

Notre but reste l'automatisation de la tâche de transformation. Cela passe par un apprentissage de la transformation que l'on souhaite effectuer. L'apprentissage considéré ici revient à chercher à identifier une cible, appartenant à une classe de langages connue, à partir d'exemples (et de possibles contre-exemples). Il nous reste donc à identifier formellement une classe de transformations d'arbres en mots, à choisir le modèle d'apprentissage que nous souhaitons appliquer, et les exemples à partir desquels apprendre. L'apprentissage que nous souhaitons utiliser se base sur l'inférence grammaticale (Gold, 1967) en cherchant à identifier à la limite un langage cohérent avec les exemples d'entrée. Le plus souvent cette technique se divise en deux étapes : une représentation des exemples dans un formalisme les regroupant, et la généralisation de ce modèle pour en déduire un langage, souvent plus large que celui représenté par les exemples, mais restant cohérent avec cet échantillon. Pour ce qui est des exemples, la transformation d'arbres en mots peut être vue comme un ensemble de paires composées d'un arbre et de la chaîne de mots résultant de la transformation de cet arbre. Il est connu que, pour apprendre un langage de mots régulier, ne considérer que des exemples positifs n'est pas suffisant en inférence grammaticale (Gold, 1967). Il est nécessaire d'avoir des contre-exemples ou d'autres éléments permettant d'éviter une sur-généralisation. Qu'en est-il de nos transformations ? Une des propriétés permettant de contrôler cette possible sur-généralisation est le fait qu'une transformation doit rester fonctionnelle. Nous devons pour cela nous assurer qu'une entrée ne puisse être transformée qu'en au plus un résultat. Le deuxième contrôle peut être fait à l'aide du domaine d'entrée, le langage d'arbres sur lequel la transformation peut s'effectuer. Il existe pour cela de nombreuses possibilités, que ce soit à l'aide d'exemples négatifs ou encore par la connaissance directe du domaine, donné en entrée de l'apprentissage. L'apprentissage de langages d'arbres étant un problème connu et ayant déjà été résolu par inférence grammaticale (Oncina et Garcia, 1992), nous opterons pour la deuxième solution, en supposant que le domaine nous est déjà donné. Une fois le type d'exemples et d'apprentissage choisi, il nous reste à choisir quel modèle utiliser pour représenter notre transformation, à vérifier que ce modèle dispose des propriétés nécessaires pour un apprentissage, et enfin à définir l'algorithme d'apprentissage à proprement parler. Nous choisissons d'utiliser les transducteurs, machines à états finis permettant d'évaluer une entrée en produisant la sortie associée. Nous souhaitons donc que toute transformation de la classe qui nous intéresse soit définissable par la classe de transducteurs choisie, ce qu'il reste à prouver.
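Pour illustrer la forme des exemples et la contrainte de fonctionnalité évoquées ci-dessus, l'esquisse suivante (hypothétique, hors thèse ; les arbres y sont simplement codés par des termes sous forme de chaînes) vérifie qu'aucun arbre de l'échantillon n'est associé à deux sorties différentes :

# Un échantillon : des paires (arbre d'entrée, mot de sortie).
def est_fonctionnel(echantillon):
    sorties = {}
    for arbre, mot in echantillon:
        if sorties.setdefault(arbre, mot) != mot:
            return False          # deux sorties distinctes pour la même entrée
    return True

echantillon = [("f(a,a)", "ac"), ("f(g(a),a)", "abcac"), ("f(a,a)", "ac")]
print(est_fonctionnel(echantillon))                          # True
print(est_fonctionnel(echantillon + [("f(a,a)", "bc")]))     # False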
Nous voulons également que la cible d'un apprentissage soit unique, qu'à chaque transformation soit associé un transducteur canonique la représentant. Cela repose sur la normalisation du transducteur, pour en homogénéiser le contenu, et sur sa minimisation, pour en assurer l'unicité. Il est donc nécessaire d'introduire une classe de transducteurs permettant de représenter les transformations d'arbres en mots et disposant d'une forme normale. La cible de l'apprentissage d'une transformation sera le transducteur canonique, unique et minimal, de cette classe la représentant. Il est donc nécessaire de définir un théorème de type Myhill-Nerode (Nerode, 1958) pour ce modèle, assurant, entre autres, l'existence d'un transducteur canonique pour chaque transformation de notre classe. L'algorithme d'apprentissage en lui-même se résume à la décomposition de la transformation représentée par les exemples, pour en déduire les états du transducteur cible. Cela repose sur la possibilité de tester efficacement l'équivalence de fragments de la transformation pour identifier les fragments communs. Il reste à spécifier le modèle de transducteurs à l'aide duquel nous souhaitons modéliser notre transformation.

Transducteurs

Que ce soit sur les mots ou les arbres, de nombreux modèles de transducteurs ont déjà été proposés et étudiés dans le domaine. Nous nous intéressons particulièrement aux modèles déterministes, ne permettant qu'une exécution possible pour une donnée d'entrée fixée, qui, en plus d'assurer la fonctionnalité de la transformation représentée, simplifient la décidabilité de problèmes importants. Avant de nous concentrer sur un modèle permettant de transformer des arbres en mots, nous pouvons évoquer deux classes de transducteurs permettant respectivement de gérer les mots et les arbres. Les transducteurs sous-séquentiels, transducteurs déterministes de mots introduits par Schützenberger (1975), transforment, lettre par lettre, un mot d'entrée en sa sortie. Cette production est obtenue par concaténation de toutes les chaînes produites par le transducteur. Pour ce qui est des arbres, les transducteurs déterministes d'arbres d'arité bornée, basés sur les travaux de Rounds (1968) et Thatcher (1970), permettent, en parcourant un arbre de sa racine à ses feuilles, de produire un arbre. Chaque règle de ces transducteurs associe
Des algorithmes d’apprentissages ont été proposés pour chacune de ces classes, que ce soit l’algorithme OSTIA (Oncina et al., 1993) pour les transducteurs sous-séquentiels, ou plus récemment sur les transducteurs d’arbres en arbres (Lemay et al., 2010). Pour ce qui est des transformations d’arbres en mots que nous cherchons à représenter, nous nous dirigeons sur une machine proche des transducteurs d’arbres, du moins sur le traitement de l’entrée, mais produisant cette foisci des mots. Les règles produisent maintenant, pour un noeud et ses fils, la concaténation de mots et du résultat de transduction des fils. Les problèmes théoriques, tel que l’équivalence et l’existence de représentants canoniques, sont ouverts pour cette classe de transductions. Il est également possible de pousser vers d’autres modèles de transducteurs plus expressifs tels que les macro transducteurs d’arbres (Engelfriet, 1980), plus proche des transformations de type XSLT que les modèles évoqués pré- cédemment. Mais ce modèle autorisant également des opérations telles que la concaténation, il n’est pas intéressant d’attaquer cette classe alors que des classes strictement moins expressives, et utilisant le même type d’opérateurs, n’ont toujours pas de résultats de décidabilité sur les problèmes théoriques. Maintenant que nous avons décidé du type de transformation qui nous intéresse, d’arbres en mots, ainsi que la manière de les représenter, transducteurs déterministes d’arbres en mots, nous pouvons nous concentrer sur l’apprentissage en lui-même.7 Contributions Dans cette thèse, nous définissons et étudions les transducteurs séquentiels déterministes descendants d’arbres d’arité bornée en mots. Être séquentiel signifie que chaque règle de ces transducteurs associe à un noeud une production obtenue par concaténation de mots et de l’application d’une transduction sur chacun de ces fils. Chaque fils doit être utilisé une et une seule fois, et ce dans leur ordre d’apparition dans l’entrée. Les règles d’un transducteur séquentiel sont de la forme : ⟨q, f(x1, . . . , xk)⟩ → u0 ⋅ ⟨q1, x1⟩ ⋅ . . .⟨qk, xk⟩uk qui spécifie que la transformation d’un arbre f(t1, . . . , tk) à partir d’un état q sera le résultat de la concaténation des chaînes ui et des résultats de transduction des sous arbres ti à partir des états qi respectifs. Le fait d’interdire la réutilisation et le réordonnancement des fils dans la production de la sortie est une simplification du modèle assez conséquente. Mais comme nous le verrons par la suite, cette restriction est nécessaire pour décider efficacement de problèmes théoriques cruciaux dans l’élaboration d’un apprentissage. Le problème d’équivalence de ce modèle est par exemple connu décidable pour la famille plus générale de transduction. Toute fois, il ne permet pas de disposer d’une solution efficace. Équivalence efficace En réduisant le problème d’équivalence à celui de l’équivalence de morphismes appliqués à des grammaires algébriques, prouvé décidable en temps polynomial dans la taille des morphismes et de la grammaire (Plandowski, 1995), nous prouvons que l’équivalence des transducteurs séquentiels est décidable également en temps polynomial dans la taille du transducteur considéré. Cette réduction se base sur la définition d’une grammaire représentant l’exécution parallèle de transducteurs séquentiels sur lesquels sont définis deux morphismes recomposant la sortie de transducteurs séquentiels respectifs. Toute la sortie est contenue dans les règles représentées sur cette exécution. 
Malgré des expressivités foncièrement différentes, les transducteurs d'arbres en mots ascendants, descendants, et les transducteurs de mots imbriqués en mots partagent des représentations d'exécutions similaires, l'ordre de l'entrée y étant toujours conservé. Cette similarité permet de mettre en relation ces principales classes de transductions d'arbres en mots en étendant le résultat de décidabilité de l'équivalence à chacune d'entre elles (Staworko et al., 2009).

Normalisation

Par la suite (Laurence et al., 2011), nous introduisons une forme normale pour les transducteurs séquentiels d'arbres en mots, cherchant à produire la sortie le plus tôt possible dans le transducteur. Cette notion de « plus tôt » pouvant être perçue de plusieurs manières, nous allons l'illustrer à l'aide d'un court exemple.

Exemple 1. Prenons un transducteur séquentiel M composé de deux états q0 et q1, d'un axiome initial (qui permettrait de produire de la sortie avant même de commencer la lecture de l'arbre d'entrée, mais n'ajoute ici aucune sortie autour de q0), et des règles suivantes :

⟨q0, f(x1, x2)⟩ → ⟨q1, x1⟩ ⋅ ac ⋅ ⟨q1, x2⟩,
⟨q1, g(x1)⟩ → ⟨q1, x1⟩ ⋅ abc,
⟨q1, a⟩ → ε.

Nous pouvons représenter son exécution par l'annotation de l'arbre transformé, la sortie s'obtenant par la concaténation des mots produits en suivant le parcours en profondeur, comme l'illustre le premier arbre de la figure 2.

(Figure 2 – Exemple d'exécution d'un transducteur séquentiel et sa version normalisée : l'arbre d'entrée f(g(g(a)), g(a)) y est annoté, nœud par nœud, par les mots produits par le transducteur initial puis par sa version normalisée.)

Nous choisissons comme normalisation de produire la sortie le plus haut possible dans l'arbre. Par exemple, si l'arbre a pour racine f, nous savons directement que sa sortie débutera par un « a » et se terminera par un « c ». Il nous faut donc remonter ces deux lettres, respectivement à gauche et à droite, en les faisant passer à travers la transduction des sous-arbres. Une fois que la sortie se trouve le plus haut possible, pour assurer un seul résultat pour la normalisation d'un arbre, nous avons décidé de forcer la sortie à être
En arrivant à représenter ces modifications à l’aide d’un langage d’opérations sur les mots, pouvant être directement appliquées aux états d’un transducteur, nous avons mis en place une normalisation efficace et locale d’un transducteur séquentiel d’arbres en mots. Nous identifions également les bornes sur la taille d’un transducteur sé- quentiel normalisé en fonction de la taille du transducteur de base, pouvant parfois atteindre une taille double exponentielle dans celle de la source. Apprentissage Nous aboutissons, en nous basant sur les précédents résultats, sur l’apprentissage des transducteurs séquentiels d’arbres en mots (Laurence et al., 2014). Pour cela, il faut tout d’abord identifier les transformations apprenables (i.e. représentables par des transducteurs séquentiels). Pour cela, on cherche à retrouver directement la structure des transducteurs dans une transformation. Cette restructuration passe par la décomposition de la sortie par rapport à l’entrée, l’introduction de sous transformations associées aux chemins de l’arbre d’entrée, ainsi que l’instauration de classes d’équivalence regroupant ces résiduels. À partir de cela, nous pouvons introduire des propriétés sur les transformations assurant une représentation possible sous la forme d’un transducteur séquentiel, puis prouver ces résultats par l’adaptation du théorème de Myhill-Nerode à notre classe de transduction.10 Introduction Il reste à adapter cette même approche pour l’apprentissage, non plus à partir d’une transformation complète en notre possession, mais à partir d’un échantillon d’exemples et d’un automate représentant son domaine. Nous limitant au modèle séquentiel, toute transformation et tout échantillon d’exemples n’est pas représentable par un transducteur séquentiel. Notre algorithme doit pouvoir échouer si les exemples ne peuvent être représentés par un transducteur séquentiel. En adaptant chaque étape, de la décomposition à l’identifi- cation des classes d’équivalences, l’algorithme s’assure en temps polynomial de l’existence d’un transducteur séquentiel cohérent avec l’entrée, et produit le transducteur canonique correspondant, ou le cas échéant échoue. À l’aide de la notion d’échantillon caractéristique, nous prouvons cependant que cet algorithme n’échoue pas dans les cas où l’entrée est suffisante pour aboutir à un transducteur cible représenté par cet échantillon. Plan de thèse Après avoir rappelé les différentes notions nécessaires à la compréhension de cette thèse, nous rappellerons également les différents modèles d’automates permettant de reconnaître des langages d’arbres, ces modèles étant la base des transducteurs qui nous intéressent. Nous y présenterons également les diffé- rents résultats tels que la décidabilité de l’équivalence ou encore l’apprentissage, ce qui nous permettra à travers des modèles plus simples d’illustrer les approches que nous appliquerons par la suite sur les transducteurs. Le deuxième chapitre nous permettra, à travers un survol des différentes classes de transductions existantes, de montrer les différents résultats existant dans le domaine, et où se placent les modèles que nous étudieront dans cet ensemble. Nous profiterons du troisième chapitre pour montrer une des facettes pratiques de la transformation à travers le langage xslt, qui permet de représenter et effectuer la transformation de fichiers xml sous différents formats. Les trois derniers chapitres présenteront les contributions de cette thèse. 
Le premier chapitre se concentre sur le problème d'équivalence, en améliorant le résultat de décidabilité pour les transducteurs d'arbres en mots. La réduction qui y est introduite permettra également d'apporter un autre regard sur les différentes classes de transductions d'arbres en mots évoquées. Les deux derniers chapitres présenteront les résultats centraux de cette thèse, à savoir la normalisation, la minimisation et l'apprentissage des transducteurs séquentiels.

Chapitre 1 Automates

Avant de pouvoir parler de transducteurs, il est indispensable de poser les bases sur lesquelles ils se construisent. Après avoir défini formellement les différents modèles de données manipulés dans cette thèse, nous nous attarderons sur les automates, structures de base à partir desquelles les transducteurs sont définis. Pour cela, nous partirons des automates de mots, pour ensuite survoler les différents types d'automates permettant la manipulation d'arbres.

1.1 Mots

1.1.1 Mots et langages

Un alphabet est un ensemble fini et non vide de symboles. La taille d'un alphabet Σ est le nombre de symboles qu'il contient, notée ∣Σ∣. Un mot est une séquence possiblement vide de symboles appartenant à un alphabet Σ. On représente par ε le mot vide. L'étoile de Kleene est la clôture réflexive et transitive d'un langage, notée Σ∗ pour l'alphabet Σ ; Σ∗ est l'ensemble de tous les mots définissables à partir de l'alphabet Σ. On note u1 ⋅ u2 la concaténation de deux mots u1 et u2. La taille d'un mot u est le nombre de symboles qui le composent, notée ∣u∣. Les mots sont liés entre eux par une relation d'ordre total.

HAL Id : tel-00987630 – https://tel.archives-ouvertes.fr/tel-00987630 – Submitted on 6 May 2014

Thèse de Doctorat
Université Pierre et Marie Curie – Paris 6
Spécialité SYSTÈMES INFORMATIQUES
présentée par Mme Nada SBIHI pour obtenir le grade de Docteur de l'université Pierre et Marie Curie – Paris 6

Gestion du trafic dans les réseaux orientés contenus

Jury
James ROBERTS, Directeur de thèse, Chercheur, INRIA et SystemX
Serge FDIDA, Directeur de thèse, Professeur, UPMC Sorbonne universités – Paris 6
Leila SAIDANE, Rapporteur, Professeur, ENSI – Tunisie
Walid DABBOUS, Rapporteur, Chercheur, INRIA Sophia Antipolis
Sébastien TIXEUIL, Examinateur, Professeur, UPMC Sorbonne universités – Paris 6
Diego PERINO, Examinateur, Chercheur, Bell Labs – Alcatel-Lucent
Gwendal SIMON, Examinateur, Maître de conférences, Telecom Bretagne

Remerciements

Je tiens tout particulièrement à remercier mon encadrant de thèse M. Jim Roberts de m'avoir donné la chance de poursuivre mes études. Son suivi et ses précieux conseils m'ont apporté beaucoup d'aide durant ma thèse. Son expérience est une source inépuisable dont j'ai pu apprendre grâce à son encadrement et sa générosité. Je remercie également M. Serge Fdida d'avoir accepté de diriger ma thèse ainsi que Mme Leila Saidane et M. Walid Dabbous, mes rapporteurs, pour leurs remarques et suggestions sur la version préliminaire de ce document.
Je suis reconnaissante `a M.S´ebastien Tixeuil, M. Diego Perino et M. Gwendal Simon pour avoir accept´e de m’honorer de leur participation au jury de cette th`ese. Je tiens `a remercier M. Philippe Robert de m’avoir acceuilli au sein de l’´equipe R´eseaux, algorithmes et Probabilit´es(RAP) `a l’INRIA. Je suis tr`es reconnaissante `a Mme. Christine Fricker pour son aide, sa g´en´erosit´e et sa gentillesse. Ton savoir scientifique, ta p´edagogie et ta modestie m’ont donn´e beaucoup d’espoirs. Je remercie ´egalement les anciens et nouveaux membres de l’´equipe Rap et particuli`erement Virginie Collette. Un grand merci `a mes parents pour leur encouragement et Otmane, mon ´epoux pour son soutien, petit clin d’oeil particulier `a mon fils Amine. Merci `a ma famille, amies, et coll´egues. 3Resum ´ e´ Les r´eseaux orient´es contenus (CCN) ont ´et´e cr´e´es afin d’optimiser les ressources r´eseau et assurer une plus grande s´ecurit´e. Le design et l’impl´ementation de cette architecture est encore `a ces d´ebuts. Ce travail de th`ese pr´esente des propositions pour la gestion de trafic dans les r´eseaux du future. Il est n´ecessaire d’ajouter des m´ecanismes de contrˆole concernant le partage de la bande passante entre flots. Le contrˆole de trafic est n´ecessaire pour assurer un temps de latence faible pour les flux de streaming vid´eo ou audio, et pour partager ´equitablement la bande passante entre flux ´elastiques. Nous proposons un m´ecanisme d’Interest Discard pour les r´eseaux CCN afin d ?optimiser l’utilisation de la bande passante. Les CCN favorisant l’utilisation de plusieurs sources pour t´el´echarger un contenu, nous ´etudions les performances des Multipaths/ Multisources ; on remarque alors que leurs performances d´ependent des performances de caches. Dans la deuxi`eme partie de cette th`ese, nous ´evaluons les performances de caches en utilisant une approximation simple et pr´ecise pour les caches LRU. Les performances des caches d´ependent fortement de la popularit´e des objets et de la taille des catalogues. Ainsi, Nous avons ´evalu´e les performances des caches en utilisant des popularit´es et des catalogues repr´esentant les donn´ees r´eelles ´echang´ees sur Internet. Aussi, nous avons observ´e que les tailles de caches doivent ˆetre tr`es grandes pour assurer une r´eduction significative de la bande passante ; ce qui pourrait ˆetre contraignant pour l’impl´ementation des caches dans les routeurs. Nous pensons que la distribution des caches devrait r´epondre `a un compromis bande passante/m´emoire ; la distribution adopt´ee devrait r´ealiser un coˆut minimum. Pour ce faire, nous ´evaluons les diff´erences de coˆut entre architectures. 4Abstract Content Centric Network (CCN) architecture has been designed to optimize network resources and ensure greater security. The design and the implementation of this architecture are only in its beginning. This work has made some proposals in traffic management related to the internet of the future. We argue that it is necessary to supplement CCN with mechanisms enabling controlled sharing of network bandwidth by competitive flows. Traf- fic control is necessary to ensure low latency for conversational and streaming flows, and to realize satisfactory bandwidth sharing between elastic flows. These objectives can be realized using ”per-flow bandwidth sharing”. As the bandwidth sharing algorithms in the IP architecture are not completely satisfactory, we proposed the Interest Discard as a new technique for CCN. 
We tested some of the mechanisms using CCNx prototype software and simulations. In evaluating the performance of multi-paths we noted the role of cache performance in the choice of selected paths. In the second part, we evaluate the performance of caches using a simple approximation for LRU cache performance that proves highly accurate. As cache performance heavily depends on populations and catalog sizes, we evaluate their performance using popularity and catalogs representing the current Internet exchanges. Considering alpha values, we observe that the cache size should be very large, which can be restrictive for cache implementation in routers. We believe that the distribution of caches on an architecture creates an excessive bandwidth consumption. Then, it is important to determine a tradeoff bandwidth/memory to determine how we should size caches and where we should place them; this amounts to evaluating differences, in cost, between architectures.

Table des matières

Introduction générale
I La gestion du trafic dans les réseaux orientés contenus
1 Introduction
  1.1 Problématique
  1.2 Etat de l'art
    1.2.1 Contrôle du trafic au coeur du réseau
    1.2.2 Protocole de transport
    1.2.3 Multipath et multisource
2 Partage de bande passante
  2.1 Identification des flots
  2.2 Caches et files d'attente
  2.3 Le principe du flow aware networking
    2.3.1 Partage des ressources dans les CCN
    2.3.2 Contrôle de surcharge
3 Mécanismes pour CCN
  3.1 Mécanismes pour les utilisateurs
    3.1.1 Détection des rejets
    3.1.2 Protocole de transport
  3.2 Mécanismes pour opérateurs
    3.2.1 Motivation économique
    3.2.2 Interest Discard
4 Stratégies d'acheminement
  4.1 Multicast
  4.2 Multisources
    4.2.1 Protocole de transport Multipath
    4.2.2 Performance de MPTCP
    4.2.3 CCN et routage multipath
    4.2.4 Performances des multipaths
5 Simulations et expérimentations
  5.1 Simulations
  5.2 Expérimentations
    5.2.1 Fair sharing
    5.2.2 Interest discard
    5.2.3 Scénarios et résultats
6 Conclusion
II Performances des caches
7 Introduction
  7.1 Problématique
  7.2 Etat de l'art
  7.3 Contributions
8 Mesure du trafic et performances des caches
  8.1 Mesure du trafic
    8.1.1 Types de contenu
    8.1.2 La taille des contenus et des objets
    8.1.3 Distribution de la popularité
  8.2 Le taux de hit d'un cache LRU
    8.2.1 Independent Reference Model
    8.2.2 Les modèles analytiques
    8.2.3 La formule de Che
  8.3 Autres politiques de remplacement
    8.3.1 Le cache Random
      8.3.1.1 Relation entre taux de hit et temps moyen de séjour
      8.3.1.2 Approximation de Fricker
      8.3.1.3 Approximation de Gallo
    8.3.2 Le cache LFU
    8.3.3 Comparaison des politiques de remplacement
9 Les performances des hiérarchies de caches
  9.1 Caractéristiques d'une hiérarchie de caches
    9.1.1 Politique de remplacement
    9.1.2 Les politiques de meta-caching
    9.1.3 Les politiques de forwarding
  9.2 Performances des hiérarchies de caches
    9.2.1 Généralisation de la formule de Che
    9.2.2 Cas d'application : hiérarchie de caches avec mix de flux
10 Conclusion
III Coûts d'une hiérarchie de caches
11 Introduction
  11.1 Problématique
  11.2 Etat de l'art
12 Coût d'une hiérarchie de caches à deux niveaux
  12.1 Différence des coûts entre structures
  12.2 Estimation numérique des coûts
    12.2.1 Coût normalisé
    12.2.2 Solution optimale en fonction du taux de hit global
  12.3 Exemple : coûts des torrents
13 Coopération de caches
  13.1 Load sharing
  13.2 Caches coopératifs
  13.3 Routage orienté contenu
14 Hiérarchies alternatives
  14.1 Coût d'une hiérarchie de caches à plusieurs niveaux
    14.1.1 Evaluation des coûts
    14.1.2 Coopération de caches
  14.2 Coûts et politiques de remplacement
    14.2.1 Politique LFU
    14.2.2 Politique Random
15 Conclusion
Conclusion générale

Introduction générale

Les réseaux orientés contenus

L'idée de passer d'un réseau à connexion point à point à un réseau orienté contenu date de plus d'une décennie. En effet, le projet TRIAD (http://gregorio.stanford.edu/triad/) proposait déjà une architecture ICN. Cependant, peu de travaux ont été construits sur la base de ce projet, sans doute parce que les contenus à cette époque n'avaient pas le poids énorme qu'ils prennent actuellement dans le trafic Internet. Quelques années après, d'autres propositions commencent à éclairer la recherche dans le domaine des réseaux orientés contenu. DONA, une architecture orientée données, a été proposée en 2007 par Koponen et al. [1]. Elle se base sur l'utilisation de noms autocertifiés et incorpore des fonctionnalités de caching. Plus récemment, l'architecture CCN a attiré l'attention de la communauté scientifique, alertée par la croissance énorme du trafic de distribution de contenus et le succès des CDNs proposant des services payants aux fournisseurs de contenus. Les CDNs utilisent d'une manière intelligente les chemins menant aux contenus et implantent des caches partout dans le monde afin d'accélérer les téléchargements. Akamai reste le leader mondial dans ce domaine. Les opérateurs ne peuvent pas rester passifs dans cet univers des contenus et sont obligés d'envisager une mise à niveau de l'Internet afin de réduire les coûts engendrés par l'augmentation rapide du trafic de contenus, notamment de vidéos. L'architecture CCN vient au bon moment et suscite beaucoup d'intérêt de la communauté scientifique, d'autant plus que celui qui propose cette architecture n'est autre que Van Jacobson, très connu pour des contributions marquantes au développement d'IP [2]. Plusieurs autres projets d'architecture orientée contenu ont été proposés, dont 4WARD/SAIL [3] et PSIRP/PURSUIT [4]. Ghodsi et al. [5] ont comparé les différentes propositions de réseaux ICN et ont dressé les points de divergence et les points communs entre ces architectures. Ils déplorent aussi le peu de remise en question de l'utilisation des ICN en comparaison avec une solution d'évolution des CDNs. Les architectures ICN présentent plusieurs points communs de design. Les échanges sur ces architectures sont basés sur le modèle publish/subscribe : le nom de l'objet est publié, et un objet est demandé par l'envoi de paquets Interest. Les caches ICN sont utilisés pour tout type de données et de protocole. Ils sont ouverts à tout utilisateur, qui peut ainsi déposer ses propres contenus dans les caches. Tous les noeuds du réseau utilisent un cache localisé au niveau routeur et les contenus sont sécurisés indépendamment de leur emplacement. C'est le serveur d'origine qui signe les contenus, et les utilisateurs peuvent vérifier les contenus grâce à leur signature publique. Les architectures diffèrent sur certains points. Certains designs proposent la sécurisation des contenus avec un mécanisme d'auto-certification.
Dans ce cas, les contenus portent des noms qui n’ont pas de signification ´evidente pour l’homme (un code `a plusieurs caract`eres par exemple). La s´ecurit´e des contenus est assur´ee par la signature des objets. L’utilisateur peut v´erifier l’authenticit´e de l’objet en validant sa signature au moyen d’une clef publique r´ecup´er´ee grˆace aux PKI. Le routage orient´e contenus est aussi un point de divergence entre architectures, certaines architectures pr´econisent le routage orient´e contenus bas´e sur BGP alors que d’autres proposent leur propre mod`ele de routage. Dans tous les cas, ce domaine reste encore mal explor´e et pose des probl`emes s´erieux de passage `a l’´echelle. CCN Van Jacobson propose la nouvelle architecture d’Internet orient´ee contenu nomm´ee CCN (Content-centric networking) [2]. Cette architecture permettrait une recherche directe d’un contenu sans avoir `a identifier son d´etenteur, comme dans le cas des r´eseaux IP. En effet, dans les r´eseaux IP les donn´ees sont recherch´ees par leur localisation et non pas par leur nom. Un utilisateur cherchant une donn´ee doit avoir une information sur l’adresse IP de la machine contenant cette information pour pouvoir la r´ecup´erer. Ce fonctionnement pose plusieurs probl`emes de s´ecurit´e, de disponibilit´e et de complexit´e des processus de r´ecup´eration de donn´ees. L’architecture CCN a pour objectif de simplifier les processus de recherche. Un utilisateur cherche une donn´ee `a travers son nom. D´es lors que la requˆete est lanc´ee, une demande sous forme d’un paquet dit “Interest” est envoy´ee `a son routeur d’acc`es. Si la donn´ee n’est pas pr´esente dans le content store de ce routeur, la requˆete se propage au fur et `a mesure dans le r´eseau. Une fois la donn´ee trouv´ee, elle suit le chemin inverse de la requˆete de recherche jusqu’`a l’utilisateur final et sera stock´ee dans un “Content Store” dans les routeurs CCN interm´ediaires. Cette architecture offre plusieurs possibilit´es de disponibilit´e ind´ependamment de l’adresse d’une machine. La s´ecurit´e est associ´ee directement aux donn´ees et pas aux “conteneurs” ( liens, routeurs, serveurs,...) ce qui permet d’ajuster de mani`ere tr`es flexible le niveau de s´ecurit´e `a la nature du contenu en question. Plus int´eressant encore, les contenus ne sont plus associ´es `a des conteneurs pr´ecis mais peuvent ˆetre dupliqu´es `a volont´e et stock´es notamment dans des m´emoires caches au sein du r´eseau. Les contenus sont divis´es en “chunks”, chaque chunk ayant typiquement la taille d’un paquet IP. CCN respecte le d´eroulement logique d’une requˆete : un utilisateur demande une donn´ee en ´emettant des paquets de type “Interest” et re¸coit en retour des paquets de donn´ees de type “Data”. A chaque paquet Interest correspond un seul paquet Data et12 B Provider P1 S2 Interests U2 S1 of Data U1 Provider P2 A C source emitter of Figure 1 – Un segment du r´eseau CCN reliant un utilisateur U1 `a une source S1 chaque paquet Data correspond `a un Chunk. La figure 1 repr´esente un segment d’un r´eseau CCN. Pour r´ecup´erer des donn´ees du fournisseur P2, l’usager U1 envoie des paquets “Interest” pour le contenu demand´e au travers des routeurs A et B. Supposant que les Content Stores de A et B ne contiennent pas le document demand´e, les paquets Data suivent le chemin inverse de S1 vers U1 en passant par B et A. 
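Le trajet aller des paquets Interest et le retour des paquets Data sur le chemin inverse, avec copie dans les Content Stores traversés, peuvent se schématiser par l'esquisse Python suivante, volontairement simplifiée (une seule route, pas de PIT ni de FIB ; les noms Noeud et demander sont des choix d'illustration) :

class Noeud:
    def __init__(self, nom, suivant=None, catalogue=None):
        self.nom = nom
        self.suivant = suivant                    # prochain noeud vers la source
        self.content_store = dict(catalogue or {})

    def demander(self, nom_objet):
        """Fait suivre l'Interest vers la source ; le Data revient par le chemin inverse."""
        if nom_objet in self.content_store:
            return self.content_store[nom_objet]
        data = self.suivant.demander(nom_objet)   # propagation de l'Interest
        self.content_store[nom_objet] = data      # copie au retour du paquet Data
        return data

# U1 -> A -> B -> S1 : la source S1 détient le contenu "video/chunk0".
s1 = Noeud("S1", catalogue={"video/chunk0": "<data>"})
b = Noeud("B", suivant=s1)
a = Noeud("A", suivant=b)
print(a.demander("video/chunk0"))         # premier téléchargement : l'Interest remonte jusqu'à S1
print("video/chunk0" in b.content_store)  # True : le Data a été copié dans le Content Store de B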
Pour g´erer l’envoi des donn´ees, trois types de m´emoires sont utilis´ees au niveau de chaque noeud : • Content Store : Dans cette base de donn´ees sont stock´es des objets physiquement. • Pending Interest Table (PIT) : Dans cette table sont stock´es les noms des donn´ees et les interfaces demandeuses correspondantes. • FIB : par analogie avec IP, le FIB dans CCN stocke les pr´efixes des noms des donn´ees ainsi que les prochaines interfaces `a emprunter pour arriver `a la donn´ee. Cette table est enrichie `a travers les d´eclarations de d´etention de donn´ees par les noeuds du r´eseau. Dans CCN il n’y a nul besoin d’acheminer les adresses source et destination `a travers le r´eseau pour r´ecup´erer la donn´ee. Le format des paquets Interests et Data est explicit´e `a la figure 2. Selector Nonce Content Name Signature Content Name Signed Info Data Interest Packet Data Packet Figure 2 – Format des paquets CCN13 A la r´eception d’un Interest par un noeud, ce dernier v´erifie si le chunk demand´e existe dans son Content Store. Si c’est le cas, le paquet Data sera envoy´e `a l’interface demandeuse. Sinon le chunk demand´e sera recherch´e dans le PIT. S’il est trouv´e, l’interface demandeuse sera rajout´ee au PIT. Si les deux bases de donn´ees ne fournissent aucune information, on cherchera dans le FIB si une entr´ee matche avec le chunk recherch´e. Alors le paquet Interest sera achemin´e vers les interfaces conduisantes `a la donn´ee. La table PIT sera mise `a jour avec une nouvelle entr´ee pour le chunk en question. A la r´eception d’un paquet Data par un noeud, une recherche est effectu´ee dans le Content Store. Si une entr´ee matche, alors le paquet re¸cu est supprim´e, car ceci implique que le chunk est d´ej`a livr´e `a toutes les interfaces demandeuses. Sinon la donn´ee sera recherch´ee dans le PIT. Si une entr´ee matche avec la donn´ee re¸cue, elle sera achemin´ee vers les interfaces demandeuses. Le chunk sera typiquement stock´e en mˆeme temps dans le Content Store. image video 1,0 1 video ...... ...... file 0 0 0 0 video fichier 3 2 1 image video 1 2 image data ...... A ....... C Interest[video] FIB(A) Content Store(C) Interest[image] Interest[video] B D FIB(B) PIT(B) Interest[image] Content Store(D) Figure 3 – La recherche des donn´ees dans CCN Dans l’exemple de la figure 3, le noeud A cherche les chunks “video” et “image”. Le FIB du noeud A indique que les paquets Interests doivent ˆetre achemin´es vers l’interface 0 et 1 pour l’image, et vers l’interface 1 pour la vid´eo. A la r´eception de l’Interest demandant la vid´eo par le noeud B, ce dernier ignore l’Interest re¸cu car le PIT contient d´ej`a une entr´ee. Cette entr´ee est mise `a jour. Cependant, quand le noeud B re¸coit l’Interest demandant l’image, il l’envoie `a l’interface 1 indiqu´ee par le FIB. Le noeud D, par la suite, achemine la donn´ee vers le noeud A. Cette donn´ee sera stock´ee dans tous les Content Stores des noeuds l’ayant re¸cu ou transit´e. Le s´equencement des paquets est ´etabli grˆace aux noms des chunks. Ces derniers sont organis´es d’une fa¸con hi´erarchique. Ainsi, pour demander un segment il faut indiquer le14 nom hi´erarchique du chunk demand´e dans le paquet Interest. Le FIB d´etermine l’interface de sortie ad´equate grˆace `a un algorithme “longest prefixe match”. La gestion du trafic La gestion du trafic consiste `a contrˆoler le partage de la bande passante des liens afin d’assurer la qualit´e de service des diverses applications. 
Le partage ´equitable de la bande passante entre flots r´epond aux exigences des flux sensibles `a d´ebit faible et prot`ege les flux adaptatifs des flux gourmands. La gestion du trafic dans les r´eseaux IP a fait l’objet de plusieurs travaux de recherches, mais jusqu’`a pr´esent aucune solution enti`erement satisfaisante n’a ´et´e trouv´ee. Jacobson et al. proposent un mod`ele CCN bas´e sur un ´echange Interest/Data : pour recevoir un paquet Data, l’utilisateur devrait envoyer un paquet Interest. Ils assurent que ce m´ecanisme r´ealise une bonne gestion de trafic. Ceci est insuffisant en r´ealit´e. Tous les utilisateurs n’utilisent pas un protocole de transport unique ; ce dernier peut ˆetre modifi´e par l’utilisateur final. Il est alors n´ecessaire de d´efinir des m´ecanismes pour g´erer le partage de la bande passante. La d´etection de rejets par num´eros de s´equences des paquets n’est plus applicable sur CCN ; l’utilisation des multipaths et multichemins change l’ordre des paquets, et le timeout ne suffit pas pour assurer de bonnes performances du protocole de transport. D’autre part, dans CCN, l’utilisateur ne peut plus utiliser un protocole de transport multipath car il ignore les destinations possibles de ces paquets. En plus des diff´erences entre CCN et IP en termes de mod`ele d’´echange de donn´ees, les CCN impl´ementent des caches au niveau de chaque routeur. Ainsi, l’utilisation des caches r´eduit la consommation de bande passante, et les d´elais de transmission. Contributions de la th`ese Le rapport est structur´e comme suit : • Gestion du trafic dans les r´eseaux orient´es contenus : Nous proposons des m´ecanismes pour g´erer les flots dans CCN, en identifiant les flots par le nom d’objet, et en adaptant le Fair queuing `a CCN. Nous proposons un m´ecanisme Interest Discard pour prot´eger les op´erateurs des rejets de Data, un nouveau mod`ele de tarification, une d´etection rapide des rejets pour contrˆoler les fenˆetres des protocoles de transport, et nous ´evaluons les performances des Multipaths. Nous ´evaluons les m´ecanismes propos´es par simulation et exp´erimentation. • Performance des caches : Nous ´evaluons les types des contenus, leur taille, et leur loi de popularit´e. Nous utilisons la formule de Che comme mod`ele analytique fiable pour mesurer les taux de hit des caches LRU, nous adaptons cette formule au cas Random,15 nous ´evaluons le taux de hit d’une hi´erarchie `a deux niveaux, et nous proposons une diff´erentiation de stockage de donn´ees pour am´eliorer la probabilit´e de hit. • Coˆuts des hi´erarchies de caches : Nous cherchons un optimum pour un tradeoff bande passante/m´emoire, nous effectuons une estimation des couts, nous appliquons les calculs `a un cas r´eel de donn´ees de type torrents et nous comparons les performances d’une hi´erarchie coop´erative et non-coop´erative. Le travail de th`ese a contribu´e `a la r´edaction des articles suivants : • S. Oueslati, J. Roberts, and N. Sbihi, Flow-Aware traffic control for a content-centric network, in Proc of IEEE Infocom 2012. • C. Fricker, P. Robert, J. Roberts, and N. Sbihi, Impact of traffic mix on caching performance in a content-centric network, in IEEE NOMEN 2012, Workshop on Emerging Design Choices in Name-Oriented Networking • J. Roberts, N. 
Sbihi, Exploring the Memory-Bandwidth Tradeoff in an InformationCentric Network, International Teletraffic Congress, ITC 25, Shanghai, 2013.Premi`ere partie La gestion du trafic dans les r´eseaux orient´es contenus 16Chapitre 1 Introduction 1.1 Probl´ematique Jacobson et al [2] remettent en cause l’architecture TCP/IP en constatant qu’il est n´ecessaire de mettre en place une nouvelle architecture Internet r´epondant mieux `a la croissance exponentielle de la demande pour des contenus de tous types. Con¸cu au d´ebut pour des ´echanges simples, l’Internet devient en effet le moyen incontournable pour consulter les sites du Web, ´echanger vid´eos et audios, et partager des fichiers. Son utilisation ayant ´evolu´e, son architecture devrait suivre. Plusieurs projets sont consacr´es `a la conception du r´eseau du futur. La proposition CCN de Jacobson et al. [2] est une des plus cr´edibles et a fait l’object de nombreuses ´etudes. Nous avons constat´e cependant que la d´efinition de cette nouvelle architecture reste incompl`ete, notamment en ce qui concerne la gestion du trafic. L’objet de la pr´esente partie du rapport est de d´ecrire nos ´etudes sur cette question. Le contrˆole de trafic est essentiel afin d’assurer aux flots audios et vid´eos des d´elais faibles mˆeme quand ils partagent la bande passante avec des flots de donn´ees `a haut d´ebit. Il importe ´egalement d’empˆecher d’´eventuels “flots gourmands” d’alt´erer la qualit´e des autres flots en s’accaparant une part excessive de la bande passante des liens qu’ils partagent. Nous pouvons diviser les axes de recherches concernant la gestion du trafic en 3 volets : • Le partage de bande passante. Dans CCN comment faut il partager la bande passante entre flots et comment identifier un flot quand les adresses IP ne sont plus utilis´ees dans les paquets ? Comment exploiter la particularit´e des r´eseaux CCN, dite de “flow balance”, o`u les paquets Data suivent exactement le chemin inverse des paquets Interest. • Le protocole de transport. La conception d’un protocole de transport sous CCN est plus compliqu´e qu’en IP. En effet, nous ne pouvons plus prendre en compte la succession des num´eros de s´equence des paquets comme garantie de non congestion, car, sous CCN mˆeme sans congestion les paquets ne se suivent pas forc´ement. Un flot ne 1718 1.2. ETAT DE L’ART se dirige pas vers une destination particuli`ere et unique. Les paquets Data peuvent venir de n’importe quelle source de donn´ees, y compris des caches, et peuvent suivre des chemins diff´erents. • Multipath et multisource. Sous CCN le t´el´echargement des objets de sources multiples devient une opportunit´e int´eressante pour augmenter les d´ebits. Cette multiplicit´e a l’avantage de pouvoir am´eliorer les d´ebits de t´el´echargment. Cependant, il y a ´egalement un inconv´enient car, les paquets empruntant plusieurs chemins diff´erents, l’opportunit´e d’exploiter les caches devient plus faible. 1.2 Etat de l’art Lorsque nous avons fait les ´etudes pr´esent´ees dans cette partie, `a partir du Chapitre 2, il n’y avait pas de publications qui abordaient les questions mentionn´ees ci-dessus. Cet ´etat de fait a chang´e depuis et dans la pr´esente section nous ´evoquons quelques articles dont les propositions peuvent mettre en cause nos choix. 1.2.1 Contrˆole du trafic au coeur du r´eseau Nous proposons, dans le chapitre 2, la mise en oeuvre dans les files d’attente des routeurs d’un ordonnancement de type Deficit Round Robin (DRR) afin d’assurer l’´equit´e du partage de bande passante. 
Nous associons à cet ordonnancement un mécanisme « Interest discard » de rejet sélectif de paquets Interest dans la direction inverse. D'autres ont proposé d'assurer l'équité du partage en ordonnançant plutôt les flots de paquets Interest dans un mécanisme dit « Interest shaping ». Dans ce cas, les paquets Data sont acheminés dans une simple file FIFO. Dans [6], Fdida et al. décrivent un mécanisme d'Interest shaping implémenté au niveau de chaque routeur CCN. La file de transmission des paquets Data est observée et un seuil de remplissage est fixé. Le débit des paquets Interest est régulé suivant le débit des paquets Data et en tenant compte de la différence entre la taille de la file d'attente et le seuil. Ainsi, les paquets Interest peuvent être retardés si le nombre de paquets Data stockés en file d'attente dépasse le seuil. Dans l'article [7], Gallo et al. proposent un mécanisme d'Interest shaping presque identique au nôtre, sauf que les paquets Interest subissent un retard d'envoi en cas de congestion afin de limiter leur débit. Des files virtuelles accueillent les paquets Interest selon l'objet recherché et un compteur est attribué à chaque file. Les mécanismes d'Interest shaping peuvent provoquer des pertes de débit dans certains scénarios. Dans l'exemple de la figure 1.1, si on applique l'Interest shaping sur tout le trafic, on va retarder les paquets Interest indépendamment de leur appartenance aux flots. On va alors transmettre plus de paquets Interest du flot 2 que du flot 1. Sachant que le flot 2 sur le lien suivant perd en bande passante puisque la capacité de ce lien est inférieure à la capacité du lien emprunté par le flot 1, cette situation particulière entraîne une perte en capacité.

Figure 1.1 – Exemple illustrant la perte en débit en utilisant le shaping sans classification par flot

Il nous semble que notre choix d'agir directement sur le flux de données en imposant l'équité par ordonnancement DRR est plus robuste. L'Interest discard n'intervient alors que comme mécanisme de contrôle secondaire permettant au routeur de ne pas demander inutilement des paquets Data qui ne peuvent pas être acheminés à cause de la congestion sur la voie descendante. Il est notable aussi que le mécanisme d'Interest discard s'intègre naturellement avec l'ordonnancement DRR dans la même « line card ».

1.2.2 Protocole de transport

Dans notre proposition, le protocole de transport n'est pas critique car le contrôle d'équité du partage est réalisé par ordonnancement DRR. D'autres ont proposé plutôt de maintenir de simples files FIFO en se fiant au protocole de transport pour le contrôle d'équité. Dans IP, deux moyens permettent de détecter un rejet de paquet : les numéros de séquence des paquets s'ils ne se suivent pas, ou le timeout si le délai d'acquittement dépasse un seuil. Dans CCN, les paquets Data, même s'ils ne subissent aucun rejet, n'arrivent pas forcément dans le bon ordre puisqu'ils peuvent être reçus de plusieurs sources différentes. D'autre part, le timeout calculé sur la base du RTT change tout le temps et ceci pourrait conduire à des timeouts inopinés. Gallo et al. [8] définissent un protocole de transport dénommé ICP (Interest Control Protocol). Ils proposent la détection de rejet par timeout, mais en mettant à jour régulièrement le seuil de temporisation selon un historique.
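À titre purement illustratif, et sans préjuger du calcul exact retenu par ICP, un seuil de temporisation mis à jour à partir d'un historique de RTT peut s'esquisser en Python à la manière de l'estimateur SRTT/RTTVAR classique ; les paramètres alpha, beta et la valeur initiale sont arbitraires :

class SeuilTemporisation:
    """Esquisse d'un seuil de temporisation adaptatif fondé sur l'historique des RTT."""
    def __init__(self, alpha=0.125, beta=0.25, seuil_initial=1.0):
        self.alpha, self.beta = alpha, beta
        self.srtt = None          # RTT lissé
        self.rttvar = None        # variation lissée du RTT
        self.seuil = seuil_initial

    def mise_a_jour(self, rtt):
        # Appelée à chaque paquet Data reçu, avec le RTT mesuré.
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        self.seuil = self.srtt + 4 * self.rttvar
        return self.seuil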
CCN dans ses premi`eres versions proposait un timeout fixe ce qui sanctionnait lourdement les flots en n´ecessitant une attente excessive avant de d´etecter les rejets. Vu que les RTTs peuvent varier dans CCN non seulement `a cause de la congestion, mais, du fait qu’on peut r´ecup´erer les objets de plusieurs sources, cette d´etection par timeout semble difficile `a mettre en oeuvre. Dans notre approche, d´etaill´ee plus loin, nous proposons un m´ecanisme de notification explicite de congestion qui permettrait une d´etection rapide de congestion sans besoin de timeout. 1.2.3 Multipath et multisource Dans CCN, un utilisateur ne peut pas connaitre d’avance les chemins emprunt´es par les paquets, car les paquets Interest ne contiennent que le nom de l’objet et le num´ero du20 1.2. ETAT DE L’ART paquet demand´e, sans aucune pr´ecision sur la destination. La difficult´e alors de d´etecter ou de s´eparer les chemins rend l’utilisation des protocoles de type MPTCP (multipath TCP) quasiment impossible. De ce fait, on ne peut pas utiliser une fenˆetre par chemin car l’utilisateur n’a aucun moyen d’acheminer ses paquets en suivant un chemin pr´ecis. Ce sont les routeurs qui choisissent le chemin d’un paquet Interest. Gallo et al. [7] proposent un couplage entre protocole de transport multipath et un m´ecanisme de forwarding au niveau des routeurs. Ils adaptent le protocole ICP au cas multipath en collectant les RTT venant de diff´erents chemins. En effet, un paquet Interest ne peut pas “savoir” `a l’avance vers quel chemin il serait achemin´e, mais, une fois achemin´e par le r´eseau, un paquet Data “connait” surement le chemin qu’il a emprunt´e. Le r´ecepteur peut donc identifier et enregistrer les chemins emprunt´es par les paquets. Ceci permet notamment d’estimer le RTT de chaque chemin. Cette approche nous paraˆıt lourde `a mettre en oeuvre. Dans notre proposition, on envisage l’utilisation de multipaths mais l’imposition d’un ordonnancement DRR par flot rend inefficace les m´ecanismes de coop´eration `a la MPTCP. Nous envisageons donc d’autres m´ecanismes pour ´eviter les ´ecueils d’un contrˆole de congestion ind´ependant par chemin. Cependant, nous notons ´egalement que le gain de d´ebit dˆu `a l’utilisation simultan´ee de plusieurs chemins n’est r´ealisable que par des flots ne subissant pas de limitation au niveau de l’acc`es. Cette observation nous m`ene `a croire qu’un seul chemin, judicieusement choisi, peut largement suffire dans la grande majorit´e des cas. Par ailleurs, Rossini et al. [9] identifie un ph´enom`ene de “pollution” des caches caus´e par l’utilisation des multipaths. Leurs r´esultats montrent des taux de hit nettement inf´erieurs en cas d’utilisation de chemins multiples, dˆu `a l’´eparpillement des copies de paquets dans les diff´erents caches.Chapitre 2 Partage de bande passante L’utilisation des caches au sein des CCN contribue `a l’am´elioration des d´elais de t´el´echargement et r´eduit les probl`emes de congestion. Mais la demande en trafic est en augmentation permanente, et il reste n´ecessaire d’utiliser des m´ecanismes de gestion du trafic. En effet, le contrˆole de congestion est n´ecessaire pour assurer des d´elais n´egligeables pour les flots voix et vid´eos, et pour ´eviter une consommation abusive de la bande passante par des flots agressifs. Des travaux ant´erieurs sur le r´eseau IP sugg`erent que le partage de bande passante par flot impos´e par un ordonnancement dans les routeurs est une solution efficace. 
Il réalise en effet une différentiation de service implicite et assure de bonnes performances. Les flots dans les CCN peuvent être identifiés selon l'objet recherché et il est ainsi possible d'appliquer une politique de congestion orientée flot. Pour gérer le trafic et éviter la congestion, nous adoptons donc une politique « flow-aware » où les actions de gestion tiennent compte de l'appartenance des paquets à des flots. Nous pensons que ceci est préférable à un contrôle de congestion où la performance dépend d'un protocole mis en oeuvre dans l'équipement des utilisateurs, comme dans le réseau IP actuel. En effet, rien ne garantit l'utilisation fidèle de ce protocole. De plus, plusieurs variantes de TCP voient le jour, le rendant de plus en plus agressif vis-à-vis des versions antérieures.

2.1 Identification des flots

Dans IP, un flot est défini par les adresses IP et les numéros de port ; cela correspond typiquement à une requête précise et spécifique. Dans un CCN, au niveau de l'interface réseau, on ne peut acheminer qu'une seule demande pour le même objet. Un CCN utilise le multicast en envoyant l'objet reçu à toutes les interfaces indiquées dans la PIT. Nous identifions un flot grâce au nom de l'objet recherché et au fait que les paquets sont observés au même point du réseau et sont rapprochés dans le temps. La figure 2.1 montre le format de l'entête des paquets en CCN. Les paquets Data et Interest portent le même nom d'objet mais, actuellement, il n'y a pas moyen de parser ce nom car c'est le « chunk name » qui identifie un paquet Data. Le champ « object name » pourrait être analysé afin d'identifier d'une façon unique les paquets correspondant à la demande d'objet, mais ceci nécessiterait une modification du code CCNx.

Figure 2.1 – Format des paquets CCN

2.2 Caches et files d'attente

La déclaration faite dans l'article de Van Jacobson que « LRU remplace FIFO » semble être inexacte. Une file d'attente ne peut jouer le rôle d'un cache, et vice versa, puisque la file d'attente devrait être de taille faible pour permettre un accès rapide à la mémoire, alors qu'un cache devrait être de grande taille même si le temps d'accès est plus lent. On démontre ceci par des exemples. On considère une politique idéale (LFU) où les objets les plus populaires sont stockés dans un cache de taille N. L'implémentation d'une telle politique est possible selon [10]. On considère des applications générant la majorité du trafic Internet comme YouTube et BitTorrent. La popularité des objets de ces applications suit la loi de Zipf de paramètre α, c'est-à-dire que la popularité du i-ème objet le plus populaire est proportionnelle à i^−α. Des observations rapportées dans les articles [11] et [12] placent ce paramètre à 0.75 environ. On considère que les objets ont tous la même taille et que le catalogue est de M objets. La probabilité de hit global h pour une politique LFU peut être exprimée par :
h = (∑_{i=1}^{N} i^−α) / (∑_{i=1}^{M} i^−α).
Si on considère un catalogue et une taille de cache assez grands, on a h ≈ (N/M)^(1−α). Pour 100 millions de vidéos Youtube de taille 4 MB [13], il faut 640 GB de mémoire pour un taux de hit de 20 %, ou 25 TB de mémoire pour un taux de hit de 50 %. Pour 400 000 torrents avec une taille moyenne de 7 GB, il faut 4 TB pour un taux de hit de 20 %, ou 175 TB de mémoire pour un taux de hit de 50 %.
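Ces ordres de grandeur se retrouvent directement en inversant l'approximation h ≈ (N/M)^(1−α) ; l'esquisse Python suivante, donnée à titre indicatif, reprend les valeurs numériques du texte (le nom memoire_necessaire est un choix d'illustration) :

def memoire_necessaire(hit_cible, M, taille_objet_GB, alpha=0.75):
    """Taille de cache (en GB) pour atteindre un taux de hit donné,
    via l'approximation h ≈ (N/M)**(1 - alpha) pour une politique LFU."""
    N = M * hit_cible ** (1.0 / (1.0 - alpha))   # nombre d'objets à stocker
    return N * taille_objet_GB

# Vidéos YouTube : M = 100 millions d'objets de 4 MB (0.004 GB), alpha = 0.75.
print(memoire_necessaire(0.2, 100e6, 0.004))   # ≈ 640 GB
print(memoire_necessaire(0.5, 100e6, 0.004))   # ≈ 25 000 GB, soit 25 TB

# Torrents : M = 400 000 objets de 7 GB.
print(memoire_necessaire(0.2, 400e3, 7))       # ≈ 4 480 GB, de l'ordre de 4 TB
print(memoire_necessaire(0.5, 400e3, 7))       # ≈ 175 000 GB, soit 175 TB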
Par ailleurs, la taille d’une file d’attente est au maximum ´egale au produit de la bande passante par le d´elai RTT [14] ; pour un lien `a 10 Gb/s et un d´elai de 200 ms, la taille d’une file d’attente est donc de l’ordre de 300 MB. Il est donc clairement n´ecessaire de distinguer deux m´emoires diff´erentes : des m´emoires cache de grande taille pour le stockage des donn´ees, et une file d’attente de taille beaucoup moins importante et avec un acc`es rapide.CHAPITRE 2. PARTAGE DE BANDE PASSANTE 23 Le contrˆole du trafic s’appuie sur la gestion des files d’attente. Il est donc important ´egalement pour cette raison de bien reconnaˆıtre la distinction entre ces files et le Content Store. 2.3 Le principe du flow aware networking Afin de prot´eger les flots sensibles audio et vid´eo, des techniques pour la gestion de la QoS comme Diffserv et Intserv ont ´et´e propos´ees. Le marquage de paquets dans Diffserv est une technique qui a ´et´e impl´ement´ee pour prioriser les flots voix ou vid´eo. Cependant, c’est une technique qui reste complexe `a mettre en oeuvre et sujette `a une possibilit´e de modification illicite du marquage. Il est bien connu par ailleurs que les approches orient´ees flots d’Intserv sont trop complexes et ne passent pas `a l’´echelle. On pourrait ´eventuellement prot´eger les flots sensibles et assurer une certaine ´equit´e dans le partage des ressources en utilisant un protocole de transport comme TCP. Cependant, l’utilisateur garde dans tous les cas une possibilit´e de modifier les impl´ementations ou bien d’utiliser des versions agressives de TCP. Nous pensons que le flow aware networking, d´eja propos´e pour les r´eseaux IP peut ˆetre adapt´e aux r´eseaux CCN. Ceci consiste `a mettre en place deux fonctionnalit´es : 1. le partage ´equitable de bande passante par flot, 2. le contrˆole de surcharge. Nous d´eveloppons ces deux points ci-dessous. 2.3.1 Partage des ressources dans les CCN Le partage ´equitable de la bande passante offre plusieurs avantages tr`es connus (voir [15], [16], [17] et [18]). Nous citons `a titre d’exemple : • les flots ´emettant des paquets `a un d´ebit inf´erieur au d´ebit ´equitable ne subissent pas de rejets de paquets. • les flot sensibles (conversationnels et streaming) ayant g´en´eralement des d´ebits faibles sont prot´eg´es et b´en´eficient d’un service de diff´erentiation implicite, • les flots agressifs ne gagnent rien en ´emettant des paquets trop rapidement car leur d´ebit ne peut d´epasser le d´ebit ´equitable. Pour une utilisation ´equitable de la bande passante, et pour prot´eger les flot sensibles, nous proposons que le r´eseau CCN assure lui-mˆeme le partage ´equitable des ressources r´eseau. Les avantages de cette technique ont ´et´e largement d´evelopp´es dans les articles pr´ecit´es ; elle peut s’adapter parfaitement aux r´eseaux CCN. Pour le partage ´equitable des ressources, nous avons revu plusieurs propositions. Nous comparons par simulations certains protocoles fair drop, et un protocole fair queuing. On consid`ere un lien `a 10 Mb/s partag´e par les flux suivants :24 2.3. LE PRINCIPE DU FLOW AWARE NETWORKING • TCP1 : facteur AIMD=1/2, RTT=10ms • TCP2 : facteur AIMD=1/2, RTT=30ms • TCP3 : facteur AIMD=1/2, RTT=50ms • CBR : flux `a d´ebit constant de 3Mb/s • Poisson : Un flux poissonien de paquets repr´esentant la superposition d’un grand nombre de flux de faible d´ebit. La figure 2.2 pr´esente les r´esultats obtenus. 
Figure 2.2 – Comparaison des protocoles fair drop et du protocole Fair queuing ((a) algorithmes fair drop : Tail Drop, Fred, Afd ; (b) Fair queuing ; débits en Mb/s des flots tcp0, tcp1, tcp2, cbr et Poisson)

Il est clair que le fair queuing est l'algorithme le plus efficace. Il a également l'avantage d'être sans paramètre. Son passage à l'échelle a été démontré, à condition d'assurer également un contrôle de surcharge. En effet, il a été démontré dans [19] que le nombre de flots ne dépasse pas quelques centaines si la charge du réseau reste inférieure à 90 %, une valeur de charge au-dessus des exigences des opérateurs. Le débit équitable pour un lien de capacité C et de charge ρ est estimé à C(1 − ρ) [20]. Dans un réseau à charge normale (ne dépassant pas 90 %), ce débit est largement suffisant pour les flots streaming et conversationnels. Le partage équitable n'est pas un objectif en soi, mais un moyen d'assurer un débit acceptable et de protéger les flots adaptatifs des flots gourmands. Ainsi, tout flot ne dépassant pas le débit équitable ne subit aucun rejet de paquet et son délai reste très faible. C'est aussi un moyen automatique d'assurer un fonctionnement normal sur Internet sans se soucier des comportements de l'utilisateur final. Notons que la possibilité de contourner l'imposition d'un partage équitable par la génération de multiples flots au lieu d'un seul est très limitée [21]. Très peu d'usagers peuvent en fait émettre du trafic à un débit plus fort que le débit équitable. Le débit pour la plupart des usagers est limité par leur débit d'accès. De plus, dans un réseau CCN où les flots sont définis par le nom de l'objet, la possibilité de démultiplier les flots est beaucoup plus limitée qu'en IP. Nous proposons l'utilisation d'un algorithme d'ordonnancement comme DRR [22], où les files par flot sont modélisées par des listes chaînées utilisant une structure nommée ActiveList. Il a été démontré que le nombre de flots dans ActiveList est limité à quelques centaines pour une charge ne dépassant pas 90 %, indépendamment de la capacité du lien C [19]. Ces résultats démontrent la « scalabilité » de l'ordonnancement DRR.

2.3.2 Contrôle de surcharge

La demande en trafic est le produit du débit d'arrivée des flots par la taille moyenne d'un flot. On dit qu'un réseau est en surcharge si la demande dépasse la capacité du lien. Dans ce cas, le réseau est instable : le nombre de flots en cours croît et leur débit devient très faible. Le partage équitable de la bande passante par flot n'est donc scalable que si la charge du lien (demande divisée par la capacité du lien) est normale. On considère la valeur maximale d'une charge normale égale à 90 %. Si la charge du réseau dépasse 90 %, le nombre de flots devient trop grand et donc lourd à gérer. Pour contrôler la charge, on pourrait mettre en place un mécanisme de contrôle d'admission. Ceci consiste à éliminer tout nouveau flot dès que la charge du réseau atteint une valeur maximale. Pour cela, il faut sauvegarder une liste des flots en cours et écarter tout nouveau flot arrivant. Cependant, pour un réseau bien dimensionné, le problème de surcharge ne se pose que dans certains cas rares, comme par exemple une panne sur un lien réseau.
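Pour fixer les idées sur l'ordonnancement DRR et l'ActiveList évoqués plus haut, en voici une esquisse simplifiée en Python (files par flot, quantum et compteurs de déficit) ; il s'agit d'une illustration du principe de DRR [22], et non de l'implémentation retenue dans les routeurs :

from collections import deque

class DRR:
    """Esquisse d'un ordonnanceur Deficit Round Robin par flot."""
    def __init__(self, quantum=1500):
        self.quantum = quantum
        self.queues = {}        # flot -> file des tailles de paquets (octets)
        self.deficit = {}       # flot -> compteur de déficit
        self.active = deque()   # ActiveList : flots ayant des paquets en attente

    def enqueue(self, flot, taille_paquet):
        if flot not in self.queues:
            self.queues[flot] = deque()
            self.deficit[flot] = 0
        if not self.queues[flot]:
            self.active.append(flot)
        self.queues[flot].append(taille_paquet)

    def servir_un_flot(self):
        """Sert le flot en tête d'ActiveList à hauteur de son déficit (un tour DRR)."""
        if not self.active:
            return []
        flot = self.active.popleft()
        self.deficit[flot] += self.quantum
        file, envoyes = self.queues[flot], []
        while file and file[0] <= self.deficit[flot]:
            taille = file.popleft()
            self.deficit[flot] -= taille
            envoyes.append((flot, taille))
        if file:
            self.active.append(flot)   # il reste des paquets : repasse en fin d'ActiveList
        else:
            self.deficit[flot] = 0     # flot devenu inactif : déficit remis à zéro
        return envoyes

drr = DRR(quantum=1500)
drr.enqueue("flot-video", 1000)
drr.enqueue("flot-data", 1500)
print(drr.servir_un_flot())   # [('flot-video', 1000)]
print(drr.servir_un_flot())   # [('flot-data', 1500)]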
Pour CCN, plutˆot qu’un contrˆole d’admission complexe et peu utilis´e, nous proposons la simple suppression des paquets d’une certaine liste de flots afin de r´eduire le niveau de charge. Cette liste pourrait ˆetre d´efinie de mani`ere arbitraire (par une fonction hash sur l’identit´e du flot) ou, si possible, de mani`ere plus s´elective en n’incluant que les flots les moins prioritaires.Chapitre 3 Mecanismes pour CCN ´ 3.1 M´ecanismes pour les utilisateurs L’usager dans un CCN doit participer `a la gestion du trafic en modulant la vitesse `a laquelle il envoie les Interests pour les chunks d’un mˆeme objet. L’utilisation du protocole de transport adaptatif dans un r´eseau CCN ´equip´e de l’ordonnancement DRR n’est pas n´ecessaire pour assurer le bon fonctionnement du r´eseau. Cependant, nous recommandons l’utilisation d’un protocole adaptatif pour ´eviter les r´e´emissions multiples, dues aux rejets Interest. 3.1.1 D´etection des rejets Nous proposons pour l’utilisateur un m´ecanisme de d´etection rapide de rejet dont voici la description. Si un paquet est rejet´e au niveau de la file d’attente en raison de sa saturation ou en raison d’une politique de gestion de la file d’attente, on envoie `a l’utilisateur l’entˆete du paquet data sans le payload. L’utilisateur, en recevant un paquet data de payload nul, sait que le paquet a ´et´e rejet´e, et donc r´eajuste la vitesse d’´emission des Interests pour ´eviter l’accumulation de r´e´emissions inutiles. Mieux encore, une modification peut ˆetre apport´ee `a l’entˆete du paquet data afin de pr´eciser qu’il correspond `a un discard. Cette technique de d´etection de rejet est particuli`erement int´eressante dans les CCN, puisqu’on ne peut pas se baser sur le s´equencement des paquets. En effet, contrairement `a TCP/IP, deux paquets qui se suivent `a l’origine ne se suivent pas forc´ement `a la r´eception, car la source d’une donn´ee n’est pas forc´ement unique, et le chemin n’est pas forc´ement le mˆeme pour tous les paquets. Actuellement, dans l’impl´ementation CCNx, la d´etection est faite uniquement en se basant sur le timeout. Ceci reste tr`es impr´ecis, et peut poser des probl`emes. A titre 26CHAPITRE 3. MECANISMES POUR CCN ´ 27 d’exemple, avec un timeout fixe, comme c’est le cas de CCNx dans ses premi`eres versions, une d´etection de rejet ne correspond pas forc´ement en fait `a un rejet, mais `a un temps de propagation et de transmission d´epassant le timeout. R´eduisant la fenˆetre dans un tel cas ne fait que d´et´eriorer les d´ebits des flots malgr´e la disponibilit´e de la capacit´e dans les liens. Un protocole de transport efficace arrive `a d´etecter rapidement les rejets. Ceci devient plus difficile lorsque le protocole de transport connecte l’utilisateur `a deux sources ou plus. Dans CCN, un utilisateur ne peut savoir `a l’avance s’il est servi par deux sources, car la seule donn´ee qu’il d´etient est le nom d’objet. De plus, toute l’architecture du r´eseau et la localisation des serveurs de donn´ees sont invisibles pour lui, ce qui est le but de CCN. L’article de Carofiglio et al. [8] apporte une r´eponse `a ce probl`eme. En utilisant un historique un peu large, on collecte des informations statistiques sur le nombre de chemins utilis´es ainsi que les RTT recens´es pendant les ´echanges, et on construit, sur la base de ces informations, un protocole de transport multipath efficace. En tous les cas, nous sommes peu favorables `a l’utilisation des multipaths et multisources. 
Les liens r´eseaux ont des capacit´es tellement grandes que le d´ebit d’un seul flot ne peut gu`ere atteindre la capacit´e du lien. Le d´ebit des flots est limit´e en fait par le d´ebit d’acc`es au r´eseau qui est faible par rapport au d´ebit des liens au coeur de r´eseau. Donc, un seul lien non surcharg´e est largement suffisant pour q’un flot r´ealise son d´ebit maximal. Par contre, nous sommes favorables `a des m´ecanismes de load balancing en cas de surcharge de certains liens, ce qui devrait ˆetre assez exceptionnel dans un r´eseau bien dimensionn´e. Avec la d´etection rapide des rejets, le paquet Interest qui a subi le discard est transform´e en paquet data sans payload, avec ´eventuellement une modification l´eg`ere de l’entˆete afin de signaler le discard. Le paquet traverse le chemin inverse en supprimant au fur et `a mesure les entr´ees correspondantes `a la PIT. A la r´eception du paquet data, l’utilisateur ou les utilisateurs corrigeront le probl`eme d´etect´e en adaptant au mieux le d´ebit d’´emission selon le protocole de transport. 3.1.2 Protocole de transport Dans le cas d’un ordonnancement fair queuing, l’utilisateur ne gagne pas en bande passante en ´etant tr`es agressif. Les r´e´emissions multiples d’Interest sont une cons´equence directe d’un protocole de transport agressif ne prenant pas en compte le feedback r´eseau. Si le fair sharing est impos´e, comme nous l’avons sugg´er´e, le protocole de transport n’est plus vu comme un outil pour r´ealiser le partage de bande passante. Le plus simple serait d’envoyer des paquets Interest `a d´ebit constant, mais dans ce cas, le r´ecepteur devrait g´erer une large liste de paquets Interest en attente. Nous proposons plutˆot un protocole AIMD (additive increase/multiplicative-decrease) comme TCP avec d´etection rapide de perte et en utilisant une fenˆetre adaptative CWND (Congestion Window). Le nombre de paquets Interest d’un flot qui transite28 3.2. MECANISMES POUR OP ´ ERATEURS ´ dans le r´eseau ne doit pas d´epasser la taille de la fenˆetre CWND. A la d´etection d’un rejet par d´etection rapide ou timeout, la fenˆetre est r´eduite suivant un facteur ; plus ce facteur est proche de 1, plus le protocole est aggressif. En absence de perte, la fenˆetre CWND croˆıt lin´eairement selon un certain taux. Encore, plus ce taux est grand, plus le protocole est aggressif. 3.2 M´ecanismes pour op´erateurs En plus de l’ordonnancement “fair queuing”, nous envisageons un m´ecanisme pr´eventif de rejet d’Interest. L’op´erateur devrait ´egalement exploiter les sources multiples de certaines donn´ees en appliquant une strat´egie d’acheminement adapt´ee. 3.2.1 Motivation ´economique Il est important de mettre en place une motivation ´economique pour encourager l’op´erateur `a d´eployer cette nouvelle architecture. Il est ´egalement important que le fournisseur de r´eseau soit r´emun´er´e pour le trafic qu’il ´ecoule. On consid`ere que le fournisseur devrait ˆetre pay´e pour les data envoy´es ; par exemple, dans la figure 3.1, l’utilisateur U1 paye le fournisseur P1 pour les data qu’il fournit, et P1 paye P2 pour les data re¸cus. Cette approche fournit bien la motivation n´ecessaire pour d´eployer un r´eseau CCN muni de caches, car le fournisseur, en utilisant des caches, ne serait pas amen´e `a acheter le mˆeme contenu plusieurs fois. Les frais devraient couvrir les coˆuts d’infrastructure (bande passante et caches), et leur nature exacte pourrait prendre de nombreuses formes, y compris les tarifs forfaitaires et des accords de peering. 
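Le comportement de fenêtre AIMD décrit à la section 3.1.2 peut se résumer par l'esquisse Python suivante ; le facteur de réduction et le taux d'accroissement sont arbitraires et donnés à titre d'exemple :

class FenetreAIMD:
    """Contrôle AIMD de la fenêtre d'Interests en vol (esquisse)."""
    def __init__(self, cwnd=4.0, accroissement=1.0, facteur_reduction=0.5):
        self.cwnd = cwnd
        self.accroissement = accroissement   # croissance linéaire par fenêtre sans perte
        self.facteur = facteur_reduction     # plus il est proche de 1, plus le protocole est agressif

    def peut_emettre(self, interests_en_vol):
        # Le nombre d'Interests en transit ne doit pas dépasser la fenêtre CWND.
        return interests_en_vol < self.cwnd

    def sur_acquittement(self):
        # Croissance additive : environ +accroissement par fenêtre de paquets Data reçus.
        self.cwnd += self.accroissement / self.cwnd

    def sur_rejet(self):
        # Rejet détecté (détection rapide ou timeout) : réduction multiplicative.
        self.cwnd = max(1.0, self.cwnd * self.facteur)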
3.2.2 Interest Discard

L'utilisateur ne paye le fournisseur que s'il reçoit effectivement la donnée. Le fournisseur doit donc assurer un contrôle de congestion afin d'éviter la perte des données transmises à ses clients. Il a intérêt à rejeter les Interests en excès pour éviter de racheter des données qui ne peuvent pas être revendues à cause d'une congestion éventuelle. Afin d'éviter un tel problème, nous proposons un mécanisme complémentaire pour protéger le fournisseur. Supposons le lien AB de la figure 3.1 congestionné ; B va limiter le débit des Interests envoyés vers le fournisseur P2 pour éviter d'acheter des données qui ne seront pas revendues à l'utilisateur final U1, puisqu'elles seront perdues à cause de la congestion. Nous appelons ce mécanisme « Interest Discard ». On considère le lien AB de la figure 3.1. A reçoit les paquets Interest de U1 et renvoie ces paquets vers B. B reçoit dans l'autre sens les paquets de données récupérés du fournisseur P2 et applique le Deficit Round Robin sur les paquets Data envoyés vers A.

Figure 3.1 – Les cartes réseaux des routeurs A et B traversées par des paquets Interest et Data

B calcule en effet un débit d'équité correspondant à l'inverse du temps de cycle moyen du DRR. Le débit des Interests est limité, par des rejets forcés à l'aide d'un seau à jetons, au débit correspondant au débit équitable actuellement réalisé par le DRR (en tenant compte de la taille différente des paquets Interest et Data). Le seau à jetons est donc alimenté au rythme de l'ordonnancement. Notons que le DRR et le seau à jetons seront réalisés dans la même carte réseau, facilitant ainsi leur couplage. Nous présentons ci-dessous le pseudocode de l'Interest Discard ; cet algorithme doit être utilisé au niveau de chaque interface réseau. Il est exécuté à l'arrivée d'un Interest à l'interface et, à chaque cycle Round Robin, tous les compteurs de l'interface sont incrémentés.

Algorithm 1 – À l'arrivée d'un paquet Interest à l'interface réseau
  Récupérer le nom d'objet du paquet : name
  Calculer le hash du nom d'objet : hash
  if file[hash] n'existe pas then
    Créer la file ayant comme ID hash
    Attribuer un compteur count[hash] au flux
    Initialiser le compteur : count[hash] = b
  end if
  if count[hash] = 0 then
    Rejeter l'Interest
  else
    count[hash] = count[hash] − 1
  end if

Nous adoptons le DRR comme algorithme de fair queuing ; il utilise un nombre fixe de files. On note M le nombre maximal de files Round Robin.

Algorithm 2 – À la sortie de l'interface réseau
  for i = 0 to M − 1 do
    if file[i] existe then
      Servir le premier paquet de file[i]
    end if
    if file[i] = ∅ et count[i] = b then
      Supprimer la file physique file[i]
    end if
  end for
  À la fin de chaque cycle Round Robin :
  for j = 0 to M − 1 do
    count[j] = count[j] + 1
  end for
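À titre d'illustration, voici une transcription possible en Python des algorithmes 1 et 2 ci-dessus ; la classe InterfaceCCN, la fonction de hachage utilisée et le plafonnement du compteur à b sont des choix d'illustration :

from collections import deque

class InterfaceCCN:
    """Esquisse de l'Interest Discard couplé à un service Round Robin des paquets Data."""
    def __init__(self, b=10, M=64):
        self.b, self.M = b, M      # b : jetons par flot ; M : nombre maximal de files
        self.files = {}            # hash du nom d'objet -> file de paquets Data
        self.count = {}            # hash -> compteur de jetons

    def arrivee_interest(self, nom_objet):
        """Algorithme 1 : exécuté à l'arrivée d'un paquet Interest."""
        h = hash(nom_objet) % self.M
        if h not in self.files:
            self.files[h] = deque()
            self.count[h] = self.b
        if self.count[h] == 0:
            return "rejet"             # Interest Discard
        self.count[h] -= 1
        return "accepte"

    def arrivee_data(self, nom_objet, paquet):
        """(Hypothétique) mise en file d'un paquet Data du flot correspondant."""
        h = hash(nom_objet) % self.M
        if h in self.files:
            self.files[h].append(paquet)

    def cycle_round_robin(self):
        """Algorithme 2 : un cycle de service des files, puis réalimentation des jetons."""
        for h in list(self.files):
            if self.files[h]:
                self.files[h].popleft()              # servir le premier paquet Data
            elif self.count[h] == self.b:
                del self.files[h]
                del self.count[h]                    # file inactive : suppression
        for h in self.count:
            self.count[h] = min(self.b, self.count[h] + 1)   # seau à jetons réalimenté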
Nos ´etudes dans cette direction sont d´ecrites dans les deux parties suivantes de ce rapport. 4.1 Multicast Sur la figure 3.1 les utilisateurs Ua et Ub demandent le mˆeme objet stock´e au niveau du fournisseur S1. Si les demandes se passent en mˆeme temps, une seule demande sera enregistr´ee au niveau du routeur B et donc le flot correspondant `a l’objet demand´e aura le mˆeme d´ebit des flots unicast dans le lien BS1. Il n’y a donc pas lieu de distinguer ces flots dans l’ordonnanceur DRR. Si les demandes sont d´ecal´ees dans le temps et que l’utilisateur Ua commence le t´el´echargement des paquets avant l’utilisateur Ub, il est possible que le d´ebit accord´e au flot soit divis´e par deux au niveau du lien BS1. Mais puisque le routeur B est dot´e d’une m´emoire cache, il est tr`es probable que l’utilisateur Ub trouve les paquets pr´ec´edemment t´el´echarg´es par Ua au niveau du cache B, et donc seuls les paquets non t´el´echarg´es par U1 seront demand´es et traverseront le lien S1B. Le d´ebit des flots multicast est donc ´egal au d´ebit des flots unicast sur toutes les branches du r´eseau. Il suffit que le temps de t´el´echargement d’un objet soit inf´erieur au temps de stockage de l’objet dans un cache. Encore, il n’y a pas lieu de distinguer ces flots dans l’ordonnanceur DRR. 3132 4.2. MULTISOURCES Il est important alors de maintenir une m´emoire sauvegardant les paquets des flots en cours, ´evitant ainsi la division du d´ebit des flot par deux ou plus au cas o`u plusieurs flots cherchant le mˆeme objet arrivent en mˆeme temps sur une interface r´eseau, et que ces flot soient non synchronis´es. 4.2 Multisources 4.2.1 Protocole de transport Multipath L’utilisation des multipaths n’a pas de sens que si les performances du r´eseau s’am´eliorent, et que cette utilisation ne r´eduit pas le d´ebit des flots et leurs performances par rapport `a une utilisation compl`etement unipath. Il est important de v´erifier que l’utilisation du routage multipath ne diminue de mani`ere sensible la r´egion de stabilit´e. Un r´eseau stable, est un r´eseau o`u les flots sont servis en un temps fini, c’est `a dire qu’aucun flot souffre de congestion `a cause de la charge sur un lien de son chemin. Dans un mod`ele enti`erement unipath, un r´eseau est stable `a condition que chaque chemin r constitu´e d’un ensemble de liens L(r) et travers´e par un ensemble de flots S v´erifie : X k∈S ρk < Cl ∀l ∈ L(r). Le fair queuing, que nous pr´econisons d’utiliser au niveau des routeurs, n’est pas seulement b´en´efique pour prot´eger les flot gourmands, et pour r´ealiser une certaine ´equit´e entre les flots, mais c’est aussi un moyen de maximiser la r´egion de stabilit´e. En effet, les auteurs de [23] ont d´emontr´e que le fair queuing offrait une r´egion de stabilit´e maximale, et donc permettrait une utilisation optimale du r´eseau. Dans un r´egime multipath, les auteurs de [24] ont d´emontr´e qu’un r´eseau est stable sous la condition : X r∈S ρr < X l∈L(s) Cl . 4.2.2 Performance de MPTCP Afin de mieux comprendre l’utilisation du routage multipath dans CCN, nous avons ´etudi´e d’abord l’impact sur la performance des diff´erentes versions de MPTCP envisag´ees actuellement pour le r´eseau IP. Les protocoles Multipaths sont class´es en deux cat´egories : les protocoles non coordonn´es et les protocoles coordonn´es. 
Un protocole de transport non coordonn´e permet de r´ecup´erer un maximum de d´ebit mais cette augmentation de d´ebit p´enalise le d´ebit des autres flots car le flot multipath consomme plus de capacit´e que les autres. L’exemple illustr´e dans la figure 4.1 montre que l’utilisation du TCP non coordonn´e peut r´eduire le d´ebit des flots unicast en offrant plus de d´ebit aux flots multichemins, ce qui est loin de nos objectifs. EnCHAPITRE 4. STRATEGIES D’ACHEMINEMENT ´ 33 effet, le flot f1 partage le lien `a 2 Mb/s avec le flot f2 malgr´e la disponibilit´e d’un autre lien enti`erement d´edi´e pour lui et pouvant servir tout le d´ebit requis. flux2 flux1 c=4 c=2 c=2 Figure 4.1 – Le TCP Multipath non coordonn´e n’assure pas l’´equit´e Le MPTCP coordonn´e r´epond `a la condition de stabilit´e. Sa description est pr´esent´e par l’IETF1 ; la RFC 6356 d´ecrit les fonctionnalit´es d’un protocole de transport MPTCP. Il offre beaucoup d’avantages tels que la fiabilit´e en maintenant la connexion en cas de panne, le load balancing en cas de congestion ou l’utilisation du r´eseau wifi et ADSL en mˆeme temps, par exemple, pour r´ecup´erer le mˆeme contenu. Chaque flot utilisant MPTCP est subdivis´e en plusieurs sous-flots, chacun ´eventuellement suivant un chemin diff´erent. Chaque sous-flot r d´etient sa propre fenˆetre wr [25]. On trouve deux types de protocole MPTCP coordonn´e : le MPTCP semi coupl´e et le MPTCP coupl´e. Dans le cas d’un protocole MPTCP coupl´e, `a la d´et´ection d’un rejet, la fenˆetre cwndr du sous-flot r d´ecroit `a max(cwndr − cwndtotal/2, 1). Dans le cas d’un contrˆole de congestion MPTCP semi coupl´e la fenˆetre d’un sous-flot d´ecroit selon sa propre fenˆetre de congestion uniquement. En utilisant un MPTCP enti`erement coupl´e, les fenˆetres s’annulent `a tour de rˆole rendant le fonctionnement de ce protocole instable [26]. Dans notre proposition du module de gestion du trafic pour CCN, nous avons propos´e l’utilisation du Round Robin, il s’av`ere que le Round Robin invalide le fonctionnement du MPTCP coordonn´e. Les ajustements faits par l’utilisateur ne corrigent pas ce probl`eme. Nous avons observ´e ce ph´enom`ene en simulant avec ns2 un tron¸con de r´eseau pr´esent´e dans la figure 4.2. Un flot MPTCP ´emis par le noeud A est constitu´e de deux sous-flot : TCP0, qui suit le plus long chemin vers le r´ecepteur MPTCP (noeud D) A-B-C-D, et T CP2, qui suit le plus court chemin vers le r´ecepteur MPTCP A-D. Un flot TCP g´en´er´e par le noeud E suit le plus court chemin E-B-C-F vers le r´ecepteur TCP. Nous comparons dans un premier temps les d´ebits des flots T CP0 et T CP2 partageant le lien B-C avec une politique Tail drop dans tous les noeuds du r´eseau. La figure 14.3 confirme l’efficacit´e du MPTCP coordonn´e qui r´ealise l’´equit´e en prenant en compte toute la bande passante offerte au flot, et non pas la bande passante offerte par un lien uniquement. Avec le MPTCP non coordonn´e le flot TCP0 partage 1http ://datatracker.ietf.org/wg/mptcp/charter/34 4.2. MULTISOURCES Emetteur TCP Recepteur TCP Emetteur MPTCP Recepteur MPTCP TCP0 TCP1 TCP0 TCP1 TCP0 1Mb 2Mb Figure 4.2 – un r´eseau pour illustrer les Multipaths (a) MPTCP coordonn´e (b) MPTCP non coordonn´e Figure 4.3 – MPTCP coordonn´e priorise le flot unipath ´equitablement la bande passante avec TCP2, ce qui est ´equitable localement sur le lien, mais ne l’est pas globalement dans le r´eseau, puisque l’´emetteur TCP re¸coit un d´ebit suppl´ementaire du sous-flot TCP1. 
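L'esquisse Python ci-dessous (hypothétique) résume la différence entre les deux règles de décroissance évoquées plus haut : la variante semi-couplée ne tient compte que de la fenêtre du sous-flot, la variante couplée réduit la fenêtre du sous-flot r à max(cwnd_r − cwnd_total/2, 1).

class FenetresMultipath:
    """Esquisse (hypothétique) de la réaction à un rejet pour un flot MPTCP
    décomposé en sous-flots : décroissance semi-couplée ou couplée."""

    def __init__(self, n_sous_flots=2, cwnd_init=10.0):
        self.cwnd = [cwnd_init] * n_sous_flots   # une fenêtre par sous-flot

    def cwnd_total(self):
        return sum(self.cwnd)

    def rejet_semi_couple(self, r):
        # la fenêtre du sous-flot r décroît selon sa propre fenêtre uniquement
        self.cwnd[r] = max(self.cwnd[r] / 2.0, 1.0)

    def rejet_couple(self, r):
        # cwnd_r <- max(cwnd_r - cwnd_total/2, 1) : les fenêtres peuvent
        # s'annuler à tour de rôle, d'où l'instabilité relevée dans [26]
        self.cwnd[r] = max(self.cwnd[r] - self.cwnd_total() / 2.0, 1.0)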
Le protocole MPTCP coordonné corrige ce problème d'équité et offre au flot unipath TCP1 plus de débit que TCP0. Nous recommençons le même scénario mais en appliquant une politique Round Robin au noeud B, et nous observons le débit des flots TCP0 et TCP2. On obtient les résultats présentés dans la figure 4.4 : le MPTCP coordonné avec Round Robin réalise une équité par lien. Les flots TCP0 et TCP2 partagent équitablement la bande passante au niveau du lien B-C, contrairement aux résultats observés avec Tail Drop, où le flot TCP2 gagnait plus de débit.

Figure 4.4 – Le Round Robin annule l'efficacité du MPTCP

L'effet des changements coordonnés des fenêtres est invalidé par le Round Robin. L'effet du partage équitable local par lien du Round Robin est dominant, et le MPTCP coordonné perd son efficacité. Pour pallier ce problème, on propose un nouveau protocole MPRR (Multipath Round Robin) permettant de réaliser l'équité au niveau flot tout en préservant les performances du protocole MPTCP. Le protocole MPRR est décrit par les algorithmes ci-dessous :

Algorithm 3 – À l'arrivée d'un paquet Interest à l'interface réseau I0 et sortant de l'interface I1
  Récupérer le nom d'objet du paquet : name
  Calculer le hash du nom d'objet : hash
  if file[hash] n'existe pas then
    Créer la file ayant comme ID hash
    Attribuer un compteur mark[hash] au flux
    Initialiser le compteur mark[hash] = 0
  end if
  if l'Interest est marqué then
    mark[hash]++
  else
    envoyer le paquet Interest à l'interface I1
    if une autre interface I2 est utilisée par le flux name then
      envoyer un paquet Interest marqué à I2
    end if
  end if

Algorithm 4 – À la sortie de l'interface réseau
  i = 0
  while i < M do
    if file[i] existe then
      if mark[i] == 0 then
        Servir le premier paquet de file[i]
      else
        mark[i]--
      end if
    end if
    if file[i] = ∅ then
      Supprimer la file physique file[i]
    end if
  end while

Au niveau d'un routeur, si deux interfaces I1 et I2 sont utilisées pour envoyer les paquets d'un flot f, à chaque fois qu'un paquet Interest du flot f est envoyé vers une interface, un paquet Interest dupliqué et marqué est envoyé à la deuxième interface. Les interfaces servant un flot multipath sont sauvegardées au niveau de la FIB. Au niveau du cache, à chaque fois qu'on reçoit un paquet Interest marqué, il prend place dans la file d'attente du flot f et le compteur Interest Discard du flot f est décrémenté ; mais ce paquet n'est pas acheminé vers les autres interfaces du réseau, il sert uniquement à mettre en place l'équité entre flots. Chaque flot a une file unique au niveau de chaque interface. La file correspondant au flot f est traitée une seule fois à chaque cycle Round Robin, comme celles de tous les flots arrivant à l'interface.

Afin de vérifier notre protocole, on utilise le simulateur SimPy pour simuler le réseau de la figure 4.5 (liens de capacités 10 Mb/s, 1 Mb/s et 2 Mb/s).

Figure 4.5 – Réseau simulé avec SimPy

En utilisant le MPRR au niveau des interfaces, on obtient les résultats présentés dans la figure 4.6. Le débit du flot TCP0 partageant le lien avec le sous-flot MPTCP TCP1, et le débit du sous-flot MPTCP circulant sur le deuxième lien, TCP2, avec les politiques Tail drop, Round Robin et MPRR, sont représentés dans les figures 4.6 et 4.7.

Figure 4.6 – Débits des flots avec les politiques Tail drop (gauche) et Round Robin (droite)

Le MPTCP offre des débits approximativement équitables en tenant compte de la capacité globale offerte au flot. Le Round Robin assure une équité au niveau de chaque lien, ce qui a pour effet de réaliser une inéquité au sens global, si l'on tient compte de la capacité globale offerte.

Figure 4.7 – Débits des flots avec la politique Multipath Round Robin

Le MPRR corrige le problème provoqué par l'utilisation du Round Robin en réalisant des débits presque équitables compte tenu de toute la capacité offerte aux flots ; les graphes montrent une amélioration par rapport au Tail drop traditionnel.

4.2.3 CCN et routage multipath

Même si le MPTCP s'avère performant en utilisant Tail drop, ou en utilisant le MPRR, on ne peut pas faire entièrement confiance à tous les utilisateurs de l'Internet. L'utilisateur détient toujours la possibilité de modifier son protocole de transport, ou d'utiliser le MPTCP non coordonné, par exemple. Rien n'oblige l'utilisateur à participer à la gestion de congestion. On note aussi la difficulté d'implémenter un protocole MPTCP dans CCN, du fait que l'utilisateur ne peut choisir la destination de chaque paquet. On pourrait envisager une méthode statistique se basant sur l'historique des chemins traversés et leurs RTT. Si on ne peut choisir le chemin qui sera traversé à l'émission, il reste possible de connaître le chemin qui a été suivi par un paquet à la réception, en stockant les noeuds traversés dans chaque paquet, par exemple [27]. Malheureusement cette méthode reste compliquée et demande beaucoup de modifications au niveau de chaque paquet. De plus, la coordination des routeurs est nécessaire pour réaliser l'équité globale dans le réseau. Nous pensons que les multipaths dans CCN ne devraient pas être gérés par les utilisateurs, ou du moins que les utilisateurs ne sont pas responsables de la gestion du trafic dans le réseau. Nous ne pouvons qu'émettre des conseils permettant à l'utilisateur d'utiliser au mieux la bande passante et d'éviter les rejets d'Interests successifs, rendant leur protocole de transport compliqué à gérer. C'est le réseau qui devrait distribuer les paquets équitablement sur les liens disponibles.

4.2.4 Performances des multipaths

Nous pensons que les flots multipaths devraient être gérés par le réseau lui-même. Pour évaluer les performances des multipaths, nous proposons d'étudier deux segments de réseaux présentés dans la figure 4.8.

Figure 4.8 – Deux réseaux pour illustrer les multipaths : a) multiple single-hop sources, b) short paths and long paths

Dans la figure 4.8a, les flots arrivant au noeud N1 peuvent récupérer les données du routeur S1 localisé dans le noeud N1 ; sinon, dans le cas où la donnée est introuvable, la récupérer d'une des trois sources S2, S3 ou S4.
Dans la figure 4.8b, deux routeurs sont choisi aux hasard pour r´ecup´erer les donn´ees. Nous comparons les 3 cas suivants : – un seul chemin qui correspond au chemin le plus court en nombre de sauts est utilis´e par les flots, – deux chemins sont utilis´es conjointement, – on utilise un contrˆole de charge s´electif qui consiste `a refuser l’acc`es au lien `a tout flot multipath si le lien est un chemin secondaire, et si la charge du lien d´epasse un certain seuil. On utilise des simulations Monte-Carlo pour ´evaluer le d´ebit moyen des flots en fonction de la charge, et comparer ainsi les performances des strat´egies d’acheminement. La figure repr´esente la bande passante en fonction de la charge. Pour le premier (a) a (b) b Figure 4.9 – MPTCP coordonn´e priorise les flot unipath r´eseau 4.9(a) la charge maximale `a partir de laquelle le r´eseau est instable est de 5,29 exprim´ee en unit´es du d´ebit d’un lien. Ceci peut ˆetre pr´edit par calcul. Dans ce cas, les chemins ont le mˆeme nombre de sauts. D´es qu’un chemin est surcharg´e on ne peut plus l’utiliser. L’utilisation du r´eseau est maximal avec le DRR et le contrˆole de charge. Les d´ebits du deuxi`eme r´eseau montrent que l’utilisation des chemins multiples sans aucune strat´egie m`ene `a une perte en capacit´e (la r´egion de stabilit´e est r´eduite). Dans ce cas le d´ebit offert est maximale au d´ebut. L’utilisation des chemins unicast offre une meilleur capacit´e en trafic, mais les d´ebits offerts au d´ebut sont plus faibles. Afin d’offrir des d´ebits plus importants `a faible charge, tout en offrant une meilleure capacit´e en trafic, nous proposons d’appliquer le contrˆole de charge s´electif.40 4.2. MULTISOURCES D`es qu’un seuil de d´ebit est atteint dans un lien, il faut refuser l’acc`es aux flots multipaths si le lien appartient `a un chemin secondaire. Pour distinguer les chemins secondaires et principaux pour un flot on peut marquer les paquets qui traversent un chemin secondaire. On observe effectivement que les performances obtenues avec un contrˆole de charge s´electif sont mieux que celles obtenus avec une utilisation exclusive des chemins les plus courts, et ´evidement mieux que l’utilisation al´eatoire des chemins multipath. En r´ealit´e les liens au coeur du r´eseau ont des d´ebits tr`es ´elev´es d´epassant le d´ebit d’acc`es des utilisateurs. L’utilisation des chemins multiples n’est pas forc´ement b´en´efique, les performances des caches localis´es au niveau des routeurs peuvent s´erieusement d´ecroitre. Pour illustrer ce ph´enom`ene, on consid`ere un r´eseau simple repr´esent´e dans la figure 4.10. A C B Figure 4.10 – tron¸con de r´eseau pour d´emontrer l’impact des multipath sur le hit global Des flots arrivent au noeud A selon un processus de Poisson. Si l’objet se trouve dans le noeud A alors le cache A r´epond `a la requˆete. Si l’objet demand´e se trouve dans les deux caches B et C alors un des deux caches est choisi au hasard et l’objet est r´ecup´er´e du cache choisi. Si un des caches contient l’objet et que l’autre ne le contient pas, le chemin le plus long menant au cache d´etenteur d’objet est s´electionne avec une probabilit´e P. Si aucun des caches B ou C ne contient l’objet alors il est dirig´e vers le serveur d’origine `a travers le routeur B ou C (choisi au hasard). La popularit´e des requˆetes suit une loi Zipf(0.8). On trace la probabilit´e de hit global de cette micro architecture en fonction de la probabilit´e de choix du chemin le plus long pour diff´erentes valeurs de la taille des caches. 
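Le modèle de la figure 4.10 peut être exploré par une simulation Monte-Carlo du type esquissé ci-dessous en Python. Il s'agit d'une lecture possible du modèle, sous des hypothèses à confirmer : B est sur le chemin court, C sur le chemin le plus long (choisi avec probabilité P lorsque lui seul détient l'objet), les caches sont LRU et de même taille, les objets sont copiés dans les caches traversés au retour, et la popularité suit Zipf(0.8).

import random
from collections import OrderedDict

class CacheLru:
    """Cache LRU minimal (capacité en nombre d'objets)."""
    def __init__(self, capacite):
        self.capacite = capacite
        self.contenu = OrderedDict()

    def contient(self, obj):
        return obj in self.contenu

    def toucher(self, obj):
        # insertion ou rafraîchissement, avec éviction du moins récemment demandé
        self.contenu[obj] = True
        self.contenu.move_to_end(obj)
        if len(self.contenu) > self.capacite:
            self.contenu.popitem(last=False)

def hit_global(p_long, M=10**4, c=0.3, alpha=0.8, n_req=200_000, graine=1):
    """Hit global de la micro-architecture A-(B, C) en fonction de la probabilité P
    de choisir le plus long chemin (lecture hypothétique du modèle)."""
    random.seed(graine)
    taille = int(c * M)
    A, B, C = CacheLru(taille), CacheLru(taille), CacheLru(taille)
    poids = [1.0 / n ** alpha for n in range(1, M + 1)]     # popularité Zipf(alpha)
    hits = 0
    for obj in random.choices(range(M), weights=poids, k=n_req):
        if A.contient(obj):
            hit = True
        else:
            dans_b, dans_c = B.contient(obj), C.contient(obj)
            if dans_b and dans_c:
                hit = True
                (B if random.random() < 0.5 else C).toucher(obj)   # un des deux au hasard
            elif dans_c:
                if random.random() < p_long:
                    hit = True
                    C.toucher(obj)          # plus long chemin choisi avec probabilité P
                else:
                    hit = False
                    B.toucher(obj)          # chemin court : miss en B, objet ramené du serveur
            elif dans_b:
                hit = True
                B.toucher(obj)
            else:
                hit = False                  # miss partout : serveur d'origine via B ou C
                (B if random.random() < 0.5 else C).toucher(obj)
        A.toucher(obj)                       # copie (ou rafraîchissement) dans le cache A au retour
        hits += hit
    return hits / n_req

if __name__ == "__main__":
    for p in (0.0, 0.5, 1.0):
        print("P =", p, "hit global ~", round(hit_global(p), 3))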
On choisit une taille de catalogue de 104 objets. Conform´ement `a la proposition CCN les caches ont la mˆeme taille. Cette exemple illustre un contre exemple des b´en´efices tir´es par les multipaths. Mˆeme si les multipaths paraissent comme une solution int´eressante pour augmenter le trafic v´ehicul´e sur Internet, son utilisation dans les r´eseaux CCN tel que pr´esent´e par Van Jacobson ne parait pas forc´ement b´en´efique. Ce constat a aussi ´et´e d´emontr´e par Rossini et al. [9]. Il est clair que le mieux pour les r´eseaux CCN est de n’utiliser les chemins les plus longs que pour les cas extrˆemes ou un flot ne peut ˆetre servi par son chemin le plus court `a cause d’une charge maximale dans un lien du chemin, et que le chemin le plus long ne contient aucun lien proche de la surcharge. Nous proposons de maintenir le choix des chemins les plus courts comme choix principal. Si un flot est rejet´e de son chemin le plus court `a cause de la surcharge d’un lien appartenant `a son chemin on peut dans ce cas seulement envisager d’emprunter un chemin secondaire, `a condition que ce chemin n’atteint pas un certain seuil deCHAPITRE 4. STRATEGIES D’ACHEMINEMENT ´ 41 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 hitg P T=0.1 T=0.3 T=0.5 Figure 4.11 – Taux de hit en fonction de la probabilit´e du choix du plus long chemin charge. A B C D chemin congestionn Dtection de congestion Figure 4.12 – Chemin plus longs choisis en cas de congestion uniquement La figure 4.12 montre un exemple de l’utilisation des multipaths dans CCN. Quand un objet se trouve dans le cache D, les Interests suivent le plus court chemin A-B-D. Mais quand le cache D re¸coit l’Interest il ne peut le servir `a travers le chemin inverse car une congestion est d´etect´e. Le flot est alors redirig´e vers le noeud d’avant B qui propose un autre chemin vers le cache D. Le chemin ´etant non congestionn´e le flot emprunte le chemin A-B-C-D. La FIB du noeud B est mise `a jour afin de v´ehiculer tous les paquets Interest du flot vers l’interface menant au noeud C et non pas au noeud D. D’autre part, un routage orient´e contenu est lourd `a mettre en place, une mise `a jour des FIB `a chaque fois qu’un objet est supprim´e d’un cache peut avoir un impact sur plusieurs tables, surtout si les tables enregistrent les paquets et non les objets.Chapitre 5 Simulations et experimentations ´ Nous avons test´e, par simulation et exp´erimentation certains des m´ecanismes que nous avons propos´es. 5.1 Simulations On consid`ere un lien `a 10 Mb/s partag´e entre un flot Poisson `a 5 Mb/s et un ensemble de flots permanents. Les flots sont ordonnanc´es en utilisant le DRR. Le flot de Poisson repr´esente de mani`ere simplifi´ee un ensemble de flots de d´ebit beaucoup plus faible que le d´ebit ´equitable. Les paquets de ce flot sont suppos´es appartenir `a des flots diff´erents. Les flots permanents simul´es sont : – AIMD (7/8) : l’utilisateur final impl´emente un contrˆole de congestion AIMD avec facteur de r´eduction de la fenˆetre CWND β = 7/8 et, pour ce flot, RTT = 10 ms. – AIMD (1/2) : l’utilisateur final impl´emente un contrˆole de congestion AIMD avec facteur de r´eduction de la fenˆetre CWND β = 1/2 et RTT = 200 ms. – CWIN (5) : L’utilisateur final maintient une fenˆetre fixe de 5 packets et RTT = 200 ms. – CWIN (100) : L’utilisateur final maintient une fenˆetre fixe de 100 packets et RTT = 200 ms. – CBR : L’utilisateur final envoie des paquets Interest `a taux fixe correspondant `a un d´ebit constant de paquets data de 4 Mb/s. 42CHAPITRE 5. 
Les résultats sont résumés dans les tableaux 5.1 et 5.2. On distingue les cas d'une détection rapide par l'utilisateur, nommée « Rapid », et une détection par timeout fixé à 1 s. On distingue aussi les deux cas avec ou sans « Interest discard ».

Table 5.1 – Débits en Mb/s

  Flot         | sans discard       | Interest discard
               | Rapid    TO (1s)   | Rapid    TO (1s)
  AIMD (7/8)   | 1.20     1.24      | 1.23     1.31
  AIMD (1/2)   | 1.19     1.10      | 1.12     0.84
  CWIN (5)     | 0.19     0.19      | 0.19     0.19
  CWIN (100)   | 1.20     1.24      | 1.23     1.32
  CBR          | 1.20     1.24      | 1.23     1.32

Table 5.2 – Taux de rejets et de discard (perte/discard)

  Flot         | sans discard       | Interest discard
               | Rapid    TO (1s)   | Rapid    TO (1s)
  AIMD (7/8)   | .006/0   .01/0     | 0/.01    0/.01
  AIMD (1/2)   | .002/0   .003/0    | 0/.001   0/.003
  CWIN (5)     | .006/0   0/0       | 0/0      0/0
  CWIN (100)   | .30/0    .18/0     | 0/.65    0/.22
  CBR          | .76/0    .75/0     | 0/.75    0/.74

Les flots agressifs CWIN (100) et CBR ont des débits à peu près égaux à celui du flot AIMD (7/8), mais leur taux de rejets de Data est très important (30 % pour CWIN (100) et 76 % pour CBR) ; ces rejets de Data sont convertis en rejets d'Interest en utilisant le mécanisme Interest Discard. À partir de ces résultats, nous recommandons que les utilisateurs choisissent un protocole de transport AIMD agressif, avec donc un facteur de réduction proche de 1.

5.2 Expérimentations

5.2.1 Fair sharing

Nous avons utilisé comme algorithme d'ordonnancement le DRR [22]. Cet algorithme utilise des files virtuelles, chacune correspondant à un identifiant de flot. À la réception d'un paquet, un hash est calculé à partir du nom d'objet, et le paquet est placé dans la file correspondant à cet identifiant. Les files sont implémentées comme une simple liste chaînée appelée ActiveList. Lorsque la file globale atteint une taille maximale, le dernier paquet de la file de flot la plus longue est supprimé.

Figure 5.1 – Testbed et Interest Discard

5.2.2 Interest discard

On implémente un compteur pour chaque flot dans l'ActiveList du DRR. Tous les compteurs sont incrémentés d'un quantum à chaque fois que l'ordonnanceur DRR complète un cycle (parcourt toutes les files), jusqu'à une valeur maximale b. À chaque fois qu'un Interest correspondant à un flot arrive sur la carte réseau, le compteur du flot est décrémenté d'un quantum. Si un Interest arrive et que le compteur du flot correspondant est à zéro, il faut supprimer l'Interest.

5.2.3 Scénarios et résultats

Nous avons implémenté un ordonnancement DRR [22] ainsi que l'Interest discard dans un réseau basique de deux noeuds. Un lien full duplex interconnecte deux machines Linux. Une machine joue le rôle du serveur et stocke des fichiers de données, l'autre machine est cliente et cherche ces données. L'ordonnancement est implémenté dans l'espace noyau en utilisant une version modifiée du module sch_sfq développé par L. Muscariello et P. Viotti [28]. Cette nouvelle implémentation permet l'identification des flots par les noms d'objets. L'Interest discard est implémenté dans le noyau ; les modifications suivantes ont été apportées :
– Création d'un compteur par flot.
– Incrémentation de tous les compteurs à chaque cycle DRR.
– Décrémentation d'un compteur à chaque fois qu'un Interest est envoyé.
– Rejet d'un Interest si le compteur est nul.
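Le couplage décrit en 5.2.1 et 5.2.2 peut se résumer par l'esquisse Python suivante (hypothétique et très simplifiée par rapport au module noyau sch_sfq modifié) : une file virtuelle par flot chaînée dans une ActiveList, un quantum de service par passage, suppression du dernier paquet de la file la plus longue quand le tampon global est plein, et compteur Interest Discard par flot réalimenté à chaque cycle DRR jusqu'au plafond b.

import hashlib
from collections import deque, OrderedDict

class OrdonnanceurDrr:
    """Esquisse (hypothétique) du DRR à files virtuelles couplé à l'Interest Discard."""

    def __init__(self, quantum=1500, tampon_max=100, b=10):
        self.quantum = quantum
        self.tampon_max = tampon_max
        self.b = b
        self.files = OrderedDict()   # ActiveList : identifiant de flot -> file de paquets Data
        self.deficit = {}
        self.credit = {}             # compteur Interest Discard par flot
        self.backlog = 0

    def _flot(self, nom_objet):
        # le flot est identifié par le hash du nom d'objet
        return hashlib.sha1(nom_objet.encode()).hexdigest()[:8]

    def interest_accepte(self, nom_objet):
        """Côté Interest : rejet si le crédit du flot est épuisé (Interest Discard)."""
        f = self._flot(nom_objet)
        self.credit.setdefault(f, self.b)
        if self.credit[f] == 0:
            return False
        self.credit[f] -= 1
        return True

    def enfiler_data(self, nom_objet, paquet):
        """Côté Data : mise en file du paquet dans la file virtuelle de son flot."""
        f = self._flot(nom_objet)
        self.files.setdefault(f, deque())
        self.deficit.setdefault(f, 0)
        self.files[f].append(paquet)
        self.backlog += 1
        if self.backlog > self.tampon_max:
            # tampon global plein : suppression du dernier paquet de la file la plus longue
            plus_longue = max(self.files, key=lambda x: len(self.files[x]))
            self.files[plus_longue].pop()
            self.backlog -= 1

    def cycle_drr(self):
        """Un cycle Round Robin : sert au plus un paquet par flot (version simplifiée),
        puis réalimente les compteurs Interest Discard jusqu'au plafond b."""
        servis = []
        for f in list(self.files):
            self.deficit[f] += self.quantum
            if self.files[f] and self.deficit[f] >= len(self.files[f][0]):  # taille du paquet (octets)
                self.deficit[f] -= len(self.files[f][0])
                servis.append(self.files[f].popleft())
                self.backlog -= 1
        for f in self.credit:
            self.credit[f] = min(self.credit[f] + 1, self.b)
        return servis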
Au niveau du serveur, nous appliquons un shaping `a 10 Mb/s, lan¸cons le d´emon ccnd, et chargeons des fichiers dans le r´ef´erentiel. Au niveau de la machine cliente, nous lan¸cons le d´emon ccnd, et r´ecup´erons les objets stock´es dans le serveur en utilisantCHAPITRE 5. SIMULATIONS ET EXPERIMENTATIONS ´ 45 deux applications : l’application ccncatchunks2 impl´ement´ee par PARC, et l’application cbr que nous avons d´evelopp´ee pour envoyer les paquets Interest `a un d´ebit constant. La figure 5.2 repr´esente les d´ebits instantan´es dans le cas d’un ordonnancement FIFO, et dans le cas d’un ordonnancement DRR. 0 5 10 0 20 40 60 80 100 rate (Mb/s) FIFO 0 5 10 0 20 40 60 80 100 rate (Mb/s) DRR Figure 5.2 – D´ebits des flot : cbr et ccncatchunks2 Le flot ccncatchunks2 arrive `a avoir son d´ebit maximal en utilisant le Deficit Round Robin, contrairement `a l’ordonnancement par d´efaut tail drop. Les r´esultats des exp´erimentations confirment les simulations. L’Interest Discard permet de convertir les rejets data en rejets Interest, ce qui aide `a conserver la bande passante et `a prot´eger l’op´erateur. sans filtre b = 10 b = 100 perte .42 .002 .005 discard 0 .45 .46 Le tableau ci-dessus montre que l’Interest discard est un m´ecanisme efficace pour ´eviter les rejets data et donc ´eviter un gaspillage de la bande passante.Chapitre 6 Conclusion Dans cette partie, nous avons propos´e un ensemble de m´ecanismes de gestion du trafic pour la proposition CCN. Cet ensemble comprend quatre volets essentiels : – La gestion du partage de bande passante. Grˆace `a l’identification des flots par les noms d’objets, il est d´esormais possible de d´efinir un flot sous CCN. Nous soulignions la n´ecessit´e de s´eparer files d’attente et caches parce qu’ils n’ont pas les mˆemes exigences en termes de taille et de temps de r´eponse. La file d’attente devrait ˆetre de taille petite avec un temps de r´eponse rapide. Par contre les caches sont plus grands mais exigent un temps de r´eponse moins rapide et utilisent typiquement une politique LRU (remplacement de l’objet le moins r´ecemment demand´e). Le partage de bande passante est assur´e au moyen de l’ordonnancement fair queuing (DRR de pr´ef´erence) au niveau flot. – Des m´ecanismes pour utilisateurs. Nous conseillons l’utilisation d’un protocole AIMD adaptatif, pas pour assurer l’´equit´e, qui est r´ealis´ee directement par DRR, mais afin de limiter les pertes et le d´eclenchement de r´e´emissions multiples lourdes `a g´erer. L’utilisateur ne gagne rien en ´etant agressif car le r´eseau partage ´equitablement la bande passante. La d´etection rapide des rejets assure l’efficacit´e du protocole de transport. Nous proposons donc une d´etection rapide de rejets au niveau des routeurs. Si un paquet Interest ne peut ˆetre servi ou si un paquet Data devrait ˆetre rejet´e, un paquet data sans payload est envoy´e vers l’usager. Nous utilisons cette m´ethode car, dans CCN, l’existence de chemins multiples entraine des probl`emes de s´equencement des paquets Data rendant impossible la d´etection rapide de perte en contrˆolant les num´eros de s´equence. L’utilisation de sources multiples engendre en plus des variations importantes du RTT rendant difficile le r´eglage du seuil de Timeout. – Des m´ecanismes pour op´erateurs. Nous proposons un nouveau mod`ele de facturation o`u un usager ou un op´erateur “ach`ete” des paquets de Data en ´emettant 46CHAPITRE 6. CONCLUSION 47 les paquets Interest correspondants. 
Ce m´ecanisme incite les op´erateurs `a investir dans des ressources r´eseaux afin de pouvoir “vendre” davantage de trafic. Les op´erateurs sont ´egalement motiv´es `a utiliser les caches afin d’´eviter le rachat multiple fois d’un objet populaire. Nous proposons aussi un m´ecanisme d’Interest discard qui limite les rejets Data et permet `a l’op´erateur d’´eviter de demander des paquets Data qui ne peuvent pas ˆetre revendus en aval. – Des strat´egies d’acheminement. Le multicast sous CCN est compatible avec le fair queuing que nous sugg´erons d’utiliser au niveau des routeurs. CCN utilise le multicast comme une partie de la proposition de sorte que deux flux synchronis´es demandant le mˆeme objet ne peuvent le t´el´echarger parall`element sur un lien. Une seule demande est envoy´ee pour les deux flux, ce qui ´evite la division du d´ebit des flux due au fair queuing. Si les demandes ne sont pas synchronis´ees l’utilisation des caches permet de maintenir le d´ebit de t´el´echargement grˆace au stockage temporaire des paquets en cours de t´el´echargement. Par contre l’utilisation du fair queuing peut poser un s´erieux probl`eme en ce qui concerne les multipaths. Le fair queuing annule le comportement du flux multipath coordonn´e et le transforme en un flux multipath non coordonn´e. Une ´equit´e locale par lien est r´ealis´ee mais l’´equit´e globale ne l’est pas car le flux multipath re¸coit plus de d´ebit qu’un flux unipath. Nous corrigeons ce probl`eme par la conception d’un protocole MPRR (multipath Round Robin). Un protocole de type MPTCP est difficile `a r´ealiser sous CCN puisque l’utilisateur n’a aucune visibilit´e sur les chemins utilis´es. Nous proposons donc une gestion par flot plutˆot qu’une gestion par paquet. Il suffit d’observer la charge des liens et de n’accepter aucun nouveau flot sur un chemin long que si la charge des liens est assez faible. Nous avons ´egalement observ´e que l’utilisation de multipaths nuit `a l’efficacit´e des caches dans certains cas. Suite `a nos observations li´es `a la d´egradation du taux de hit global due `a l’utilisation des multipaths, une ´etude des performances des caches est n´ecessaire, car la gestion du trafic en d´epend. Cette ´etude est l’objet de la prochaine partie.Deuxi`eme partie Performances des caches 48Chapitre 7 Introduction 7.1 Probl´ematique Dans ce chapitre, nous traitons le probl`eme de la performance des caches dans les r´eseaux orient´es contenus. Compte tenu des modifications majeures `a apporter aux r´eseaux dans le cas o`u une mise en oeuvre de CCN est envisag´ee (mise `a jour des caches, protocoles, m´emoires distribu´ees), il est important de mesurer le gain apport´e par cette architecture. Il est primordial de mesurer la quantit´e et la mani`ere dont arrivent les objets. Les conclusions que nous pouvons tirer d’une ´etude de performances d´epend de la popularit´e des objets arrivant aux caches, et de la taille des catalogues. Nous souhaitons apporter des conclusions pratiques en utilisant des donn´ees r´eelles. On note que la diffusion de contenus repr´esente 96% du trafic Internet. Ce contenu est constitu´e d’un mix de donn´ees. Nous avons alors mis en place une ´evaluation d’une hi´erarchie de caches `a deux niveaux en utilisant un mix de flux refl´etant un ´echange r´eel de donn´ees sur Internet. 7.2 Etat de l’art 7.3 Contributions Dans cette partie, on ´evalue le taux de hit, pour une hi´erarchie de caches `a deux niveaux, avec un mix de flux r´eel. 
Ceci en utilisant un mod`ele simple permettant d’effectuer des calculs rapides pour des tailles importantes de cache. Ce mod`ele, pr´ec´edemment propos´e dans la litt´erature, a ´et´e test´e, v´erifi´e, et d´emontr´e math´ematiquement. Des simulations ont ´et´e effectu´ees pour confirmer son exactitude. Nous avons effectu´e 4950 7.3. CONTRIBUTIONS les calculs en utilisant un mix de flux refl´etant le partage actuel du trafic sur Internet. Nous proposons un stockage des contenus VoD au niveau des routeurs d’acc`es, vu leur volume faible par rapport aux autres types de donn´ees. Les autres types devraient ˆetre stock´es dans un cache tr`es volumineux, probablement constituant un deuxi`eme niveau de caches.Chapitre 8 Mesure du trafic et performances des caches Pour estimer les taux de hit d’une architecture `a deux niveaux, il est primordial de mesurer les caract´eristiques du trafic, car les taux de hit d´ependent fortement de la nature du trafic et de son volume. 8.1 Mesure du trafic Nous pr´esentons les caract´eristiques du trafic Internet, et nous discutons des param`etres les plus importants pour nos ´evaluations. 8.1.1 Types de contenu Le “Cisco Visual Networking Index” publi´e en 2011 [29] classifie le trafic Internet et la demande globale pr´evue pour la p´eriode 2010-2015. 96% du trafic repr´esente le transfert de contenus susceptibles d’ˆetre stock´es dans les m´emoires cache. On peut les classifier en quatre cat´egories : – Donn´ees web : Ce sont les pages web visit´ees par les internautes. – Fichiers partag´es : G´en´eralement g´er´es par des protocoles pair `a pair, cr´eant une communaut´e d’entraide : Un utilisateur (leecher) peut t´el´echarger un fichier stock´e dans une des machines des autres utilisateurs (seeders). D`es que son t´el´echargement est termin´e, le leecher devient `a son tour seeder. Les r´eseaux pair `a pair rencontrent de plus en plus de probl`emes `a cause de la violation des droits 5152 8.1. MESURE DU TRAFIC d’auteur par leurs utilisateurs. Ces derniers peuvent mettre en t´el´echargement du contenu ill´egal. R´ecemment, `a titre d’exemple, le site Demonoid n’est plus disponible, probablement `a cause de la violation des droits d’auteur. – Contenu g´en´er´e par les utilisateurs (UGC) : C’est un ensemble de contenus g´en´er´es par les utilisateurs, ou directement mis `a disposition par ces derniers. La communaut´e utilisant ce partage utilise des logiciels libres, des contenus avec des licences de droit d’auteur flexibles, permettant des ´echanges simples entre des utilisateurs, mˆeme ´eloign´es g´eographiquement. A la diff´erence des r´eseaux pair `a pair, les donn´ees sont sauvegard´ees sur les serveurs priv´ees du fournisseur de contenu. Il d´etient alors la possibilit´e de v´erifier les contenus charg´es par les utilisateurs avant leur publication. – Vid´eo `a la demande (VoD) : C’est une technique de diffusion de donn´ees permettant `a des utilisateurs de commander des films ou ´emissions. La t´el´evision sur IP est le support le plus utilis´e. Le service VoD est propos´e g´en´eralement par des fournisseurs d’acc`es Internet, et il est dans la plupart des cas payant. Le contenu propos´e est lou´e pour une p´eriode donn´ee, assurant ainsi le respect des droits num´eriques. Les proportions du trafic sont indiqu´es dans le tableau 8.1. 
Table 8.1 – Les caractéristiques des contenus du trafic Internet

                 | Fraction du trafic (pi) | Taille de la      | Taille moyenne
                 | 2011        2015        | population (Ni)   | des objets (θi)
  Web            | .18         .16         | 10^11             | 10 KB
  File sharing   | .36         .24         | 10^5              | 10 GB
  UGC            | .23         .23         | 10^8              | 10 MB
  VoD            | .23         .37         | 10^4              | 100 MB

8.1.2 La taille des contenus et des objets

– Web : La société Netcraft publie chaque mois le nombre de sites, estimé grâce à un sondage fait auprès de sociétés d'hébergement et d'enregistrement des noms de domaine. Elle estime le nombre de sites actifs à 861 379 152 ; en considérant une moyenne de 273 pages par site, nous comptons plus de 2 × 10^11 pages web. Pour notre étude, on suppose que le nombre de pages web est de 10^11 et que leur taille moyenne est de 10 KB [30].
– Fichiers partagés : On estime le nombre de fichiers partagés, grâce aux statistiques relevées sur le site Demonoid, à 400 000 fichiers de taille moyenne 7.4 GB. Nous arrondissons ces chiffres dans le tableau 8.1.
(Sources : http://news.netcraft.com/archives/category/web-server-survey/ ; http://www.boutell.com/newfaq/misc/sizeofweb.html ; www.demonoid.me/)
– UGC : Les contenus UGC sont dominés par Youtube. Une étude récente, faite par Zhou et al. [31], estime le nombre de vidéos Youtube à 5 × 10^8, de taille moyenne 10 MB. Actuellement, avec une simple recherche du mot clef « a » sur Youtube, nous comptons plus de 10^9 vidéos.
– VoD : Les vidéos à la demande sont estimées à quelques milliers et sont de taille moyenne 100 MB. Ce sont sans doute des sous-estimations, avec l'essor récent de certaines applications VoD, mais elles sont suffisamment précises pour les évaluations présentées dans la suite.

8.1.3 Distribution de la popularité

La distribution de la popularité est un des éléments essentiels du calcul des performances d'un cache.
– Web : La popularité des pages web suit généralement la loi de Zipf : le taux de demandes q(n) pour le n-ième objet le plus populaire est proportionnel à 1/n^α. Selon [32] et [30], le paramètre α varie entre 0.64 et 0.83.
– Fichiers partagés : Il est possible de calculer la popularité des torrents en utilisant les statistiques extraites du site Demonoid. En entrant un mot clef, on peut classer les torrents de manière décroissante suivant le nombre de téléchargements en cours (mais le site ne permet l'affichage que des 10 000 premiers et des 10 000 derniers torrents). La loi de popularité correspond à peu près à une loi de Zipf de paramètre α égal à 0.82. On estime que la popularité du site PirateBay suit une loi de Zipf de paramètre 0.75. On trace la popularité des vidéos partagées pour deux sites, « PirateBay » et « torrentreactor ». Après une recherche par mot clef, les sites affichent les vidéos et le nombre de leechers correspondants. En choisissant comme mot clef la seule lettre « a », et après un tri décroissant du nombre de leechers, nous traçons les popularités présentées dans les figures 8.1(a) et 8.1(b). Pour le site torrentreactor, la popularité suit la loi Zipf(0.75) pour les premiers rangs, puis la courbe s'incline et suit une loi Zipf(1.2) pour la queue de la loi. La même observation concerne le site PirateBay.
– UGC : Les flux UGC suivent une loi de Zipf avec α estimé à 0.56 [11] ou à 0.8 [13]. Des travaux récents de Carlinet et al. [33] suggèrent plutôt une loi Zipf(0.88).
– VoD : L'étude de Carlinet et al. évalue également les VoD.
La loi de popularit´e n’est pas de Zipf, mais une combinaison de deux lois de Zipf. La premi`ere est de param`etre 0.5 pour les 100 objets les plus populaires, la deuxi`eme est de param`etre 1.2 pour les objets suivants. Des statistiques ´etudi´ees par Yu et al. [34] pour un service VoD en Chine sugg`erent une loi de Zipf avec α variant ente 0.65 et 1. 4http ://www.torrentreactor.net54 8.2. LE TAUX DE HIT D’UN CACHE LRU 1 10 100 1000 10000 100000 1 10 100 1000 10000 nombre de leechers Zipf(0.75) (a) torrentreactor 1 10 100 1000 10000 1 10 100 1000 rang nombre de leechers Pirate Bay Zipf(0.7) (b) Pirate Bay Figure 8.1 – La popularit´e des vid´eos partag´ees sur torrentreactor et Pirate Bay 8.2 Le taux de hit d’un cache LRU 8.2.1 Independent Reference Model Afin d’utiliser les mod`eles math´ematiques, on consid`ere g´en´eralement un ensemble d’objets ayant des popularit´es fixes, ainsi qu’un catalogue d’objets fixe. C’est le mod`ele dit “independance reference model” ou IRM. En r´ealit´e, les objets changent de popularit´e et les catalogues ne cessent d’augmenter. Une prise en compte d’une telle complexit´e ne peut ˆetre r´esolue par mod`ele math´ematique, et est tr`es complexe `a simuler. Cependant, la variance des popularit´es est n´egligeable par rapport au temps de remplissage d’un cache. On peut consid´erer que les mod`eles sont applicables sur un intervalle de temps o`u les popularit´es et les catalogues seront approximativement fixes. Afin d’appliquer ce mod`ele math´ematique, il faut aussi que les requˆetes soient ind´ependantes. Ceci est vrai si des demandes arrivent d’un grand nombre d’utilisateurs agissant de fa¸con ind´ependante. Ces conditions s’appliquent pour un premier niveau de cache. Mais pour les niveaux sup´erieurs, la corr´elation des demandes invalide le mod`ele IRM. Cependant, selon Jenekovic et Kang [35], la corr´elation des demandes qui d´ebordent du premier niveau d’un simple r´eseau de caches `a deux niveaux, a un faible effet sur la probabilit´e de hit observ´ee. Nous avons d’ailleurs v´erifi´e par simulation l’effet de la corr´elation des demandes pour un r´eseau de caches en arbre. Pour conclure, les mod`eles math´ematiques bas´es sur l’IRM peuvent ˆetre appliqu´es pour les r´eseaux CCN, car la popularit´e des objets varient d’une mani`ere faible par rapport `a la vitesse de remplissage des caches et l’ind´ependance est respect´ee.CHAPITRE 8. MESURE DU TRAFIC ET PERFORMANCES DES CACHES 55 8.2.2 Les mod`eles analytiques Une politique de remplacement LFU (least frequently used) reste la politique id´eale ; mais, il est impossible de mettre en place cet id´eal car la popularit´e des objets est en g´en´eral inconnue. Van Jacobson propose un ordonnancement LRU (least recently used) dans tous les caches mˆeme si plusieurs travaux remettent en question les performances r´eseau avec une utilisation exclusive de LRU. Les ´etudes de performance des r´eseaux CCN n´ecessitent l’utilisation de mod`eles math´ematiques afin de confirmer et de g´en´eraliser les observations tir´ees des simulations et exp´erimentations. Notre objectif, qui est d’´evaluer la performance pour les tr`es grandes populations (jusqu’`a 1011 pages web, par exemple) et leur m´elange, n’est pas envisageable par simulation. Les mod`eles exacts sont tr`es complexes, mˆeme dans le cas basique d’un seul cache. La complexit´e de ces mod`eles croit d’une fa¸con exponentielle avec la taille des caches et le nombre d’objets. 
Il est donc plus intéressant de créer des modèles simplifiés basés sur des approximations. La majorité des modèles ont été conçus pour une politique de remplacement LRU avec le modèle dit IRM (Independent Reference Model). Quelques travaux récents ont traité cette problématique. En effet, G. Carofiglio et al. [36] proposent un modèle généralisé dans le cas d'un réseau de caches (architecture en arbre) ; ce modèle se limite aux cas d'arrivées suivant un processus de Poisson et d'une loi de popularité de type Zipf avec α > 1. Ce modèle s'applique à la mise en cache par paquet (comme CCN) et prend en compte la dépendance entre les paquets Data d'un même objet. Un autre modèle pour les réseaux de caches a été proposé par E. Rosensweig et al. [37] ; c'est un modèle adapté à toute architecture. Cependant, la complexité du calcul du taux de hit, dû à Dan et Towsley [38], limite cette approche à des réseaux et des populations de taille relativement faible.

8.2.3 La formule de Che

Nous pensons qu'une mise en cache par objet est plus simple à déployer et à utiliser qu'une mise en cache par paquet. La proposition de Che et al. [39] est particulièrement intéressante. Hormis sa facilité d'utilisation par rapport aux autres modèles, sa grande précision a été démontrée dans plusieurs cas. On considère un cache de taille C ; des objets appartenant à un catalogue de taille M arrivent au cache suivant une loi de popularité pop(n) proportionnelle à q(n). Sous un système conforme au modèle IRM, la probabilité de hit h(n) d'un objet n selon l'approximation de Che est estimée à :

h(n) = 1 − e^(−q(n) tc),   (8.1)

où tc est la solution de l'équation :

C = Σ_n (1 − e^(−q(n) tc)).   (8.2)

Cette approximation est centrale pour le reste du travail. Voici quelques éléments expliquant sa précision et sa validité comme modèle mathématique. On note Tc(n) le temps où exactement C objets différents de n ont été demandés. On suppose une première demande de l'objet n faite à l'instant 0 ; la prochaine requête pour l'objet n a lieu à τn, et cette demande est un hit si τn < Tc(n). La probabilité de hit de l'objet n peut être exprimée par :

h(n) = P(τn < Tc(n)).   (8.3)

Che et al. ont mesuré par simulation Tc(n) et ont observé qu'il est presque déterministe et montre une très faible variation en fonction du rang, même pour des catalogues petits (catalogue de 10 000 objets). Cette variable est presque indépendante de l'objet n et est caractéristique du catalogue. On pose alors E(Tc(n)) = tc, que Che et al. considèrent comme le « temps caractéristique » du cache. Puisque les arrivées de requêtes suivent un processus de Poisson, le temps inter-arrivée τn suit une loi exponentielle de paramètre q(n). On a donc la probabilité de hit h(n) de l'objet n, en réécrivant (8.3) :

h(n) = P(τn < tc) = 1 − exp(−q(n) tc).

Dans l'intervalle [0, tc], nous avons exactement C arrivées sans compter l'objet n. Donc, à cet instant précis, parmi les M objets du catalogue, C objets exactement sont arrivés au cache à l'instant tc, sans compter l'objet n. Ceci peut être exprimé par :

Σ_{i=1, i≠n}^{M} P(τi < tc) = C,

où τi est le temps séparant le début de l'observation t = 0 du temps d'arrivée de la demande de l'objet i au cache, donc de loi exponentielle de paramètre q(i). Cette équation permet de calculer le temps caractéristique tc des caches. Mais, pour plus de facilité, l'équation devient : Σ_{i=1}^{M} P(τi < tc) = C.
Ceci est valable si la popularité individuelle de l'objet est relativement petite par rapport à la somme des popularités. En utilisant le fait que τi est de loi exponentielle de paramètre q(i), l'équation devient l'équation (8.2) : C = Σ_{i=1}^{M} (1 − e^(−q(i) tc)). L'approximation n'est pas seulement précise dans le cas, envisagé par Che et al., d'un grand cache et d'une population importante, mais également pour des systèmes très petits. Nous avons vérifié la validité de l'approximation par simulation, pour un seul cache de taille 10^4 et une loi Zipf(0.8), ou un cache de taille 16 et une loi géométrique Geo(0.5). La figure 8.2 montre la précision de l'approximation et sa validité même dans le cas d'un petit cache.

Figure 8.2 – Taux de hit en fonction de la taille du cache en utilisant l'approximation de Che : à gauche, N = 10^4, popularité Zipf(0.8), rangs 1, 10, 100, 1000 ; à droite, N = 16, popularité Geo(0.5), rangs 1, 2, 4, 8.

Pour calculer le h(n), il faut d'abord trouver le tc qui est le zéro de l'équation (8.2). Pour trouver le zéro de cette équation, on utilise la méthode de Newton : on peut trouver une valeur proche du zéro d'une fonction f(x) en calculant successivement une suite de valeurs xi jusqu'à l'obtention d'une approximation satisfaisante ; x_{i+1} est calculé à partir de la valeur de x_i :

x_{i+1} = x_i − f(x_i)/f'(x_i).   (8.4)

Le x_0 est choisi arbitrairement et on calcule x_1 en utilisant la formule (8.4). On recalcule f(x_1) et, si f(x_1) est suffisamment proche de zéro, x_1 est alors le zéro de f(x) ; sinon on calcule x_2, etc. Nous constatons que la convergence est extrêmement rapide.

8.3 Autres politiques de remplacement

8.3.1 Le cache Random

8.3.1.1 Relation entre taux de hit et temps moyen de séjour

On considère un cache de taille C utilisant une politique de remplacement Random. Dans cette politique, lorsqu'il faut libérer de la place pour cacher un nouveau contenu, le contenu à éliminer est choisi au hasard. La taille du catalogue est M. On note Ts(n) le temps de séjour de l'objet n. On commence nos observations sur des simulations pour un catalogue de petite taille (100 objets) et un cache de 50 objets. Nous étudions le cas des objets arrivant suivant une loi Zipf(0.6) et Zipf(1.2). Les requêtes arrivent avec un taux de 100 requêtes/s. On lance la simulation pour 1 000, 10 000 et 100 000 itérations. On trace dans ces trois cas la moyenne du temps de séjour Ts(n) en fonction de n (voir figure 8.3).

Figure 8.3 – Temps de séjour en fonction du rang pour un cache Random

On remarque que, plus le nombre d'itérations augmente, plus la moyenne du temps de séjour tend vers une valeur précise. Cette valeur est la même quel que soit le rang. En se basant sur cette observation, on considère que la valeur du temps de séjour est indépendante de n (en adoptant la même approximation que Che). Le Ts est fixe quand le temps de simulation devient grand, car tout objet du cache a la même probabilité que les autres objets d'être éliminé.
Son élimination dépend surtout du taux de miss du cache, qui devient fixe et stable après un certain temps de simulation. Nous appliquons la formule de Little. La probabilité de hit d'un objet n est exprimée par h(n) = λ(n) Ts(n), où Ts(n) est la moyenne du temps de séjour de l'objet n. Ts(n) étant fixe et indépendant de n, on pose Ts(j) = Ts pour tout j. Le taux d'entrée dans le cache est λ(n) = (1 − h(n)) pop(n), où pop(n) est la popularité normalisée de l'objet n. On obtient alors h(n) = (1 − h(n)) pop(n) Ts. Finalement, h(n) peut être exprimé par :

h(n) = pop(n) Ts / (1 + pop(n) Ts).   (8.5)

La moyenne du temps de séjour peut être calculée en utilisant l'équation suivante (comme dans le cas d'un cache LRU) :

Σ_{i=1}^{M} h(i) = Σ_{i=1}^{M} pop(i) Ts / (1 + pop(i) Ts) = C.   (8.6)

L'équation (8.6) peut être résolue avec la méthode de Newton. Nous utilisons la valeur Ts ainsi trouvée dans l'équation (8.5) pour déterminer les h(n). Les valeurs de h(n) obtenues par calcul et par simulation sont comparées dans la figure 8.4 pour les deux lois Zipf(0.6) et Zipf(1.2) et pour différentes tailles de cache : c = C/M = 0.3, 0.5, 0.7.

Figure 8.4 – Taux de hit en fonction du rang pour un cache Random

8.3.1.2 Approximation de Fricker

Dans Fricker, Robert and Roberts [40], l'approximation suivante est donnée :

Ts ≃ τC / Σ_{j≠n} q(j),   (8.7)

où τC est une constante. On va discuter de la validité de cette approximation. Le temps de séjour d'un objet n peut être exprimé, comme représenté dans la figure 8.5, par une somme de temps tj, où tj est la durée entre deux requêtes successives.

Figure 8.5 – Représentation du temps de séjour

À chaque arrivée au cache, l'objet n peut être retiré du cache avec une probabilité de 1/C si l'objet arrivé n'appartient pas déjà au cache. On suppose que toute nouvelle arrivée au cache implique une mise à jour, même si cette arrivée est un hit. Soit n fixé ; calculons le temps de séjour Ts(n). Tout objet i arrive au cache suivant un processus de Poisson de taux q(i). Les temps inter-arrivées Zi sont des variables aléatoires indépendantes de loi exponentielle de paramètre q(i). La prochaine requête susceptible de retirer l'objet n du cache se produit à un temps Xn1 :

Xn1 = inf_{i≠n} (Zi).   (8.8)

Donc Xn1 suit une loi exponentielle de paramètre Σ_{i≠n} q(i). Comme la politique de remplacement est Random, on en déduit facilement que

Ts(n) = Σ_{j=1}^{Y} Xnj,   (8.9)

où Xnj est de loi exponentielle de paramètre Σ_{i≠n} q(i), et Y est de loi géométrique de paramètre 1 − 1/C sur N*, indépendante de (Xnj)_{j≥1}. D'où, en passant à l'espérance dans l'équation (8.9),

Ts(n) = Σ_{i=1}^{+∞} P(Y = i) Σ_{k=1}^{i} E(Xnk).   (8.10)

Or, E(Xnk) = 1 / Σ_{j≠n} q(j), et comme Y suit une loi géométrique de paramètre 1 − 1/C sur N*, il vient que E(Y) = C. En effet, une v.a. de loi géométrique de paramètre a sur N* est de moyenne 1/(1 − a).
En reportant dans l'équation (8.10), le temps de séjour moyen peut donc être exprimé par :

Ts(n) = E(Y) / Σ_{j≠n} q(j) = C / Σ_{j≠n} q(j).

Revenons à l'approximation (8.7). L'idée sous-jacente dans [40] est qu'on peut approximer le temps de séjour de n en supposant que toute arrivée, même un hit, implique une mise à jour. Cela revient à supposer que tous les objets autres que n sont hors du cache. Intuitivement, cela est justifié si 1) le cache est petit devant la taille du catalogue, 2) les objets les plus populaires sont hors du cache, car ce sont eux qui contribuent le plus à Σ_{j≠n} q(j). Cette deuxième condition n'est pas du tout naturelle. On va voir, en traçant les différentes approximations du taux de hit, que cela est vrai pour une loi de popularité de Zipf de paramètre α < 1, où les objets ont des popularités plus voisines, que pour α > 1, où les objets les plus populaires sont dans le cache avec forte probabilité.

8.3.1.3 Approximation de Gallo

Gallo et al. [41] ont proposé une approximation du taux de hit pour une valeur de α > 1. La probabilité de miss d'un objet i est approximée, quand C est grand, par :

Miss(i) = ρα i^α / (C^α + ρα i^α),   où   ρα = ((π/α) / sin(π/α))^α.

Cela revient à

hitG(i) = 1 − Miss(i) ≈ 1 / (ρα (i/C)^α + 1).

Donc, tout calcul fait, le temps de séjour devrait être proportionnel à (C · sin(π/α) / (π/α))^α pour α > 1. On compare l'approximation du taux de hit (hitF) avec l'approximation de Gallo et al. [41] (hitG), pour des valeurs de α > 1 et pour un catalogue de M = 20 000 ; voir figure 8.6. On remarque que les deux approximations sont proches même pour de petites tailles de cache (C ≥ 20).

Figure 8.6 – Comparaison des taux de hit avec l'approximation de Fricker et Gallo

8.3.2 Le cache LFU

Le cache LFU ne stocke que les objets les plus populaires. Donc, la probabilité de hit LFU peut être calculée, pour un catalogue de taille M et pour un cache LFU de taille C objets, par : hit(i) = 1 pour 1 ≤ i ≤ C, et hit(i) = 0 pour i > C. La probabilité de hit globale d'un cache LFU peut donc être exprimée par :

hitg = Σ_{i=1}^{C} pop(i),

où pop(i) est la popularité normalisée de l'objet i. Pour une loi de Zipf, la popularité normalisée de l'objet i peut être exprimée par :

pop(i) = (1/i^α) / Σ_{k=1}^{M} (1/k^α).

Donc la probabilité globale de hit pour un cache LFU peut être exprimée par :

hitg = Σ_{i=1}^{C} (1/i^α) / Σ_{i=1}^{M} (1/i^α).

Soit i un entier et t un réel tels que i ≤ t ≤ i + 1. Pour α > 0, on a :

1/(i + 1)^α < 1/t^α < 1/i^α.

Donc,

1/(i + 1)^α < ∫_{i}^{i+1} dt/t^α < 1/i^α,

d'où

((M + 1)^{1−α} − 1) / (1 − α) < Σ_{i=1}^{M} 1/i^α < M^{1−α} / (1 − α)

et

((C + 1)^{1−α} − 1) / (1 − α) < Σ_{i=1}^{C} 1/i^α < C^{1−α} / (1 − α).

Puisque le nombre d'objets est grand, nous considérons M + 1 ≈ M ; nous utilisons des caches d'au moins quelques centaines d'objets, donc C + 1 ≈ C. Nous concluons que :

Σ_{i=1}^{M} 1/i^α ≈ M^{1−α} / (1 − α)   et   Σ_{i=1}^{C} 1/i^α ≈ C^{1−α} / (1 − α).   (8.11)

La probabilité de hit globale pour un cache LFU peut donc être exprimée par :

hitg = (C/M)^{1−α}.
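Les formules qui précèdent se prêtent à un calcul numérique direct. L'esquisse Python ci-dessous (hypothétique, valeurs d'exemple) calcule le taux de hit global d'un cache sous les trois politiques : LRU via l'approximation de Che (équations (8.1) et (8.2), le zéro étant obtenu par la méthode de Newton), Random via le point fixe (8.5)–(8.6), et LFU en sommant les popularités des C objets les plus populaires, pour une popularité Zipf donnée.

import math

def popularite_zipf(M, alpha):
    """Popularités normalisées pop(n) proportionnelles à 1/n^alpha."""
    w = [1.0 / (n ** alpha) for n in range(1, M + 1)]
    s = sum(w)
    return [x / s for x in w]

def hit_lru_che(pop, C, tol=1e-8):
    """Approximation de Che : résolution de sum_n (1 - exp(-q(n)*tc)) = C par Newton,
    puis h(n) = 1 - exp(-q(n)*tc)."""
    tc = float(C)                                   # point de départ (toujours sous la racine)
    for _ in range(100):
        f = sum(1.0 - math.exp(-q * tc) for q in pop) - C
        fp = sum(q * math.exp(-q * tc) for q in pop)
        pas = f / fp
        tc -= pas
        if abs(pas) < tol:
            break
    return [1.0 - math.exp(-q * tc) for q in pop]

def hit_random(pop, C, tol=1e-8):
    """Politique Random : h(n) = pop(n)*Ts / (1 + pop(n)*Ts), Ts fixé par sum_n h(n) = C."""
    ts = float(C)
    for _ in range(100):
        f = sum(q * ts / (1.0 + q * ts) for q in pop) - C
        fp = sum(q / (1.0 + q * ts) ** 2 for q in pop)
        pas = f / fp
        ts -= pas
        if abs(pas) < tol:
            break
    return [q * ts / (1.0 + q * ts) for q in pop]

def hit_global_lfu(pop, C):
    """Cache LFU : seuls les C objets les plus populaires sont stockés."""
    return sum(pop[:C])

if __name__ == "__main__":
    M, C, alpha = 10**4, 10**3, 0.8
    pop = popularite_zipf(M, alpha)
    # taux de hit global = somme des pop(n) * h(n)
    lru = sum(p * h for p, h in zip(pop, hit_lru_che(pop, C)))
    rnd = sum(p * h for p, h in zip(pop, hit_random(pop, C)))
    lfu = hit_global_lfu(pop, C)
    print(f"hit global : LRU (Che) ~ {lru:.3f}, Random ~ {rnd:.3f}, LFU ~ {lfu:.3f}")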
MESURE DU TRAFIC ET PERFORMANCES DES CACHES 63 8.3.3 Comparaison des politiques de remplacement On compare les deux politiques de remplacement LRU et Random en fonction du rang, pour des valeurs de α = 0.6 et α = 1.2, et pour un cache de 50% la taille du catalogue. On fixe le catalogue M = 106 objets. Il est clair que la politique LRU est plus performante que Random. On remarque aussi que l’´ecart entre LRU et Random est r´eduit pour un α = 1.2, ce qui est confirm´e par l’´etude de Gallo et al. [41]. Cet ´ecart se r´eduit de plus en plus quand α grandit. Mais pour une comparaison effective, il est imp´eratif de comparer le taux de hit global des caches Random, LRU et LFU. 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 hitg c LRU LFU Random (a) Zipf(0.6) 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 hitg c LRU LFU Random (b) Zipf(1.2) Figure 8.7 – Taux de hit global en fonctions des tailles de cache normalis´e On remarque que la diff´erence des performances des caches est plus grande et visible dans le cas d’un Zipf(0.6). Cette diff´erence diminue pour un α = 1.2, non seulement entre LRU et Random, mais aussi LFU. Les caches deviennent aussi plus efficaces. Or, selon notre ´etude bibliographique et les statistiques tir´ees de certains sites fournisseurs de donn´ees, on sait que le α est g´en´eralement < 1. La politique LFU dans ce cas s’´eloigne largement de LRU et Random. Mais un petit ´ecart est `a noter entre les deux politiques de remplacement LRU et Random.Chapitre 9 Les performances des hierarchies de caches ´ 9.1 Caract´eristiques d’une hi´erarchie de caches Pour ´evaluer les performances des hi´erarchies de caches, il est important de pr´eciser ses caract´eristiques. Une hi´erarchie de caches est diff´erente d’une autre si un ou plusieurs de ces caract´eristiques sont diff´erentes. 9.1.1 Politique de remplacement La fonction d’un algorithme de remplacement est de choisir un contenu `a remplacer dans un cache quand le cache est plein, et qu’un nouveau contenu doit ˆetre enregistr´e. Un algorithme optimal ´elimine le contenu qui serait le moins utilis´e. Pour ceci, l’´evolution des popularit´es des objets devrait ˆetre connue. Puisque la variation des popularit´es des objets se produit sur un temps beaucoup plus grand que le temps n´ecessaire pour remplir un cache, les pr´edictions futures peuvent se baser sur le comportement pass´e ; ce qu’on appelle principe de localit´e. On trouve diff´erentes politiques de remplacement – LRU (Least Recently Used) : cet algorithme remplace le contenu utilis´e le moins r´ecemment. Il se base sur le principe de localit´e temporelle. Un objet tr`es populaire sera demand´e plus rapidement qu’un objet moins populaire. L’impl´ementation de cette politique est simple. Il suffit d’attribuer `a chaque objet du catalogue un hash, le hash correspond `a une case permettant de renseigner l’adresse de l’objet recherch´e (si l’objet n’existe pas, l’adresse correspond `a NULL). Des pointeurs permettent de relier les objets et de sauvegarder l’ordre des objets. 64CHAPITRE 9. LES PERFORMANCES DES HIERARCHIES DE CACHES ´ 65 – LFU (Least Frequently Used) : cet algorithme remplace le contenu le moins fr´equemment utilis´e. Il est optimal pour des popularit´es fixes, mais les variations des popularit´es des objets le rend moins efficace. Si les variations des popularit´es sont lentes, LFU est un bon algorithme de remplacement. De plus, il est facile `a impl´ementer, il suffit d’attribuer `a chaque contenu le nombre de fois o`u il a ´et´e demand´e. 
Gallo et al. [41] ont démontré, par simulation et par calcul, que la différence de performances observée entre un cache LRU et un cache Random n'est pas importante, surtout pour des popularités d'objets suivant une loi de Zipf avec α > 1 (α = 1.7). Nous avons simulé des caches LRU et Random avec une loi Zipf(0.6) : le constat reste le même, la différence entre la probabilité de hit obtenue avec LRU et celle obtenue avec Random est faible. Elle atteint cependant 16 % dans certains cas et, même lorsqu'elle semble négligeable, elle peut réduire l'utilisation de la bande passante de manière importante.

9.1.2 Les politiques de meta-caching

– LCE (Leave Copy Everywhere) : les objets sont copiés dans chaque cache traversé.
– Fix [42] : cette politique consiste à mettre un objet dans le cache selon une probabilité fixe.
– LCD (Leave Copy Down) [42] : copier uniquement dans le cache suivant, c'est-à-dire un niveau plus bas que celui où le hit a eu lieu. Selon Laoutaris, cet algorithme offre les meilleurs résultats dans tous les cas étudiés et paraît donc le plus prometteur de tous.
– ProbCache [43] : un objet est copié dans un cache suivant une probabilité calculée à partir du nombre de sauts traversés. Psaras et al. présentent des résultats remettant en cause LCD comme meilleur algorithme de meta-caching, mais la différence de performances entre les deux algorithmes reste petite. D'autre part, Rossini et al. [44] constatent, inversement, que LCD offre de meilleurs résultats.
– WAVE [45] : c'est un algorithme de meta-caching orienté chunk. Il est similaire à LCD, sauf que des variables sont utilisées pour contrôler le stockage des données selon leur popularité ; les objets peu populaires ont peu de chances de traverser tous les caches menant au demandeur. Cet algorithme semble plus complexe que LCD, qui stocke déjà naturellement les objets les plus populaires tout près des utilisateurs.
– Btw [46] : cet algorithme se base sur le stockage des données uniquement dans les noeuds pertinents du réseau, c'est-à-dire ceux ayant la probabilité la plus élevée d'aboutir à un hit pour les objets demandés.

Plusieurs études mettent en valeur la politique LCD, en raison de son efficacité constatée dans les études comparatives, mais aussi de sa simplicité par rapport aux autres politiques. L'étude récente de Rossini et al. [44] confirme l'efficacité de LCD par rapport aux autres politiques proposées. Dans cette perspective, Laoutaris et al. ont présenté une étude portant sur une hiérarchie LCD [47] ; cette étude commence par une comparaison entre plusieurs politiques de meta-caching. La politique LCD semble être la meilleure, avec MCD (Move Copy Down).
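Pour fixer les idées sur la différence entre LCE et LCD sur un chemin à deux niveaux, en voici une esquisse minimale en Python (simulation simplifiée sous les hypothèses habituelles : caches LRU, un seul chemin cache d'accès → cache parent → serveur). Les noms de fonctions sont des choix d'illustration et ne proviennent pas du document.

```python
from collections import OrderedDict

def lru_insert(cache, capacity, obj):
    """Insère obj dans un cache LRU représenté par un OrderedDict."""
    cache[obj] = True
    cache.move_to_end(obj)
    if len(cache) > capacity:
        cache.popitem(last=False)

def request(obj, l1, l2, c1, c2, policy="LCD"):
    """Traite une requête sur le chemin utilisateur -> L1 -> L2 -> serveur.
    Retourne le niveau où l'objet a été trouvé (1, 2 ou 'serveur')."""
    if obj in l1:                       # hit au premier niveau
        l1.move_to_end(obj)
        return 1
    if obj in l2:                       # hit au deuxième niveau
        l2.move_to_end(obj)
        lru_insert(l1, c1, obj)         # LCE comme LCD : copie un niveau plus bas que le hit
        return 2
    # miss partout : l'objet vient du serveur d'origine
    lru_insert(l2, c2, obj)             # les deux politiques copient dans le cache sous le serveur
    if policy == "LCE":
        lru_insert(l1, c1, obj)         # LCE copie aussi au niveau 1 ; LCD attend un futur hit en L2
    return "serveur"

# Exemple : sous LCD, l'objet 5 n'entre en L1 qu'après avoir été trouvé en L2
l1, l2 = OrderedDict(), OrderedDict()
for o in [5, 5, 7, 5]:
    print(o, request(o, l1, l2, c1=2, c2=4, policy="LCD"))
```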
Cette étude présente aussi un modèle analytique pour calculer numériquement le taux de hit d'une hiérarchie LCD à deux niveaux. La probabilité de hit au premier niveau d'un objet i, pour une hiérarchie de caches LCD, y est estimée par :

h1(i) = (e^{λi·τ1} − 1) / ( e^{λi·τ1} − 1 + e^{λi·τ2} / (e^{λi·τ2} − 1) )

où τ1 et τ2 représentent les temps caractéristiques des caches au premier et au deuxième niveau ; la probabilité de hit au premier niveau dépend donc de la taille du cache au deuxième niveau. La probabilité de hit au deuxième niveau vient directement de la formule de Che pour LCE, en supposant les arrivées au deuxième niveau indépendantes :

h2(i) = 1 − exp(−λi · Miss1(i) · τ2)

où Miss1(i) est le taux de miss de l'objet i au premier niveau de caches. Les temps caractéristiques sont calculés en utilisant la formule Σi h(i) = C, où C est la taille du cache ; cette équation peut être utilisée au premier comme au deuxième niveau de cache. Nous obtenons alors deux équations à deux inconnues, τ1 et τ2, que l'on résout en utilisant la méthode de Newton appliquée aux systèmes à deux inconnues ; pour ce faire, on est amené à calculer l'inverse de la matrice jacobienne.

Nous comparons le modèle mathématique avec les simulations ; les résultats sont présentés dans le graphique 9.1.

Figure 9.1 – Taux de hit au premier niveau, en fonction du rang, calculé par simulation et par le modèle analytique de Che : (a) Zipf(0.6), (b) Zipf(0.9), pour c1 = c2 = 0.1 et c1 = c2 = 0.3.

Nous avons également effectué des simulations comparant la politique LCE et la politique LCD, jugée la meilleure de toutes les politiques de meta-caching. Nous présentons dans les graphes 9.2 les résultats des simulations pour une hiérarchie de caches à deux niveaux, avec 10 serveurs au premier niveau attachés à un serveur au deuxième niveau. Les résultats montrent un avantage de LCD par rapport à LCE ; cet avantage diminue au fur et à mesure que la taille des caches augmente, réduisant ainsi l'utilité de LCD. LCD permet de réduire le nombre de copies inutiles dans une hiérarchie. La politique MCD, en plus de copier les objets dans le cache inférieur, efface les objets des caches supérieurs, mais son efficacité reste presque identique à celle de LCD, surtout dans le cas d'une hiérarchie à deux niveaux. Ainsi, LCD paraît la politique la plus simple et la plus rentable. Nous comparons aussi la probabilité de hit globale des hiérarchies afin d'évaluer l'efficacité globale des algorithmes (figure 9.3).

Figure 9.2 – Taux de hit au premier niveau pour une hiérarchie LCE et LCD : (a) Zipf(0.6), (b) Zipf(0.9), pour c1 = 0.1, 0.3 et 0.5.

9.1.3 Les politiques de forwarding

– SPR (Shortest Path Routing) : cette politique consiste à chercher l'objet en suivant le chemin le plus court vers le serveur d'origine.
– Flooding [48] : envoyer la demande à tous les noeuds et suivre le chemin trouvé. Cette technique est lourde à mettre en place et très coûteuse.
– INFORM [49] : chaque noeud sauvegarde les valeurs correspondant au temps de latence nécessaire pour arriver à une destination en passant par chaque noeud voisin.
Le noeud voisin, menant `a destination et offrant le moins de temps pour y arriver, est s´electionn´e pour r´ecup´erer ou envoyer les prochains paquets. – CATT [50] : Le choix du prochain noeud `a suivre pour arriver `a la donn´ee est effectu´e suivant le calcul du param`etre nomm´e potential value. Ce param`etre68 9.1. CARACTERISTIQUES D’UNE HI ´ ERARCHIE DE CACHES ´ 0 0.2 0.4 0.6 0.8 1 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 hitg C2 LCE LCD (a) Zipf(0.6) 0 0.2 0.4 0.6 0.8 1 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 hitg C2 LCE LCD (b) Zipf(0.9) Figure 9.3 – Taux de hit global pour une hi´erarchie LCE et LCD `a deux niveaux pour les cas de bas en haut c1 = 0.1, c1 = 0.3 et c1 = 0.5 peut ˆetre ´evalu´e selon le nombre de sauts, la situation g´eographique, ou la qualit´e de la bande passante s´eparant le noeud voisin des noeuds contenant les donn´ees. – NDN [51] : Cette strat´egie est propos´ee actuellement pour les r´eseaux orient´es contenus. Elle utilise des tables FIB, PIT et CS ; mais les FIB doivent ˆetre remplies suivant une autre m´ethode. Cet algorithme suppose des tables FIB compl`etes. – NRR(Nearest Routing Replica) : C’est une strat´egie id´ealiste qui consiste `a trouver la copie la plus proche du cache. Cette politique est lourde `a mettre en place car il faut maintenir des tables de routage orient´ees contenu tr`es dynamiques. La majorit´e des algorithmes propos´es r´ecemment se basent sur le calcul de param`etres de performances menant au cache contenant la donn´ee. Tout ceci n´ecessite des m´ecanismes de signalisation et d’´echange de messages de contrˆole afin d’identifier p´eriodiquement les chemins menant `a tous les objets. Cette op´eration est non seulement coˆuteuse, mais aussi pose des probl`emes de passage `a l’´echelle. SPR reste jusqu’`a pr´esent l’algorithme le plus utilis´e, et le plus simple ne pr´esentant aucun probl`eme de passage `a l’´echelle. Si la donn´ee est pertinente et populaire, elle devrait se trouver dans l’un des caches du plus court chemin menant au serveur d’origine. Ce dernier, est statique et invariable, sauf en cas de panne. L’utilisation des chemins secondaires est conditionn´ee par des probl`emes de congestion dans le r´eseau. La politique NRR semble ˆetre la politique la plus efficace offrant les meilleurs taux de hit. Nous comparons la politique SPF avec NRR afin d’´evaluer la diff´erence entre la politique la plus performante, et la politique la plus utilis´ee pour le routage. Les r´esultats sont pr´esent´es dans le graphique 9.4 :CHAPITRE 9. LES PERFORMANCES DES HIERARCHIES DE CACHES ´ 69 0 0.2 0.4 0.6 0.8 1 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 hit1 c1 SPF NRR (a) Zipf(0.6) 0 0.2 0.4 0.6 0.8 1 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 hit1 c1 SPF NRR (b) Zipf(0.9) Figure 9.4 – Taux de hit au premier niveau pour les politiques de forwarding NRR et SPF 9.2 Performances des hi´erarchies de caches Dans cette partie, nous ´etudions le cas d’une hi´erarchie LRU, avec une politique de forwarding se basant sur la recherche dans le serveur d’origine le plus proche. Nous nous limitons au cas de deux niveaux de cache ; Sem Borst [52] affirme qu’il n’y a aucune utilit´e `a utiliser un cache coop´eratif de plus de deux niveaux. 9.2.1 G´en´eralisation de la formule de Che Il a ´et´e mentionn´e dans la section pr´ec´edente que la corr´elation des demandes `a un deuxi`eme niveau de cache n’a qu’une petite influence sur les probabilit´es de hit. 
Nous démontrons, par simulation, que la formule de Che reste valide pour un deuxième niveau de cache, à condition d'avoir suffisamment de caches au premier niveau, ce qui atténue la corrélation entre les flux de requêtes. On considère un catalogue de M objets ; les demandes arrivent au premier niveau suivant une loi de popularité Zipf(α). On mesure la probabilité de hit globale de l'architecture à deux niveaux, constituée de n caches au premier niveau et d'un seul cache au deuxième niveau. Les caches au premier niveau ont la même taille C1 ; on pose c1 = C1/M, la taille normalisée des caches au premier niveau. C2 est la taille du cache au deuxième niveau et c2 = C2/M sa taille normalisée. On utilise la formule de Che au deuxième niveau de caches, en considérant la popularité pop2(i) de l'objet i à l'entrée du cache de deuxième niveau :

pop2(i) = pmiss1(i) · pop1(i)

où pmiss1(i) est la probabilité de miss de l'objet i au premier niveau. La formule de Che au premier niveau s'applique normalement :

pmiss1(i) = exp(−pop1(i) · tc1), où tc1 est la solution de l'équation C1 = Σ_{i=1}^{M} (1 − e^{−pop1(i)·tc1}).

La formule de Che appliquée au deuxième niveau donne :

pmiss2(i) = exp(−pop2(i) · tc2), où tc2 est la solution de l'équation C2 = Σ_{i=1}^{M} (1 − e^{−pop2(i)·tc2}).

La probabilité de hit globale hitg(i) de l'objet i est alors :

hitg(i) = hit1(i) + hit2(i) − hit1(i) · hit2(i),

et la probabilité de hit globale de toute l'architecture est :

hitg = Σ_i pop1(i) · hitg(i).

Comme on peut le remarquer, n n'intervient pas dans ce calcul de hitg ; Che propose une formule plus complexe que l'équation initiale, incluant n, mais cette formule est difficile à exploiter et nos calculs ne semblent pas donner de résultats plus satisfaisants. Nous souhaitons donc savoir si la formule de Che pour un cache isolé reste valable pour plusieurs niveaux de cache.
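Le calcul à deux niveaux décrit ci-dessus se programme directement. En voici une esquisse minimale en Python (popularités Zipf normalisées, résolution de l'équation du temps caractéristique par dichotomie, la fonction étant croissante en tc) ; les noms de fonctions (solve_tc, hit_global_2_niveaux) sont des choix d'illustration et ne proviennent pas du document.

```python
import numpy as np

def zipf_pop(M, alpha):
    """Popularités normalisées q(i) proportionnelles à 1/i^alpha, i = 1..M."""
    q = 1.0 / np.arange(1, M + 1) ** alpha
    return q / q.sum()

def solve_tc(pop, C, iters=60):
    """Résout C = sum_i (1 - exp(-pop(i)*tc)) par dichotomie."""
    lo, hi = 0.0, 1.0
    while np.sum(1 - np.exp(-pop * hi)) < C:   # élargit l'intervalle de recherche
        hi *= 2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.sum(1 - np.exp(-pop * mid)) < C:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def hit_global_2_niveaux(M, alpha, C1, C2):
    """Formule de Che appliquée aux deux niveaux : renvoie (hit1, hit2, hitg)."""
    pop1 = zipf_pop(M, alpha)
    tc1 = solve_tc(pop1, C1)
    pmiss1 = np.exp(-pop1 * tc1)
    pop2 = pmiss1 * pop1                       # popularité à l'entrée du niveau 2
    tc2 = solve_tc(pop2, C2)
    hit1, hit2 = 1 - pmiss1, 1 - np.exp(-pop2 * tc2)
    hitg_i = hit1 + hit2 - hit1 * hit2
    return (pop1 * hit1).sum(), (pop1 * hit2).sum(), (pop1 * hitg_i).sum()

# Exemple : M = 10^5 objets, alpha = 0.6, c1 = c2 = 0.1
print(hit_global_2_niveaux(M=100000, alpha=0.6, C1=10000, C2=10000))
```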
On compare les résultats des calculs avec ceux des simulations dans les cas suivants (figures 9.5 et 9.6).

Figure 9.5 – Probabilités de hit global pour α = 0.6, en fonction de c2 et pour c1 allant de 0.1 à 0.9, avec un nombre de caches au premier niveau égal à 2 (gauche) et 5 (droite).

Figure 9.6 – Probabilités de hit global pour α = 0.8, dans les mêmes conditions.

On remarque que les résultats des simulations sont proches des résultats de calcul pour n = 5, et l'approximation de Che devient de plus en plus exacte à mesure que n augmente ; nous avons vérifié qu'elle est très précise pour n ≥ 10. On compare ensuite les taux de hit globaux à chaque niveau de cache, pour une hiérarchie de caches à 5 niveaux, obtenus par simulation (hitgs) et par calcul (hitgc). Les tableaux ci-dessous représentent le taux d'erreur Te pour des valeurs de n de 2 et 5 (chaque noeud a n fils). Le taux d'erreur est calculé ainsi :

Te = (hitgs − hitgc) / hitgs

On effectue le calcul et la simulation dans le cas de caches de même taille normalisée c (cas CCN) ; on pose Tg la taille globale, Tg = 5·c. Les conclusions tirées sont identiques, même si les tailles de cache diffèrent d'un niveau à l'autre.

Table 9.1 – Taux d'erreur pour la formule de Che pour n = 2
niveau              M = 10^4   M = 10^5
Tg = 20% : level2     13%        17%
           level3     23%        25%
           level4     28%        29%
           level5     30%        32%
Tg = 80% : level2     14%        17%
           level3     23%        20%
           level4     25%        30%
           level5     28%        21%

Table 9.2 – Taux d'erreur pour la formule de Che pour n = 5
niveau              M = 10^3   M = 10^4
Tg = 20% : level2     10%         6%
           level3     11%        13%
           level4     11%         9%
           level5     12%        10%
Tg = 80% : level2     1.3%        4%
           level3     3.5%       6.8%
           level4     5.8%       7.3%
           level5     6%         6.9%

Il est clair que la formule de Che reste une excellente approximation, même pour une hiérarchie de caches, et à tous les niveaux de caches.

Chaque objet dans CCN est constitué d'un certain nombre de chunks ; on peut appliquer l'approximation de Che au niveau des chunks, en considérant que la dépendance entre chunks est négligeable. On considère un catalogue de M objets et un cache de taille C ; chaque objet k est de taille Tk et constitué de Tk/s chunks, où s est la taille d'un chunk. La probabilité de hit du chunk oik de l'objet k est :

h(oik) = 1 − exp(−pop(oik) · tc)

où tc est la solution de l'équation :

C = Σ_{k=1}^{M} Σ_{i=1}^{Tk/s} (1 − exp(−pop(oik) · tc)).

Les chunks appartenant au même objet ont la même popularité que l'objet : pop(oik) = pop(k). Donc tc se calcule avec l'équation :

C = Σ_{k=1}^{M} (Tk/s) · (1 − exp(−pop(k) · tc)).
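Cette dernière équation se résout de la même façon que précédemment, chaque objet k pesant simplement Tk/s chunks. Esquisse minimale en Python, sous les mêmes conventions (noms illustratifs) :

```python
import numpy as np

def tc_niveau_chunk(pop, tailles, s, C, iters=60):
    """Résout C = sum_k (Tk/s)*(1 - exp(-pop(k)*tc)) par dichotomie.
    pop : popularités des objets ; tailles : tailles Tk ; s : taille d'un chunk ;
    C : taille du cache exprimée en chunks."""
    pop = np.asarray(pop, dtype=float)
    poids = np.asarray(tailles, dtype=float) / s        # nombre de chunks par objet
    assert C < poids.sum(), "le cache doit être plus petit que le catalogue (en chunks)"
    occupation = lambda tc: np.sum(poids * (1 - np.exp(-pop * tc)))
    lo, hi = 0.0, 1.0
    while occupation(hi) < C:
        hi *= 2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if occupation(mid) < C else (lo, mid)
    return 0.5 * (lo + hi)
```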
9.2.2 Cas d'application : hiérarchie de caches avec mix de flux

Comme souligné précédemment, les données Internet sont non homogènes : plusieurs catégories de données, avec des lois de popularité et des tailles différentes, partagent les ressources du réseau. Nous nous limitons à une hiérarchie à deux niveaux. On considère un réseau constitué d'un premier niveau de caches (situés dans les routeurs d'accès) reliés à un grand cache de deuxième niveau situé au coeur du réseau (voir la figure 9.7). Notons que le deuxième niveau serait réalisé en pratique par un réseau de caches dont les contenus seraient typiquement coordonnés ; l'étude d'un tel réseau est l'objectif de nos travaux suivants.

Figure 9.7 – Hiérarchie de caches à deux niveaux (utilisateurs, niveau 1, niveau 2, sources)

On considère que le nombre de routeurs d'accès au premier niveau est grand ; les requêtes arrivant au deuxième niveau de cache peuvent donc être considérées comme indépendantes (i.e., l'IRM s'applique au deuxième niveau). La popularité au deuxième niveau de cache est égale à q′(n) = q(n)(1 − h(n)) ; il suffit d'appliquer la formule de Che, en utilisant cette nouvelle valeur de popularité, pour trouver la probabilité de hit au deuxième niveau h′(n). La figure 9.8 représente la probabilité globale de hit,

Σn q(n)·(h(n) + (1 − h(n))·h′(n)) / Σn q(n),

en fonction des tailles de cache au premier et au deuxième niveau. La loi de popularité est de type Zipf et la figure montre l'impact de son paramètre α.

Figure 9.8 – Taux de hit (%) en fonction de la taille des caches aux niveaux 1 et 2 : à gauche, α = 0.8, N = 10^4 ; à droite, α = 1.2.

La même approche a été appliquée dans le cas d'un mix de trafic, permettant ainsi de quantifier les tailles de cache nécessaires aux deux niveaux pour une probabilité de hit cible.

Figure 9.9 – Taux de hit (%) en fonction de la taille des caches aux niveaux 1 et 2 : à gauche trafic UGC, α = 0.8 ; à droite VoD, α = 1.2, N = 10^4.

Les caches au premier niveau sont efficaces pour le trafic VoD (voir figure 9.9). En effet, avec un cache de taille 10^12 (1 TB), la probabilité de hit du trafic VoD est de 80 %. Avec un cache de même taille, on ne peut dépasser 20 % de probabilité de hit pour les autres types de trafic (UGC, fichiers partagés et web) ; il serait donc nécessaire de choisir des tailles de caches beaucoup plus grandes. Les tailles de cache nécessaires au premier niveau restent petites pour la VoD, permettant ainsi leur insertion dans un routeur d'accès : les VoD peuvent être stockées au premier niveau, contrairement aux autres types de données.

Dans la table ci-dessous, nous évaluons la bande passante économisée par une succession de deux caches : le premier, situé au premier niveau, est de taille 1 TB, et le deuxième, situé au deuxième niveau, est de taille 100 TB. L'évaluation est effectuée dans le cas d'un partage normal entre contenus, et dans le cas d'un premier niveau réservé à la VoD. Les résultats présentés correspondent au trafic mix de 2011 et de 2015.

Table 9.3 – Réduction de bande passante pour C1 = 1 TB et C2 = 100 TB
Année  Zipf VoD (α)  Niveau 1   Réduction BP au niveau 1   Réduction BP aux niveaux 1 et 2
2011   0.8           partagé    17%                        50%
                     VoD        23%                        58%
       1.2           partagé    24%                        50%
                     VoD        23%                        58%
2015   0.8           partagé    27%                        59%
                     VoD        37%                        61%
       1.2           partagé    36%                        59%
                     VoD        37%                        61%

On remarque qu'un stockage exclusif de la VoD au premier niveau améliore la probabilité de hit au premier niveau de cache ; ceci est plus significatif pour une loi Zipf(0.8). Dans tous les cas, un stockage discriminatoire des VoD au premier niveau améliore le taux de hit global résultant des deux caches, que ce soit pour une loi Zipf(0.8) ou Zipf(1.2) ; la différence devient moins importante pour le trafic futur en 2015.

Chapitre 10 Conclusion

Dans ce chapitre, nous avons évalué la nature, la quantité et la popularité des objets échangés sur Internet, le trafic étant très majoritairement constitué de contenu distribué. Nous avons ensuite évalué les performances d'un cache LRU ; pour ce faire, nous avons testé la formule de Che. Cette dernière est facile à utiliser numériquement, valable pour toute valeur de α de la loi de Zipf et pour plusieurs autres lois de popularité. Nous avons expliqué les raisons de son exactitude, en nous basant sur la démonstration présentée par Fricker et al. [40]. Nous avons aussi exploré le cas d'un cache Random et comparé ses performances à celles d'un cache LRU.
Nous avons décrit les caractéristiques d'une hiérarchie de caches, et comparé les performances des politiques de meta-caching et de forwarding optimales avec celles des politiques d'usage courant. Nous avons constaté, à travers un exemple basé sur des données représentatives des échanges actuels sur Internet, que pour des lois de popularité Zipf de paramètre α < 1, il faut choisir des tailles de cache très grandes pour réduire de manière significative l'utilisation de la bande passante ; cela est techniquement difficile à mettre en place au niveau d'un routeur, ce qui nous mène à penser qu'un déploiement au niveau applicatif est plus approprié. Dans la partie suivante, nous comparons les coûts des hiérarchies distribuées à la CCN et des hiérarchies centralisées à la CDN, ces dernières nous paraissant techniquement plus faciles à mettre en oeuvre.

Troisième partie Coûts d'une hiérarchie de caches

Chapitre 11 Introduction

11.1 Problématique

Dans la partie précédente, nous avons proposé un stockage différencié pour améliorer la probabilité de hit de la hiérarchie. Or, en stockant la majorité des contenus au deuxième niveau, la consommation de bande passante est plus élevée. Pour les opérateurs, deux critères importants permettent de prendre des décisions pour le stockage de données : les coûts et la difficulté de mise en oeuvre. Puisque les réseaux de caches comme CCN sont distribués, il faut prendre en compte, en plus du coût de la mémoire, le coût de la bande passante. Plus l'opérateur investit en mémoire, moins il investit en bande passante, et vice versa ; l'opérateur est donc amené à déterminer un tradeoff optimal entre bande passante et caches.

Déployés depuis des années, les CDN sont devenus un acteur principal gérant le contenu des sites les plus connus et les plus utilisés à travers le monde. Les CDN, contrairement aux CCN, sont des hiérarchies centralisées et non distribuées. Entre une technologie déjà éprouvée et une technologie en cours de recherche, le choix de l'opérateur est vite fait : les CDN sont déjà déployés et utilisés, et ne nécessitent aucune mise à jour au niveau des routeurs ou des protocoles réseau. Le seul enjeu pouvant encourager les opérateurs à opter pour une architecture distribuée est le coût. Le travail de Ghodsi et al. [5] n'apporte pas cet espoir pour l'avenir des réseaux CCN ; cet article vient, en plein milieu de notre travail de recherche, remettre en cause l'utilisation future des CCN. Les arguments avancés par Ghodsi et al. sont :
– La difficulté de changer tout le modèle Internet, de l'orienté IP vers l'orienté contenu. La convergence vers un réseau orienté contenu nécessite un changement au niveau des routeurs et des protocoles. Ceci demanderait beaucoup de temps, compte tenu du délai observé avant la mise en place d'une mise à jour pourtant mineure telle que l'adressage IPv6, sans compter le temps nécessaire pour l'implémentation d'une PKI assurant la sécurité des données.
Notre probl`eme revient `a ´evaluer les coˆuts d’une architecture de cache distribu´ee : qu’est ce qui coˆuterait plus cher, mettre des petits caches partout ou de grands caches au premier niveau ? Ou plus exactement dans une architecture en arbre, comment faut-il choisir la taille de cache dans les diff´erents niveaux pour avoir un taux de hit cible avec le moindre coˆut ? 11.2 Etat de l’art Certaines ´etudes abordent le probl`eme de l’optimisation en fixant un ou plusieurs param`etres. Par exemple, Dario Rossi et Giuseppe Rossini [54] se sont d´ej`a pench´es sur ce probl`eme. Ils ont conclu que l’utilisation de caches de tailles diff´erentes ne fait augmenter le gain que de 2,5% dans le meilleur des cas, et qu’il vaut mieux utiliser la mˆeme taille de cache, vu le gain faible et la d´et´erioration de la rapidit´e d’un cache quand sa taille augmente. Dans cette ´etude, la taille globale de caches a ´et´e fix´ee. Le coˆut de caching est donc fix´e et le facteur de performance est le taux de hit. Le coˆut de bande passante n’a pas ´et´e pris en compte. Une configuration est pr´ef´erable `a une autre si sa probabilit´e de hit est meilleure. On voit que ceci n’est pas suffisant. Par ailleurs, le param`etre α de la loi Zipf de popularit´e est suppos´e sup´erieur `a 1, ce qui n’est pas conforme aux mesures. Sem Borst et al. [52] calculent les coˆuts de bande passante et proposent un algorithme de coop´eration pour le placement des donn´ees dans une hi´erarchie de caches afin de minimiser ce coˆut. Le coˆut de la m´emoire n’est pas pris en consid´eration, donc le tradeoff m´emoire/bande passante n’est pas ´etudi´e. Kangasharju [55] suppose que la localisation des caches et la capacit´e sont fixes, et dans [47] est explor´e le probl`eme d’optimisation avec une taille fixe de cache global comme contrainte. Notre vision des choses est diff´erente, nous pensons que le coˆut global inclut le coˆut de la bande passante et le coˆut de la m´emoire. C’est un crit`ere de comparaison fiable entre architectures. D’autres ´etudes cherchent le tradeoff m´emoire/bande passante, comme par exemple Nussbaumer et al. [56]. Ils adoptent une ´evaluation de coˆut similaire `a la nˆotre, en prenant en compte les coˆuts de la bande passante et de la m´emoire pour une hi´erarchie80 11.2. ETAT DE L’ART de caches en arbre. Cependant, leurs r´esultats ne sont pas applicables pour nous. Nous rencontrons le mˆeme probl`eme avec l’article [57] qui n’offre aucun r´esultat num´erique exploitable. Il est clair que la taille des caches d´etermine le coˆut de la m´emoire qui est croissant en fonction de la taille. Par contre le coˆut de la bande passante devient plus petit car le taux de hit augmente avec la taille. Notre objectif est de d´eterminer la hi´erarchie de cache optimale, que ce soit une architecture avec, ou sans coop´eration de caches. On fixe la probabilit´e de hit global de l’architecture `a une probabilit´e de hit cible. Avec une hi´erarchie de caches, il est toujours possible d’am´eliorer les gains en utilisant la coop´eration, comme c’´etait le cas pour les proxies, et donc rendre la distribution de caches rentable par rapport `a une hi´erarchie centralis´ee. Pourtant, Wolman et al. [58] ont d´emontr´e, par simulation sur une large population de 107 ou 108 objets que la coop´eration n’apporte que tr`es peu de gain ; ce qui remet en question l’utilit´e de mettre en place un lourd m´ecanisme de coop´eration et d’´echange de messages entre serveurs. 
Pour une population r´eduite, utiliser un seul proxy revient moins cher et est aussi efficace que plusieurs caches coop´eratifs. Plus r´ecemment, Fayazbakhsh et al. [59] ont constat´e que le stockage `a l’edge offre des performances plutˆot bonnes en termes de temps de latence, de congestion, et de charge du serveur d’origine. De plus, il offre un ´ecart de 17% par rapport `a une architecture CCN presque optimale, ce que les auteurs consid`erent comme minime, et compensable en pla¸cant plus de m´emoire `a l’edge. L’article [44] a remis en question cette ´etude par une contre-´etude, en montrant que le fait de coupler une strat´egie de forwarding optimale (chercher l’objet dans le cache le plus proche contenant l’objet recherch´e) avec une politique de m´eta-caching optimale (LCD pour Leave a Copy Down) augmente consid´erablement la marge entre un caching `a l’edge et un caching distribu´e `a la CCN, ceci en consid´erant une popularit´e Zipf avec α = 1. Wang et al. [60] traitent le probl`eme d’allocation de la m´emoire et cherchent `a trouver l’optimum. Le raisonnement va dans le mˆeme sens : l’optimum est obtenu `a travers une distribution non ´equitable des tailles de caches dans une hi´erarchie ; cette distribution d´epend de la popularit´e des objets, et de la taille du catalogue. En somme, un caching exclusif `a l’edge est une solution contest´ee du fait qu’elle n’a pas ´et´e compar´ee `a une solution ICN optimale avec une strat´egie de forwarding, et de meta-caching optimales. Dans tous les cas, le probl`eme que nous ´etudions est diff´erent du probl`eme trait´e dans les articles pr´ec´edemment cit´es. En g´en´eral, les chercheurs fixent une taille de cache global, ou par cache individuel, et cherchent un optimum pour la probabilit´e de hit globale, ou un optimum selon le nombre moyen de sauts travers´es par les requˆetes pour trouver l’objet recherch´e, ce qui mesure implicitement la consommation de la bande passante. Le fait de fixer la taille de cache ´elimine desCHAPITRE 11. INTRODUCTION 81 solutions possibles pour les optima, sachant que l’optimum absolu pour les utilisateurs est un stockage entier du catalogue `a l’edge ´evitant ainsi tout d´eplacement vers les serveurs d’origine. Entre la taille n´ecessaire pour assurer le stockage et la consommation de bande passante, le choix des op´erateurs devrait se baser sur les coˆuts, sachant que les coˆuts de la m´emoire d´ecroissent tr`es vite. Un stockage `a l’edge ´eliminerait tous les probl`emes li´es au routage orient´e contenu qui nous paraissent lourdes et posent des probl`emes de passage `a l’´echelle. L’utilisateur demande un objet de son edge. Si le routeur d’acc`es ne d´etient pas l’objet, alors la requˆete est redirig´ee vers le serveur d’origine. Le tableau 11.1 r´esume quelques ´etudes ayant trait´e la probl´ematique de l’optimum. R´ef´erence Contraintes Optimums Fayazbakhsh et al. [59] Les caches ont la mˆeme taille Peu d’avantages `a cacher partout dans le r´eseau, un caching `a l’edge est b´en´efique Rossini et al. [44] Les caches ont la mˆeme taille Avec une politique de meta caching LCD (Leave a copy down) et une strat´egie de forwarding menant au cache le plus proche(NRR) contenant l’objet, nous obtenons un ´ecart de 4% en nombre de sauts par rapport `a un couplage LCD et une politique de forwarding SPF menant au serveur d’origine le plus proche. Borst et al. 
[52] Taille de caches fixes, recherche de l’optimum en terme de bande passante L’optimum inter-niveau correspond `a un stockage des objets les plus populaires dans toutes les feuilles de l’arbre, et les moins populaires dans le cache sup´erieur. Table 11.1 – Comparaison des quelques ´etudes des coˆuts des hi´erarchies de cachesChapitre 12 Cout d’une hi ˆ erarchie de ´ caches a deux niveaux ` Dans ce chapitre, nous nous int´eressons aux coˆuts d’une hi´erarchie de caches. Nous souhaitons r´epondre `a la question : est-ce b´en´efique d’utiliser des caches distribu´es plutˆot que des caches centralis´es ? Nous pensons que c’est la diff´erence de base entre les r´eseaux CCN et les CDN. Afin de voir clairement la diff´erence des deux solutions, nous commen¸cons par une hi´erarchie `a deux niveaux uniquement. Notre hi´erarchie est repr´esent´ee `a la figure 12 : Le trafic g´en´er´e par les demandes des utilisateurs est de A en bit/s, le requˆetes ..... Figure 12.1 – architecture `a deux niveaux proviennent de n noeuds d’acc`es au premier niveau. Tous les caches au premier niveau ont la mˆeme taille T1. Le cache de deuxi`eme niveau est de taille T2. On cherche la structure optimale avec le minimum de coˆut. Les tailles T1 et T2 varient afin d’obtenir `a chaque fois une structure diff´erente, mais chaque structure devrait r´ealiser un taux de hit fix´e `a une valeur cible (afin de pouvoir comparer les coˆuts 82CHAPITRE 12. COUT D’UNE HI ˆ ERARCHIE DE CACHES ´ A DEUX NIVEAUX ` 83 des architectures). Donc pour chaque valeur de T1, il existe une seule valeur de T2 v´erifiant un taux de hit global fixe. 12.1 Diff´erence des coˆuts entre structures Le coˆut d’une architecture est ´egal `a la somme des coˆuts de la bande passante, le coˆut de la m´emoire, et le coˆut de gestion au niveau des caches. – Le coˆut de la bande passante Cb est proportionnel (on introduira une constante kb) au trafic global g´en´er´e entre le premier et le deuxi`eme niveau. Le coˆut d’acc`es est le mˆeme pour toutes les architectures, et donc il s’annule en diff´erence Cb = kb × T raf ic . – Le coˆut de la m´emoire Cm est proportionnel `a la taille des caches Cm = km × Taille de la m´emoire . – Le coˆut de gestion au niveau des caches Cg est proportionnel au trafic, soit Cg = ks × T raf ic. La diff´erence de coˆut entre deux architectures ayant le mˆeme taux de hit global est : δcout(T1, T′ 1 ) = (pmiss1 − p ′ miss1 )(kb + ks)A + (n(T1 − T ′ 1 ) + (T2 − T ′ 2 ))km o`u – pmiss1 est le taux de miss de la premi`ere architecture, – pmiss′ 1 est le taux de miss de la deuxi`eme architecture, – T1 (T2) est la taille des caches au premier (deuxi`eme) niveau de la premi`ere architecture – et T ′ 1 (T ′ 2 ) est la taille des caches au premier (deuxi`eme) niveau de la deuxi`eme architecture. Les deux architectures ont le mˆeme taux de hit global et le mˆeme nombre de caches au premier niveau.84 12.2. ESTIMATION NUMERIQUE DES CO ´ UTS ˆ 12.2 Estimation num´erique des coˆuts Nous avons besoin de quelques estimations des coˆuts unitaires kb, km et ks. – kb : On estime kb `a 15 $ par Mbps ; c’est une valeur non publi´ee mais estim´ee par quelques sources sur le web, et par des ´echanges priv´es avec un op´erateur. Ce coˆut unitaire couvre le coˆut du transport et des routeurs. – km : On estime le coˆut unitaire de la m´emoire `a km=0.15 $ par Gigabyte. Cette estimation est d´eriv´ee des fournisseurs de cloud comme Amazon. – ks : Le coˆut unitaire de gestion ks est estim´e `a 10 c par Mbps. 
Cette observation est tir´ee des charges de t´el´echargement des offres Cloud. La valeur de ks est donc n´egligeable par rapport `a la valeur de kb de sorte que la diff´erence des coˆuts entre architectures devient : ∆cout(T1, T′ 1 ) = (pmiss1 − p ′ miss1 )kbA + (n(T1 − T ′ 1 ) + (T2 − T ′ 2 ))km. Etudier la diff´erence de coˆuts revient `a ´etudier la fonction δ d´efinie par : ∆(T1) = pmiss(T1)kbA + (nT1 + T2(T1))km. Notre objectif est de d´eterminer les tailles T1 (et donc T2 puisque le taux de hit global est fix´e) correspondant au minimum de la fonction ∆. Le trafic `a l’acc`es est de l’ordre de A = 1 Tbp, la taille moyenne de chaque objet est de 1 GB et la taille du catalogue est estim´ee `a M = 2 × 106 objets. De plus, notons que la probabilit´e de hit d´epend du rapport entre la taille du cache et le catalogue. Pour v´erifier ceci, on consid`ere un cache de taille T et un catalogue de taille M. On trace sur la figure 12.2 la probabilit´e de hit en fonction de T /M pour diff´erentes valeurs de M et deux lois de popularit´e, Zipf(0.6) et Zipf(0.8). On observe que la probabilit´e de hit pour une popularit´e Zipf(0.6) est presque la mˆeme pour des tailles de catalogue diff´erentes. Pour le cas Zipf(0.8), la probabilit´e de hit est l´eg`erement plus faible pour un catalogue de taille 2 × 104 mais les probabilit´es de hit se rapprochent pour les grandes tailles de catalogue. Dans tous les cas, `a partir d’une taille de cache normalis´ee de 10%, les valeurs sont tr`es rapproch´ees. Puisque les tailles de catalogues sont grandes en r´ealit´e, il est clair que la probabilit´e de hit ne d´epend en pratique que du rapport entre la taille du cache et du catalogue. On note c la taille normalis´ee du cache c = T /M, ¯c la taille normalis´ee du cache au deuxi`eme niveau et h(c) la probabilit´e de hit au premier niveau de cache. La diff´erence de coˆut devient : δ(c) = (1 − h(c))kbA + M(nc + ¯c)km.CHAPITRE 12. COUT D’UNE HI ˆ ERARCHIE DE CACHES ´ A DEUX NIVEAUX ` 85 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1e-05 0.0001 0.001 0.01 0.1 1 M = 2 ∗ 104 M = 2 ∗ 105 M = 2 ∗ 106 M = 2 ∗ 107 (a) Popularit´e Zipf α=0.6 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1e-05 0.0001 0.001 0.01 0.1 1 M = 2 ∗ 104 M = 2 ∗ 105 M = 2 ∗ 106 M = 2 ∗ 107 (b) Popularit´e Zipf α=0.8 Figure 12.2 – Taux de hit en fonction des tailles de caches normalis´ees T /M 12.2.1 Coˆut normalis´e Dans un premier temps, on consid`ere que l’op´erateur souhaite ´eviter de chercher les donn´ees en dehors de son r´eseau, et donc la probabilit´e de hit cible est ´egale `a 1. Dans ce cas ¯c = 0 ( est une diff´erence de tailles de caches au deuxi`eme niveau qui s’annule dans ce cas). On pose Γ = Akb Mnkm , le rapport entre le coˆut maximal de bande passante Akb et le coˆut maximal de m´emoire Mnkm. Le tradeoff entre m´emoire et bande passante est visible sur l’´equation. On note Γnominal la valeur de Γ correspondante aux valeurs cit´ees dans le paragraphe pr´ec´edent, Γnominal = 1012 × 15 × 10−6 0.15 × 10−9 × 100 × 2 × 106 × 109 = 0.5. Avec ce choix taux de hit cible, la diff´erence de coˆut normalis´e peut s’exprimer : δ(c) = Γ(1 − h(c)) + c. Cette fonction est pr´esent´e `a la figure 12.3. On observe que, pour Γ=0.5 (c’est-`a-dire la valeur nominale), la valeur optimale du coˆut correspond `a une taille de cache au premier niveau plutˆot petite (inf´erieure `a 10% du catalogue) ; plus Γ devient petit, plus on a int´erˆet `a tout cacher au deuxi`eme niveau. 
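À titre d'illustration, une esquisse minimale en Python du calcul de cette courbe de coût normalisé et de la taille optimale c, sous les mêmes hypothèses (hit cible de 1, h(c) donné par l'approximation de Che pour un cache LRU et une loi Zipf, résolution par dichotomie) ; les noms de fonctions et la grille de valeurs sont des choix d'illustration.

```python
import numpy as np

def zipf_pop(M, alpha):
    q = 1.0 / np.arange(1, M + 1) ** alpha
    return q / q.sum()

def hit_lru_che(pop, C, iters=60):
    """Probabilité de hit d'un cache LRU de taille C (approximation de Che)."""
    lo, hi = 0.0, 1.0
    while np.sum(1 - np.exp(-pop * hi)) < C:
        hi *= 2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.sum(1 - np.exp(-pop * mid)) < C else (lo, mid)
    tc = 0.5 * (lo + hi)
    return np.sum(pop * (1 - np.exp(-pop * tc)))

def c1_optimal(M, alpha, gamma, grille=np.linspace(0.01, 0.99, 99)):
    """Minimise le coût normalisé delta(c) = Gamma*(1 - h(c)) + c sur une grille de c."""
    pop = zipf_pop(M, alpha)
    couts = [gamma * (1 - hit_lru_che(pop, c * M)) + c for c in grille]
    i = int(np.argmin(couts))
    return grille[i], couts[i]

# Exemple : taille de cache normalisée optimale pour quelques valeurs de Gamma
for gamma in [0.1, 0.5, 1.0, 5.0]:
    print(gamma, c1_optimal(M=20000, alpha=0.6, gamma=gamma))
```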
Plus la valeur Γ augmente, plus la solution optimale correspondrait `a un grand cache au premier niveau. Ces observations sont valables pour les deux valeurs de α comme le montre la figure 12.4. On remarque que le choix d’une taille de cache au premier niveau ´egale `a la taille du catalogue offre une solution optimale pour une valeur de Γ > 2 pour α = 0.6, et de Γ > 3.8 pour α = 0.8. Plus le cache est efficace (param`etre α plus grand ou politique86 12.2. ESTIMATION NUMERIQUE DES CO ´ UTS ˆ 0.1 1 10 0 0.2 0.4 0.6 0.8 1 Γ = 10 Γ = 5.0 Γ = 1.0 Γ = 0.5 Γ = 0.1 (a) Popularit´e Zipf α=0.6 0.1 1 10 0 0.2 0.4 0.6 0.8 1 Γ = 10 Γ = 5.0 Γ = 1.0 Γ = 0.5 Γ = 0.1 (b) Popularit´e Zipf α=0.8 Figure 12.3 – Coˆuts normalis´es en fonction des tailles de caches normalis´es au premier niveau 0 0.2 0.4 0.6 0.8 1 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 coptimal Γ α = 0.6 α = 0.8 (a) Popularit´es diff´erentes 0 0.2 0.4 0.6 0.8 1 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 coptimal Γ LRU(α = 0.6) LF U(α = 0.6) LRU(α = 0.8) LF U(α = 0.8) (b) Politiques de remplacement diff´erentes Figure 12.4 – Taille de cache normalis´ee optimale en fonction de Γ de remplacement plus efficace), moins il est utile de stocker des donn´ees au premier niveau de cache. Mais, mˆeme pour un cache LFU et un α = 0.8, la solution optimale correspond `a un stockage complet du catalogue `a partir de Γ = 5. La solution optimale actuelle pour le Γ nominal correspond `a une valeur de moins de 10 % du catalogue ; mais les valeurs des coˆuts sont si rapproch´ees pour Γ = 1 que le choix d’un c a une valeur ´egale au catalogue semble meilleur car l’utilisateur peut profiter d’une latence plus faible. La valeur de kc diminue de 40% chaque ann´ee1 , kb diminue de 20% chaque ann´ee, le 1http ://www.jcmit.com/memoryprice.htmCHAPITRE 12. COUT D’UNE HI ˆ ERARCHIE DE CACHES ´ A DEUX NIVEAUX ` 87 trafic T augmente de 40% chaque ann´ee2 et le catalogue augmente d’`a peu pr`es 50%. Le Γ tend alors `a augmenter au fil du temps, et peut devenir cinq fois plus grand que le Γ nominal dans 6 ans. Cependant, les valeurs des coˆuts varient selon la r´egion g´eographique 3 et des ´el´ements inattendus peuvent augmenter le prix des m´emoires. Le tradeoff reste quand mˆeme compliqu´e `a d´eterminer car il d´epend de l’emplacement g´eographique, et des lois de popularit´e. 12.2.2 Solution optimale en fonction du taux de hit global Nous avons consid´er´e la probabilit´e de hit global ´egale `a 1. Or, pour diff´erentes raisons, le fournisseur de contenu peut manquer de moyens pour payer le prix optimal. La solution consiste `a g´erer une partie du catalogue (ce qui correspond `a la partie la plus populaire) et de confier le stockage des autres contenus `a un serveur distant. Le nombre de caches au premier niveau ne change rien `a la probabilit´e de hit global, on peut utiliser la formule de Che `a deux niveaux de caches. Donc δ(c) = Γ(1 − h(c)) + c + ¯c/n). On obtient les r´esultats pr´esent´es dans la figure 12.5 avec des probabilit´es de hit global de 0.5 et 0.9. 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 cout c Γ = 0.05 Γ = 0.5 Γ = 2 hitg=0.9 hitg=0.5 (a) Loi Zipf(0.6) 0 0.2 0.4 0.6 0.8 1 1.2 1.4 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 cout c Γ = 0.05 Γ = 0.5 Γ = 2 hitg=0.9 hitg=0.5 (b) Loi Zipf(0.8) Figure 12.5 – Coˆuts pour diff´erentes valeurs de Γ et de probabilit´es de hit On remarque que la courbe des diff´erences de coˆuts pour une probabilit´e de hit de 50% co¨ıncide avec la courbe des coˆuts pour une probabilit´e de hit de 90%. 
Ceci s’explique 2http ://www.networkworld.com/news/tech/2012/041012-ethernet-alliance-258118.html ?page=1 3http ://www.zdnet.fr/actualites/le-prix-du-transit-ip-baisse-encore-partout-dans-le-monde- 39775196.htm88 12.3. EXEMPLE : COUTS DES TORRENTS ˆ peut-ˆetre par la valeur de n (n=100) qui implique que la valeur de ¯c/n reste petite et n’influence pas la valeur des coˆuts. Ceci peut mener `a nous demander `a quel niveau il faut placer les caches dans un r´eseau. 12.3 Exemple : coˆuts des torrents Un moteur de recherche comme mininova.org offre un ensemble de torrents. Dan et al. [61] ont analys´e des donn´ees torrents de cette source et partag´e leurs donn´ees avec nous. Un premier fichier de donn´ees contient 2.9 × 106 torrents actifs et donne le nombre de leechers au moment de la capture. Nous avons conserv´e 1.6 × 106 torrents correspondant `a au moins un leecher actif au moment de la capture. On consid`ere un torrent de taille s et un nombre de leechers instantan´e de l. En utilisant la loi de Little, et en consid´erant la dur´ee de t´el´echargement proportionnelle `a la taille s, la fr´equence d’arriv´ee du torrent est : λ = nombre moyen de torrents temps de t´el´echargement . La fr´equence d’arriv´ee des torrents est donc proportionnelle `a l/s. Un deuxi`eme fichier correspondant aux tailles des torrents nous a ´et´e fourni. La popularit´e de chaque chunk est ´egale `a la popularit´e de son torrent. La formule de Che est plus facile `a utiliser au niveau chunk qu’au niveau torrent car les torrents ont des tailles diff´erentes, alors que les chunks ont la mˆeme taille. La popularit´e des chunks des torrents d´eduite des traces est repr´esent´ee `a la figure 12.6. 1e-05 0.001 0.1 10 1000 100000 1 100 10000 1e+06 1e+08 popularity rank head (Zipf(.6)) body (Zipf(.8)) tail Figure 12.6 – Popularit´e des chunks pour les torrentsCHAPITRE 12. COUT D’UNE HI ˆ ERARCHIE DE CACHES ´ A DEUX NIVEAUX ` 89 0.01 0.1 1 10 100 0 0.2 0.4 0.6 0.8 1 .3 .5 .7 .8 .9 .95 .99 (a) coˆuts normalis´es pour un coˆut de bande passante lin´eaire 0.01 0.1 1 10 100 0 0.2 0.4 0.6 0.8 1 .3 .5 .7 .8 .9 .95 .99 (b) coˆut normalis´es pour un coˆut de bande passante non lin´eaire Figure 12.7 – Coˆuts normalis´es pour diff´erentes valeur de Γ Les courbes pr´esent´ees dans la figure 12.7 repr´esentent les coˆuts normalis´es dans le cas d’une probabilit´e de hit globale de 1 pour les torrents. La figure 12.7(b) repr´esente le cas d’un coˆut de bande passante non lin´eaire T .75kb, ce qui permet d’´evaluer l’impact d’´economie d’´echelle. On remarque que les observations sont les mˆemes : pour un Γ = 10, la solution optimale correspond `a un stockage complet au premier niveau ; de mˆeme pour Γ = 1 la courbe est presque plate, impliquant qu’un stockage au premier niveau semble plutˆot b´en´efique.Chapitre 13 Cooperation de caches ´ Un grand nombre de publications met en avant l’avantage qu’on peut tirer de la distribution de caches et de leur coop´eration. Mais ces publications ne prennent pas en compte le coˆut des ´echanges entre les caches et donc le coˆut de la bande passante qui est typiquement plus important que le coˆut de la m´emoire. Dans cette section, nous nous int´eressons `a l’´evaluation du tradeoff bande passante/ m´emoire, dans le cas de coop´eration de caches. 13.1 Load sharing Li et Simon [62] proposent la division du catalogue en P partitions. Chaque cache peut stocker les ´el´ements d’une partition de taille ´egale `a M/P o`u M est la taille du catalogue. 
Figure 13.1 – R´eseau de caches P=3 et S=12 Pour ´evaluer les coˆuts de cette architecture, on introduit un nouveau facteur k ′ b repr´esentant le coˆut unitaire des liens entre caches du premier niveau. Une proportion 90CHAPITRE 13. COOPERATION DE CACHES ´ 91 1/P des requˆetes arrivant `a chaque cache correspondent au sous-catalogue attribu´e au cache, les (P − 1)/P requˆetes restantes ´etant redirig´ees vers les autres caches. La consommation de bande passante au premier niveau est donc estim´ee `a : nk′ b P − 1 P . Le coˆut de la bande passante r´esultante du taux de miss au premier niveau est estim´ee `a : nkbmiss(C) o`u miss(C) est la probabilit´e de miss au premier niveau pour cette architecture. Elle correspond au taux de miss d’un cache LRU de taille C pour un catalogue de taille M/P. On pose c = CP/M la taille normalis´ee des caches pour cette architecture. On ´ecrit miss(c) = miss(CP/M) o`u miss(c) correspond `a la probabilit´e de miss d’un cache LRU de taille P C et un catalogue de taille M. Tout calcul fait : δLS(c) = δ(c) + (1 − 1 P )(Γk ′ b kb − c). Le partitionnement du catalogue est b´en´efique dans le cas o`u Γk ′ b kb < c. Si, par exemple, Γ = 4 il faut que k ′ b kb < c/4 ≤ 1/4. Or, il serait surprenant que le coˆut d’un lien entre caches de premier niveau soit 4 fois inf´erieur au coˆut d’un lien entre le premier et le deuxi`eme niveau de caches, ce qui sugg`ere que ce type de partage de charge n’est gu`ere utile. 13.2 Caches coop´eratifs Ni et Tsang [63] proposent un algorithme de caches coop´eratifs bas´e sur le load sharing. Chaque cache est responsable d’une partition du catalogue. Toute requˆete destin´ee `a un cache sera d’abord recherch´ee dans le cache d’accueil, avant de s’adresser au cache responsable le plus proche. On d´esigne q(i) la popularit´e normalis´ee de l’objet i. Pour un cache du premier niveau, la popularit´e de chaque chunk appartenant `a la partition dont il est responsable est q ′ (n) = q(n)(1 + (P − 1)e −q ′ (n)tc ). En adaptant la formule de Che, on trouve tc en utilisant la formule (13.1) : X N n=1 1 P (1 − e −q ′ (n)tc ) + (1 − 1 P )(1 − e −q(n)tc ) = C (13.1) On consid`ere un cache k au premier niveau. On note R la fonction d´eterminant l’identit´e de la partition du catalogue dont il est responsable : R(k) = l veut dire que le cache k est responsable de la partition Ml . On note Ak l’ensemble de caches au premier niveau sauf le cache k. On observe les ´echanges entre caches illustr´es par la figure 13.2.92 13.2. CACHES COOPERATIFS ´ cache niveau 2 objet i, i /∈ Ml,miss(i) objet i, i ∈ Ml,miss(i) cache k Autres caches Ak objet i objet i, i ∈ Ml,miss(i) Figure 13.2 – illustration des diff´erents ´echanges pour une hi´erarchie coop´erative En utilisant l’´equation (13.1) pour trouver tc, nous pouvons calculer la probabilit´e de miss au niveau du cache k pour tout objet i missk(i) =  exp(−q(i)tc) ;i /∈ Ml exp(−q ′ (i)tc) ;i ∈ Ml . Les caches g`erent le trafic arrivant des noeuds d’acc`es. On note par θ ′ la probabilit´e de hit global de ce trafic : θ ′ = X N n=1 q(n)  1 P (1 − e −q ′ (n)tc ) + (1 − 1 P )(1 − e −q(n)tc )  / X N n=1 q(n) En cas de miss local, les requˆetes seront transmises aux caches responsables de la partition dont l’objet en question fait partie. Le cache k r´ealise un taux de hit suppl´ementaire pour ces requˆetes. La popularit´e des objets appartenant `a ces requˆetes `a l’entr´ee du cache k est q ′ (n) et la probabilit´e de hit de chaque objet est 1−e −q ′ (i)tc . 
Ces objets arrivent avec un taux q(i)e −q(i)tc et ils arrivent de P − 1 caches. Chaque cache autre que k envoie uniquement une fraction 1/P des objets ayant subi un miss au cache k, c’est-`a-dire les objets appartenant au sous-catalogue Ml . La probabilit´e de hit global du trafic d´ebordant des caches apr`es un premier essai (cache local) est exprim´ee par : θ ′′ = X N n=1 q(n)(1 − 1 P )e −q(n)tc (1 − e −q ′ (n)tc )/ X N n=1 q(n). On compare `a la figure 13.3 les r´esultats obtenus par simulation et par calcul pour les probabilit´es de hit locales θ ′ et les probabilit´es de hit suppl´ementaires dˆu `a laCHAPITRE 13. COOPERATION DE CACHES ´ 93 coop´eration θ ′′, pour un catalogue de 105 objets, pour P ´egale `a 2, 5 et 10 et pour les deux lois Zipf(0.6) et Zipf(1.2). 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 hit local c calcul simulation (a) α = 0.6 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 hit local c calcul simulation (b) α = 1.2 0 0.1 0.2 0.3 0.4 0.5 0.6 0 0.2 0.4 0.6 0.8 1 hit de coop’eration c P = 2 P = 5 P = 10 simulation calcul (c) α = 0.6 0 0.1 0.2 0.3 0.4 0.5 0.6 0 0.2 0.4 0.6 0.8 1 hit de coop’eration c P = 2 P = 5 P = 10 simulation calcul (d) α = 1.2 Figure 13.3 – Les probabilit´es de hit locales et de coop´eration en fonction de la taille de cache normalis´ee On remarque que la probabilit´e de hit locale obtenue par calcul co¨ıncide avec celle obtenue par simulation, et que cette probabilit´e est ind´ependante de P. Pour les probabilit´es de hit de coop´eration, une l´eg`ere diff´erence entre simulation et calcul est `a noter pour P = 2. Cette diff´erence diminue au fur et `a mesure que P augmente. Pour P = 10, les valeurs de simulation et de calcul sont identiques. Ces r´esultats s’expliquent par la pr´ecision de l’approximation de Che. Appliqu´ee `a un deuxi`eme niveau de caches, on s’attend `a ce que l’erreur due `a l’hypoth`ese d’ind´ependance soit faible si le nombre de caches de premier niveau est assez grand (n > 5). On note θ la probabilit´e de hit global au premier niveau : θ = θ ′ + θ ′′. Pour un cache94 13.2. CACHES COOPERATIFS ´ de taille C, le coˆut global de la structure est : cout(C) = Ak′ b P − 1 P (1 − θ ′ ) + Akb(1 − θ) + nkmC. Cet algorithme est inspir´e du load-sharing et chaque cache ne peut d´epasser la taille de M/P. La taille normalis´ee du cache peut donc s’´ecrire c = CP/M. On trace la probabilit´e θ(c/P) en fonction de c pour diff´erentes valeurs de P pour la loi Zipf(0.6). Les r´esultats sont pr´esent´es `a la figure 13.2. 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 θ(c/P) c P = 1 P = 2 P = 5 P = 10 Figure 13.4 – Probabilit´e de hit global θ(c/P) On remarque que θ(c) > θP (c/P) pour P ≥ 2 o`u θ(c) correspond `a la probabilit´e de hit au premier niveau sans coop´eration (P = 1). On note ˜θ la probabilit´e de hit locale des objets qui ne font pas partie de la partition du cache. On a ˜θ(c) < θ′ (c) et θp(c) < θ(cp). Divisant par nkmM, le coˆut normalis´e peut s’exprimer : δcc(c) = Γ(1 − θP (c)) + Γk ′ b kb (1 − 1 P )(1 − ˜θ(c)) + c/P. Tout calcul fait : δcc(c) > δ(c) + (1 − 1 P )(Γk ′ b kb (1 − θ ′ (c)) − c). Pour savoir quelle architecture est la plus rentable, il faut donc ´evaluer la quantit´e Γ k ′ b kb (1 − θ ′ (c)) − c. On pose f(c) = Γk ′ b kb (1 − θ ′ (c)) − c. La fonction f(c) est minimale si θ ′ (c) et c sont maximales, ce qui correspond `a un stockage complet au premier niveau, quel que soitCHAPITRE 13. COOPERATION DE CACHES ´ 95 la valeur de k ′ b /kb et de Γ. 
La coop´eration est moins rentable qu’une hi´erarchie simple si la taille des caches de premier niveau est grande. La conclusion d´epend encore des coˆuts relatifs k ′ b et kb. Notons que, plus le coˆut de bande passante augmente relatif au coˆut des caches, plus il s’av`ere inutile de faire coop´erer les caches. 13.3 Routage orient´e contenu Notre ´etude est favorable `a un stockage complet `a l’Edge puisque le tradeoff entre bande passante et m´emoire est optimal dans le cas d’un stockage entier au premier niveau. Nos r´esultats sont soutenus par d’autres ´etudes telles que [59] et [5]. Cependant, une autre ´etude remet en question ces r´esultats, les jugeant insuffisants car ils comparent un r´eseau de caches non optimal avec un caching `a l’Edge ce qui est pour les auteurs simpliste. Effectivement, [44] met en doute les r´esultats constat´es du fait que la comparaison n’a pas ´et´e faite avec une hi´erarchie de caches optimale. Avec un couplage meta-caching/strat´egie de forwarding, l’´ecart entre une hi´erarchie de caches optimale et un stockage `a l’edge peut s’av´erer beaucoup plus important. Nous nous int´eressons `a l’influence que peut avoir l’utilisation d’une hi´erarchie de caches plus efficace que la hi´erarchie LRU utilisant SPF (shortest path first) comme politique de forwarding. Nous souhaitons utiliser les caches pour obtenir des taux de hit assez grands (plus de 90%). Dans ce cas, l’efficacit´e de la politique de m´eta-caching LCD tend vers celle d’une politique LCE. La seule question restant `a traiter est l’influence de la politique de forwarding sur les coˆuts. On consid`ere une hi´erarchie de caches LRU `a deux niveaux. On note n le nombre de caches au premier niveau. Les caches au premier niveau ont la mˆeme taille C et on consid`ere une demande sym´etrique de donn´ees. Ces caches sont reli´es `a un cache de niveau 2 stockant la totalit´e du catalogue. Nous ´etudions donc l’optimum dans le cas d’un taux de hit de 100%. Chaque cache au premier niveau est reli´e `a tous les autres caches par des liens et le coˆut de bande passante de chaque lien est kb. Le coˆut de bande passante des liens reliant le cache du premier niveau au cache du deuxi`eme niveau est k ′ b . On note θ ′ le taux de hit local dˆu au hit des objets demand´es par les utilisateurs et on note θ ′′ le taux de hit dˆu `a la coop´eration. Le taux de hit global au premier niveau est θ = θ ′ + θ ′′. En utilisant les simulations, on trace θ ′ et on le compare `a hit(c) le taux de hit au premier niveau d’une hi´erarchie SPF pour un catalogue de M = 104 objets. On remarque que θ ′ est ´egal au taux de hit d’un cache LRU simple, la valeur de n ne changant rien `a ce constat. Le coˆut de cette hi´erarchie peut donc ˆetre exprim´e par : cost = Akb(1 − θ(c)) + Ak′ b θ ′′ + nkmC. On normalise cette formule et on trouve : costN (c) = δ(c) + Aθ′′(k ′ b − kb).96 13.3. ROUTAGE ORIENTE CONTENU ´ 0 0.2 0.4 0.6 0.8 1 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 θ ′ c hit1(c) θ ′ (a) Zipf(0.6) 0 0.2 0.4 0.6 0.8 1 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 T auxdehit c hit1(c) θ ′ (b) Zipf(1.2) Figure 13.5 – Probabilit´e de hit locale θ ′ et hit(c) compar´es par simulation Puisque k ′ b < kb, costN (c) < δ(c). Donc en utilisant une politique de forwarding NRR, on r´ealise un coˆut inf´erieur. Nous allons ´evaluer les optima dans ce cas. 
La popularit´e de l’objet i `a l’entr´e de chaque cache au premier niveau peut ˆetre exprim´e par : q ′ (i) = q(i)+Pn−1 k=1(mn−l (1− m) l )/l tel que m est la probabilit´e de miss de i de popularit´e q(i) pour un cache LRU. La probabilit´e de hit locale d’un objet peut ˆetre exprim´ee par : hit(n) = 1 − exp(−q ′ (n) ∗ tc) Le tc est calcul´e en utilisant l’´equation modifi´ee de Che : PM i=1(1 − exp(−q ′ (i) ∗ tc) = C La probabilit´e de hit locale est exprim´ee par : θ ′ = PM i=1 q(i) ∗ (1 − exp(−q ′ (i) ∗ tc)) La probabilit´e de hit de coop´eration est exprim´ee par :θ ′′ = PM i=1 q(i) ∗ exp(−q ′ (i) ∗ tc) ∗ (1 − exp(−(n − 1) ∗ tc ∗ q ′ (i))) On compare les r´esultats de calcul avec les r´esultats de simulation pour un catalogue de 104 objets, n prenant les valeurs de 10,20,40 et 100. Les calculs sont effectu´ees pour une loi Zipf(0.6) mais les conclusions sont vrai pour tout α. Les r´esultats du hit local et du hit de coop´eration sont repr´esent´es dans les graphs 13.6(a) et 13.6(b) On remarque que les hit locaux calcul´es par la m´ethode de Che modifi´ee co¨ıncident exactement avec les probabilit´es de hit r´ecup´er´es des simulations, et ces taux de hit sont ind´ependants de la valeur de n. D’autre part, on note une diff´erence entre le taux de hit de coop´eration r´ecup´er´e des calculs, et le taux de hit obtenu par simulation pour n = 5 et n = 10, mais cette diff´erence disparait pour des tailles de caches grands. Pour n = 100 aucune diff´erence est `a noter entre les deux m´ethodes calcul et simulation. Le cout d’une hi´erarchie NRR est estim´ee `a : cost = n∗km ∗C +A∗kb ∗(1−θ)+A∗k ′ b ∗θ ′′CHAPITRE 13. COOPERATION DE CACHES ´ 97 0 0.2 0.4 0.6 0.8 1 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 hit local c simulation calcul (a) hit local 0 0.2 0.4 0.6 0.8 1 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 hit de coope ′ration c simulation calcul n = 5 n = 10 n = 50 n = 100 (b) hit coop´eration Figure 13.6 – Probabilit´e de hit compar´es par simulation et calcul le cout normalis´e est estim´e `a : costn = c + Γ(1 − θ) + Γ ∗ ( k ′ b kb ) ∗ θ ′′ On trace la taille de cache optimale en fonction de Γ et k ′ b /kb On remarque que pour les valeurs futures 0 1 2 3 4 5 Γ 0 0.2 0.4 0.6 0.8 1 k ′ b kb 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Figure 13.7 – Taille de cache normalis´ee optimale en fonction de Γ et k ′ b /kb du Γ nominal, et pour un k ′ b /kb=0.8 la solution optimale consiste `a tout stocker au98 13.3. ROUTAGE ORIENTE CONTENU ´ premier niveau de caches.Chapitre 14 Hierarchies alternatives ´ 14.1 Coˆut d’une hi´erarchie de caches `a plusieurs niveaux 14.1.1 Evaluation des coˆuts Sem Borst et al. [52] affirment qu’il n’y a aucun int´erˆet `a distribuer des caches sur plus de deux niveaux. Cette affirmation a attir´e notre curiosit´e. Pour v´erifier ceci, on consid`ere une hi´erarchie de caches `a 5 niveaux. L’approximation de Che ´etant bonne pour tous les niveaux de caches, on l’utilise pour calculer la probabilit´e de hit globale de la structure ainsi que la probabilit´e de hit `a chaque niveau. Pour simplifier notre analyse, on consid`ere le cas d’une hi´erarchie homog`ene et sym´etrique en arbre `a nblevel niveaux. On se restreint dans cette ´etude `a 5 niveaux de caches (nblevel = 5). Le cache de niveau 5 est la racine de l’arbre et n’a pas de parent. Chaque noeud a le mˆeme nombre nbf de fils. A chaque niveau i de l’arbre, les caches ont la mˆeme taille Ti . On fixe nbf = 5. 
On considère 3 cas :
– une répartition équitable de mémoire à tous les niveaux, T_i = T_{i+1} pour tout i,
– une répartition favorisant les caches aux premiers niveaux, T_{i+1} = T_i / 2,
– une répartition favorisant les caches supérieurs, T_{i+1} / 2 = T_i.

Le catalogue a une taille M. Notre objectif est de comparer les différents cas en termes de coûts de mémoire et de bande passante tout en préservant la même valeur de probabilité de hit globale.

Figure 14.1 – Hiérarchie de caches à 5 niveaux

On note pmiss_i la probabilité de miss au niveau i. Le coût de la structure est :

cout = A k_b Σ_{k=1}^{nblevel−1} Π_{i=1}^{k} pmiss_i + k_m Σ_{i=1}^{nblevel} T_i (nb_f)^{nblevel−i}.

On note cout_n le coût normalisé, obtenu en divisant le coût par k_m M n, n étant le nombre total de caches. On pose Γ = A k_b / (k_m M n). Alors

cout_n = Γ (Σ_{k=1}^{nblevel−1} Π_{i=1}^{k} pmiss_i) + Σ_{i=1}^{nblevel} t_i (nb_f)^{nblevel−i} / n.

La figure 14.2 représente les résultats obtenus pour différentes valeurs de Γ. La valeur nominale de Γ est de 0.12. Le stockage favorisant les premiers niveaux est optimal jusqu'à un taux de hit global de 93 %. Le stockage favorisant les premiers niveaux est toujours optimal si Γ ≥ 0.16. Il est probable que cette valeur soit bientôt atteinte puisque Γ est en augmentation permanente du fait de l'augmentation rapide de la demande, entraînant une augmentation de la valeur du débit. Actuellement, les coûts de la mémoire baissent plus vite que les coûts de la bande passante et Γ augmente de plus en plus, dépassant actuellement la valeur de 0.12. Il est vrai que la popularité ne suit pas une loi Zipf(0.6) mais plutôt une loi composée de Zipf(0.6) et de Zipf(0.8) ; mais dans tous les cas, la tendance future est le stockage entier au premier niveau, en évitant la maintenance excessive de la bande passante entre routeurs. La répartition des caches n'est pas forcément gagnante pour un opérateur ou un fournisseur de contenu puisqu'il faut prendre en compte les coûts élevés de la bande passante par rapport aux coûts de la mémoire.

Figure 14.2 – Coûts normalisés pour différentes valeurs de Γ ((a) Γ = 0.01, (b) Γ = 0.08, (c) Γ = 0.12, (d) Γ = 0.15)

14.1.2 Coopération de caches

On restreint notre étude dans cette section au cas d'une répartition équitable de mémoire, comme c'est le cas pour CCN. Notre objectif est de déterminer le gain en mémoire et en bande passante en comparant le cas d'une coopération des caches et le cas d'une non-coopération des caches. Il est clair que le gain en mémoire qu'on peut obtenir avec une coopération est très intéressant, mais par ailleurs le gain en bande passante décroît. On choisit la coopération par répartition du catalogue, c'est-à-dire que chaque niveau de cache est destiné à stocker 1/5 du catalogue. Ainsi, on stocke le chunk numéro i dans le niveau i (mod 5).
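Pour fixer les idées sur cette coopération par répartition du catalogue, voici une esquisse Python (hypothétique, supposant NumPy et SciPy disponibles) qui affecte le chunk i au niveau i mod 5, puis estime par l'approximation de Che la probabilité de hit d'un cache LRU sur le sous-catalogue qui lui est affecté ; l'agrégation des demandes dans le sous-arbre est ici négligée et la taille de cache choisie est arbitraire.

```python
import numpy as np
from scipy.optimize import brentq


def zipf_pop(M, alpha):
    """Popularités Zipf(alpha) normalisées pour un catalogue de M objets."""
    q = 1.0 / np.arange(1, M + 1) ** alpha
    return q / q.sum()


def che_hit(q, C):
    """Approximation de Che : taux de hit d'un cache LRU de taille C,
    pour des demandes indépendantes de popularités q (somme = 1)."""
    f = lambda tc: np.sum(1.0 - np.exp(-q * tc)) - C
    tc = brentq(f, 1e-9, 1e12)
    return np.sum(q * (1.0 - np.exp(-q * tc)))


# Répartition du catalogue : le chunk i est affecté au niveau i mod 5.
M, alpha, nblevel = 10**4, 0.6, 5
q = zipf_pop(M, alpha)
niveaux = np.arange(M) % nblevel
for i in range(nblevel):
    q_i = q[niveaux == i]
    # Hit d'un cache de 200 objets (valeur arbitraire) sur ce sous-catalogue,
    # en conditionnant sur le fait que la demande tombe dans ce sous-catalogue.
    q_cond = q_i / q_i.sum()
    print(i, round(q_i.sum(), 4), round(che_hit(q_cond, 200), 4))
```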
Puisque les niveaux de caches traitent des catalogues disjoints et que les chunks suivent la même loi de popularité que les objets, il est simple de trouver la probabilité de hit des caches à chaque niveau.

La quantité de mémoire utilisée est : Σ_{i=1}^{nblevel} C_1 (nb_f)^{nblevel−i}. On note hit_i(c_1) la probabilité de hit d'un cache de taille normalisée c_1 au niveau i, pour une hiérarchie de caches père-fils et un catalogue de taille M. La bande passante utilisée pour la hiérarchie sans coopération est :

Σ_{i=1}^{nblevel−1} (nb_f)^{nblevel−i} (1 − hit_i(c_1)).

Figure 14.3 – Comparaison entre caches coopératifs et non coopératifs ((a) bande passante, (b) mémoire)

On remarque que les caches coopératifs utilisent moins de mémoire pour obtenir la même probabilité de hit globale. Cependant, la quantité de bande passante consommée par les caches coopératifs est plus importante que celle consommée par les caches non coopératifs.

14.2 Coûts et politiques de remplacement

14.2.1 Politique LFU

On considère une hiérarchie en arbre de caches LFU, les objets étant stockés du niveau bas au niveau supérieur suivant l'ordre décroissant de leur popularité. La politique de forwarding est une politique de recherche père/fils puisque le fils cherche l'objet en suivant le chemin le plus court menant au serveur d'origine, qui est placé au niveau supérieur de la hiérarchie. On considère alors un catalogue de M objets classés selon un ordre décroissant de popularité : o_1, o_2, o_3, ..., o_M. La taille des caches au niveau i est C_i. La figure 14.4 représente la hiérarchie étudiée et le tableau suivant présente les paramètres.

Figure 14.4 – Réseau de caches avec une politique LFU

M : taille du catalogue
C_i : taille du cache au niveau i
c_i : taille normalisée du cache au niveau i, c_i = C_i / M
Pm_i : probabilité de miss de la hiérarchie jusqu'au niveau i
k_b : coût unitaire de la bande passante
k_m : coût unitaire de la mémoire
A : débit d'entrée à la hiérarchie
nb_i : nombre de caches au niveau i
Pm_cible : probabilité de miss globale ciblée

La probabilité de miss au premier niveau Pm_1 peut être exprimée pour LFU par :

Pm_1 = (M^{1−α} − C_1^{1−α}) / M^{1−α}   (14.1)

ou encore Pm_1 = 1 − c_1^{1−α}. D'une manière générale, la probabilité de miss de la hiérarchie de caches constituée des niveaux 1 à i est exprimée par :

Pm_i = 1 − (Σ_{k=1}^{i} c_k)^{1−α}.   (14.2)

On suppose que la probabilité de miss globale de la hiérarchie est fixée à une valeur cible :

Pm_g = 1 − (Σ_{k=1}^{n} c_k)^{1−α}, d'où c_n = (1 − Pm_g)^{1/(1−α)} − Σ_{k=1}^{n−1} c_k.   (14.3)

La fonction de coût peut être exprimée par :

cout = A k_b (Σ_{k=1}^{n−1} Pm_k) + Σ_{k=1}^{n−1} (nb_k − nb_n) C_k k_m.

On pose Γ' = A k_b / (M k_m). On divise la fonction de coût par la constante k_m M et on obtient :

cout_N = Σ_{k=1}^{n−1} Γ' (1 − (Σ_{i=1}^{k} c_i)^{1−α}) + Σ_{i=1}^{n−1} (nb_i − nb_n) c_i.

L'optimum vérifie ∂cout_N/∂c_1 = 0 et ∂cout_N/∂c_2 = 0 ; pour une probabilité de hit globale de 1 :

c_1 = Min((Γ' (1 − α) / (nb_1 − nb_2))^{1/α}, 1).

En posant Γ = Γ' / (nb_1 − nb_2), on obtient :

c_1 = Min(1, (Γ (1 − α))^{1/α}).   (14.4)
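La forme close (14.4) peut se vérifier numériquement ; l'esquisse suivante (valeurs de paramètres arbitraires, NumPy supposé disponible) minimise sur une grille le coût normalisé d'une hiérarchie LFU à deux niveaux et compare l'optimum obtenu à la formule.

```python
import numpy as np


def cout_normalise_lfu(c, Gamma_p, alpha, nb):
    """Coût normalisé d'une hiérarchie LFU (section 14.2.1).

    c  : tailles normalisées (c_1, ..., c_{n-1}) des niveaux intermédiaires
    nb : nombres de caches (nb_1, ..., nb_n), le dernier niveau complétant le catalogue
    """
    cum = np.cumsum(c)
    miss = 1.0 - cum ** (1.0 - alpha)          # Pm_k = 1 - (somme des c_i)^(1-alpha)
    memoire = np.sum((np.array(nb[:-1]) - nb[-1]) * c)
    return Gamma_p * np.sum(miss) + memoire


# Vérification numérique (valeurs arbitraires) de la formule (14.4), cas à deux niveaux :
alpha, nb = 0.8, [100, 1]                      # nb_1 = 100 caches d'accès, nb_2 = 1
Gamma_p = 2.0 * (nb[0] - nb[1])                # de sorte que Gamma = Gamma'/(nb_1 - nb_2) = 2
grille = np.linspace(1e-3, 1.0, 2000)
couts = [cout_normalise_lfu(np.array([c1]), Gamma_p, alpha, nb) for c1 in grille]
c1_numerique = grille[int(np.argmin(couts))]
c1_theorique = min(1.0, (2.0 * (1.0 - alpha)) ** (1.0 / alpha))
print(c1_numerique, c1_theorique)              # les deux valeurs doivent coïncider
```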
À la figure 14.5, on trace la valeur de l'optimum en fonction de α.

Figure 14.5 – Taille de cache optimale en fonction de α (Γ = 1, 2, 3, 5)

On remarque que plus Γ augmente, plus l'optimum correspond à un stockage au premier niveau de caches. Pour α = 0.8, à partir de Γ = 5, l'optimum correspond à un stockage entier à l'edge. Nous nous intéressons de plus à la différence de coûts entre LRU et LFU ; les résultats sont représentés sur la figure 14.6. On remarque que la différence de coûts entre les politiques LRU et LFU diminue quand la taille des caches augmente. Il n'est donc pas utile d'implémenter une politique matériellement coûteuse comme LFU, car LRU suffit.

Figure 14.6 – Différence entre coûts LRU et LFU en % ((a) α = 0.6, (b) α = 1.2)

14.2.2 Politique Random

La politique Random est une concurrente forte de LRU : son implémentation matérielle est simple et ses performances ne s'éloignent pas de celles de LRU. Dans cette section, nous étudions le tradeoff de cette politique. En utilisant la même équation de coûts que pour la hiérarchie LRU, le coût d'une hiérarchie de caches Random est exprimé par :

cost_N = Γ (1 − h) + c,

où h est la probabilité de hit globale de la hiérarchie et c est la taille de cache normalisée au premier niveau. Avec la politique Random, nous traçons à la figure 14.7 la valeur de l'optimum en fonction de Γ pour différentes valeurs de α. Nous distinguons les cas α < 1 et α > 1.

Figure 14.7 – Tailles de caches optimales en fonction de Γ ((a) α < 1, (b) α > 1)

On remarque que l'optimum correspond à un stockage complet à l'edge pour des arrivées suivant la loi Zipf(0.6) à partir d'une valeur de Γ = 1.5. On conclut que même pour une politique Random, et compte tenu des valeurs de Γ selon nos estimations, un stockage complet à l'edge correspond à la solution optimale. Par ailleurs, quand la taille des caches est proche de la taille du catalogue, les probabilités de hit LRU et Random sont très proches. L'utilisation de la politique Random devient alors intéressante, car cette dernière est moins coûteuse matériellement que LRU.

Chapitre 15
Conclusion

Dans cette partie, nous évaluons le coût d'une hiérarchie de caches en prenant en compte le coût de la mémoire et le coût de la bande passante. Nous utilisons la formule de Che pour évaluer numériquement ce tradeoff. Le coût optimal dépend de la valeur du rapport Γ = k_b A / (M k_m n) ainsi que de la loi de popularité des objets. Nos résultats suggèrent que dans un futur proche, l'optimum correspondra à un stockage complet à l'edge. Mais cette étude étant réalisée avec une fonction de coût linéaire, et avec des estimations non exactes des facteurs impactant les coûts, on ne peut la considérer que comme une première évaluation. Les opérateurs possédant plus de détails et de précisions pourraient adapter cette étude en intégrant des fonctions de coûts réelles.
Néanmoins, gérer un grand cache est plus compliqué que gérer un petit cache, et les performances et la rapidité de réponse d'un cache dépendent de sa taille. Dans ce cas, chaque cache à l'edge peut être lui-même distribué localement en un ensemble de caches. L'opérateur pourrait, à titre d'exemple, dédier un ou plusieurs caches de taille maximale à chaque fournisseur de contenu. Nous pensons que les fonctions de stockage et de gestion de contenu sont indépendantes : l'opérateur ne peut pas, en plus des fonctions de routage, fournir des réponses aux requêtes des utilisateurs, vérifier les données (en supprimant les données illégales par exemple), mettre à jour une donnée, etc. Le fait de centraliser les données permet de réaliser rapidement des opérations de mise à jour sur les données.

Conclusion générale

L'Internet devient centré sur les contenus, 90 % des échanges sur Internet étant des consultations de sites Web, des téléchargements de fichiers, ou des streamings audio ou vidéo. La proposition CCN de Jacobson et al. vient donc au bon moment et suscite un grand intérêt. L'objectif de cette thèse était d'ajouter à la proposition CCN des fonctionnalités de gestion du trafic. En effet, le protocole de transport proposé au départ n'offrait pas un bon débit, puisque la détection de rejet se basait uniquement sur le timeout. D'autre part, le fait que les paquets Data suivent le chemin inverse des paquets Interest ne suffit pas à assurer un partage équitable de la bande passante et la protection des flux sensibles. Nous avons proposé une solution dite « flow aware networking », inspirée de travaux antérieurs sur IP où cette manière de gérer le trafic par flot a prouvé son efficacité. Nous proposons également l'utilisation de la détection rapide de congestion pour rendre plus efficaces les protocoles de transport. L'Interest Discard est un nouveau mécanisme conçu spécialement pour les réseaux CCN, permettant de limiter les rejets de paquets Data en supprimant sélectivement des paquets Interest en cas de congestion.

Avec les réseaux orientés contenus, on cherche les données dans n'importe quelle destination possédant le contenu. Dans ce contexte, les protocoles multipaths semblent prometteurs. Nous avons observé cependant que le téléchargement des données depuis des sources multiples « pollue » les caches et détériore ainsi la probabilité de hit globale de l'architecture. Utiliser les chemins les plus courts pour tous les flux, tant que la charge des liens ne dépasse pas un seuil maximal, s'avère suffisant. Les chemins longs ne doivent être utilisés que lorsque leur charge est faible, sauf bien entendu en cas de panne au niveau du chemin le plus court.

Les caches sont des acteurs majeurs dans la gestion du trafic et l'étude de leurs performances est essentielle. Il est nécessaire notamment d'évaluer la manière dont devraient être répartis les contenus dans une hiérarchie de caches. La simulation étant coûteuse, puisque toute simulation d'un catalogue très volumineux consomme beaucoup de temps, nous avons utilisé des calculs approchés à l'aide de la formule de Che. Cette formule donne le taux de hit d'un cache de taille donnée sous l'hypothèse d'indépendance entre les arrivées de requêtes. Elle s'avère très précise et facile à utiliser, étant valable pour toute loi de popularité et toute taille de cache.
L'utilisation de cette formule peut s'étendre à deux ou plusieurs niveaux de caches, à condition que le nombre de caches au niveau inférieur soit grand (supérieur ou égal à 5, d'après nos études) afin de garantir l'indépendance des requêtes. En utilisant ce modèle mathématique, et avec des popularités et des tailles de catalogues s'approchant de la réalité, nous avons pu étudier les performances d'une hiérarchie de caches LRU.

Dans un article récent [59], Fayazbakhsh et al. se demandent s'il est vraiment nécessaire de distribuer les caches dans une hiérarchie. D'autre part, les CDN rencontrent un succès énorme et sont opérationnels depuis des années. Pour un opérateur, l'enjeu principal dans le choix d'une architecture orientée contenu est le coût et la difficulté de mise en place. Au niveau de la difficulté de mise en place, les CDN sont sans contestation mieux placés puisqu'ils sont déjà opérationnels. Le savoir-faire dans ce domaine est florissant, contrairement aux ICN qui manquent toujours d'une solution claire pour le routage, pour l'identification des objets dans le réseau, ainsi que pour le changement des fonctionnalités des routeurs nécessaire au stockage. Au niveau des coûts, nous ne pouvons estimer précisément laquelle des architectures est la moins coûteuse. Il est également difficile d'estimer tous les coûts possibles et d'avoir un modèle de coût fiable s'approchant des modèles utilisés par les opérateurs. Ne pouvant franchir la barrière de la confidentialité des données des opérateurs, nous nous contentons d'estimer les coûts en prenant en compte les coûts de la mémoire et de la bande passante. Nous adoptons un simple modèle de coût linéaire permettant une comparaison chiffrée, quoique grossière, des différentes options. Nous considérons qu'une hiérarchie est meilleure qu'une autre si le coût global, incluant le coût des caches et le coût de la bande passante, est plus faible. Notre objectif revient à trouver un tradeoff optimal, puisqu'une hiérarchie distribuée est coûteuse en mémoire mais ne coûte rien en bande passante, alors qu'une hiérarchie centralisée coûte moins cher en mémoire mais est plus coûteuse en bande passante.

Suite aux résultats de nos études, nous recommandons un stockage presque complet à l'edge. Cette solution paraît optimale, surtout dans un avenir proche, car le coût des mémoires baisse beaucoup plus vite que celui de la bande passante. D'autres études remettent en question cette conclusion, la jugeant hâtive puisque certaines politiques de meta-caching et de forwarding améliorent la probabilité de hit des caches au premier niveau et réduisent donc le coût de la bande passante. Nous avons approfondi l'étude de certains cas, dont celui des caches coopératifs avec une politique de forwarding NRR (Nearest Replica Routing) ; dans ce cas, pour une hiérarchie à deux niveaux, l'optimum correspond bien à un stockage à l'edge. Borst et al. [52] assurent qu'il n'y a aucun avantage à distribuer les contenus sur plusieurs niveaux de caches, mais cette affirmation n'est pas accompagnée de preuve. Nous avons donc estimé les coûts pour une hiérarchie à plusieurs niveaux avec politique LRU. Les constats restent les mêmes : il est plus rentable à moyen terme d'utiliser de grands caches à l'edge.
On devrait poursuivre cette évaluation avec, par exemple, une politique de remplacement LCD et un routage NRR, mais nos résultats préliminaires mettent sérieusement en question la notion CCN consistant à mettre des caches dans tous les routeurs. Nos résultats s'accordent avec ceux de Fayazbakhsh et al. [59], même si les modèles utilisés sont très différents.

Un autre problème posé par CCN est la difficulté de gestion des données dans les caches. Si un fournisseur de données souhaite supprimer un objet, il est difficile de parcourir toute la hiérarchie pour supprimer l'objet ou le mettre à jour. Une hiérarchie avec peu de niveaux est plus facile à gérer. Compte tenu des doutes concernant le coût élevé de CCN, de la difficulté de mise en œuvre nécessitant des modifications majeures, de la nécessité de mettre en place une stratégie de gestion des noms d'objets et de leur authentification, et de la difficulté à gérer les données sur des serveurs distribués, il nous paraît préférable de stocker tous les objets à l'edge, au plus proche des usagers. De grands caches à ce niveau pourraient être spécialisés par fournisseur de contenus pour faciliter la gestion. Chaque objet s'identifierait d'abord par son fournisseur, qui a la responsabilité d'assurer l'unicité de l'objet, son authentification, ainsi que la sécurité de ses objets. L'opérateur peut prendre en charge la fonctionnalité de caching à l'edge en accord avec les fournisseurs de contenus et attribuer à chaque fournisseur un ou plusieurs caches. L'opérateur peut devenir lui aussi fournisseur de contenus. Ceci peut bien sûr être conclu à travers des accords commerciaux entre les différents acteurs. La fonctionnalité de l'Internet serait alors surtout localisée à l'edge du réseau de l'opérateur. Certaines fonctionnalités proposées dans CCN peuvent bien entendu être utiles dans cette architecture, comme, par exemple, l'utilisation des noms d'objets au lieu des adresses IP. Tout utilisateur demanderait un objet avec le nom du fournisseur suivi du nom d'objet, et c'est le routeur d'accès qui, soit enverrait la donnée demandée à partir de ses caches à l'edge, soit redirigerait la demande vers le fournisseur de contenus.

Pour les travaux futurs, nous souhaitons approfondir l'étude du tradeoff mémoire/bande passante en utilisant des modèles de coûts plus réalistes, et pour des hiérarchies optimales NRR/LCD ou des caches coopératifs sur plusieurs niveaux. Nous souhaitons aussi compléter notre étude sur les multipaths et leurs performances, et analyser l'impact des caches et de la manière de les gérer sur les performances des multipaths. Nous souhaitons également adapter la formule de Che à plusieurs cas, notamment au cas d'une hiérarchie LCD à plusieurs niveaux de caches. Il est aussi important d'évaluer l'évolution des popularités des objets et de leurs tailles.

Références

[1] T. Koponen, M. Chawla, G.-B. Chun, A. Ermolinskiy, H. Kim, S. Shenker, and I. Stoica, “A data-oriented (and beyond) network architecture,” ACM SIGCOMM Computer Communication Review, vol. 37, no. 4, pp. 181–192, 2007.
[2] V. Jacobson, D. Smetters, J. Thornton, M. Plass, N. Briggs, and R. Braynard, “Networking named content,” in CoNEXT 2009, 2009.
[3] B. Ahlgren, M. D'Ambrosio, M. Marchisio, I. Marsh, C. Dannewitz, B. Ohlman, K. Pentikousis, O. Strandberg, R. Rembarz, and V. Vercellone, “Design considerations for a network of information,” in Proceedings of the 2008 ACM CoNEXT Conference, 2008.
[4] “Publish-subscribe internet routing paradigm.” http://psirp.hiit.fi/. [5] A. Ghodsi, S. Shenker, T. Koponen, A. Singla, B. Raghavan, and J. Wilcox, “Information-centric networking : seeing the forest for the trees,” in Proceedings of HotNets-X, pp. 1 :1–1 :6, 2011. [6] S. Fdida and N. Rozhnova, “An effective hop-by-hop interest shaping mechanism for ccn communications,” in in IEEE NOMEN Workshop, co-located with INFOCOM, 2012. [7] G. Carofiglio, M. Gallo, and L. Muscariello, “Joint hop-by-hop and receiverdriven interest control protocol for content-centric networks,” ACM SIGCOMM Workshop on Information Centric Networking (ICN’12), 2012. [8] G. Carofiglio, M. Gallo, and L. Muscariello, “Icp : Design and evaluation of an interest control protocol for content-centric networking,” IEEE NOMEN Workshop, co-located with INFOCOM, 2012. [9] D. Rossi and G. Rossini, “Caching performance of content centric networks under multi-path routing (and more).” Technical report, Telecom ParisTech, 2011. [10] P. R. Jelenkovic, “Approximation of the move-to-front search cost distribution and least-recently-used caching fault probabilities,” Annals of Applied Probability, vol. 9, no. 2, pp. 430–464, 1999. 111112 REF´ ERENCES ´ [11] P. Gill, M. Arlitt, Z. Li, and A. Mahanti, “Youtube traffic characterization : a view from the edge,” in Proceedings of the 7th ACM SIGCOMM conference on Internet measurement, IMC ’07, (New York, NY, USA), pp. 15–28, ACM, 2007. [12] K.Gummadi, R. S.saroiu, S.Gribble, A.Levy, and J.Zahorjan., “Measurement, modeling and analysis of a peer-to-peer file shraing workload.,” in SOSP’03, 2003. [13] M. Cha, H. Kwak, P. Rodriguez, Y.-Y. Ahn, and S. Moon, “I tube, you tube, everybody tubes : analyzing the world’s largest user generated content video system,” in Proceedings of the 7th ACM SIGCOMM conference on Internet measurement, IMC ’07, (New York, NY, USA), pp. 1–14, ACM, 2007. [14] G. Appenzeller, I. Kesslassy, and N. McKeown, “Sizing router buffers,” in SIGCOMM 2004, 2004. [15] J.Nagle, “On packet switches with infinite storage,” in RFC 970, 1985. [16] S. Oueslati and J. Roberts, “A new direction for quality of service : Flow aware networking,” in NGI 2005, 2005. [17] R. Pan, L. Breslau, B. Prabhakar, and S. Shenker, “Approximate fair dropping for variable length packets,” ACM Computer Communications Review, vol. 33, no. 2, pp. 23–39, 2003. [18] B. Suter, T. Lakshman, D. Stiliadis, and A. Choudhury, “Buffer management schemes for supporting TCP in gigabit routers with per-flow queueing,” IEEE JSAC, vol. 17, no. 6, pp. 1159–1169, 1999. [19] A. Kortebi, L. Muscariello, S. Oueslati, and J. Roberts, “Evaluating the number of active flows in a scheduler realizing fair statistical bandwidth sharing,” in ACM Sigmetrics, 2005. [20] S. Ben Fredj, T. Bonald, A. Proutiere, G. R´egni´e, and J. W. Roberts, “Statistical bandwidth sharing : a study of congestion at flow level,” SIGCOMM Comput. Commun. Rev., vol. 31, no. 4, 2001. [21] B. Briscoe, “Flow rate fairness : Dismantling a religion,” SIGCOMM Comput. Commun. Rev., vol. 37, no. 2, pp. 63–74, 2007. [22] M. Shreedhar and G. Varghese, “Efficient fair queueing using deficit round robin,” SIGCOMM Comput. Commun. Rev., vol. 25, pp. 231–242, October 1995. [23] T. Bonald, M. Feuillet, and A. Prouti´ere, “Is the ”law of the jungle” sustainable for the internet ?,” in INFOCOM, 2009. [24] P. Key and L. Massouli´e, “Fluid models of integrated traffic and multipath routing,” Queueing Syst. Theory Appl., vol. 53, pp. 85–98, June 2006. [25] D. Wischik, C. 
Raiciu, A. Greenhalgh, and M. Handley, “Design, implementation and evaluation of congestion control for multipath tcp,” in NSDI’11 Proceedings of the 8th USENIX conference on Networked systems design and implementation, 2011.REF´ ERENCES ´ 113 [26] C. Raiciu, D. Wischik, and M. Handley, “Practical congestion control for multipath transport protocols,” in UCL Technical Report, 2010. [27] G. Carofiglio, M. Gallo, L. Muscariello, and M. Papalini, “Multipath congestion control in content-centric networks,” IEEE INFOCOM Workshop on emerging design choices in name oriented networking, 2013. [28] P. Viotti, “Caching and transport protocols performance in content-centric networks.” Eurocom Master Thesis, 2010. [29] Cisco, “Cisco visual networking index : Forecast and methodology, 2010-2015.” White paper, 2011. [30] A. Mahanti, C. Williamson, and D. Eager, “Traffic analysis of a web proxy caching hierarchy,” IEEE Network, pp. 16–23, May/June 2000. [31] J.Zhou, Y. Li, K. Adhikari, and Z.-L. Zhang, “Counting youtube videos via random prefix sampling,” in Proceedings of IMC’07, 2011. [32] L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenke, “Web caching and zipflike distributions : evidence and implications,” INFOCOM ’99. Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings. IEEE, 1999. [33] Y. Carlinet, B. Kauffmann, P. Olivier, and A. Simonian, “Trace-based analysis for caching multimedia services.” Orange labs technical report, 2011. [34] H. Yu, D. Zheng, B. Y. Zhao, and W. Zheng, “Understanding user behavior in large-scale video-on-demand systems,” SIGOPS Oper. Syst. Rev., vol. 40, pp. 333–344, April 2006. [35] P. Jelenkovic, X. Kang, and A. Radovanovic, “Near optimality of the discrete persistent caching algorithm,” Discrete Mathematics and Theoretical Computer Science, pp. 201–222, 2005. [36] G. Carofiglio, M. Gallo, and L. Muscariello, “Bandwidth and storage sharing performance in information centric networking,” in ACM Sigcomm workshop on ICN, 2011. [37] E.Rosensweig, J. Kurose, and D.Towsley, “Approximate models for general cache networks,” in Infocom, 2010. [38] A. Dan and D.Towsley, “An approximate analysis of the lru and fifo buffer replacement schemes,” in SIGMETRICS, 1990. [39] H. Che, Y. Tung, and Z. Wang, “Hierarchical web caching systems : modeling, design and experimental results,” IEEE JSAC, vol. 20, no. 7, pp. 1305–1314, 2002. [40] C. Fricker, P. Robert, and J. Roberts, “A versatile and accurate approximation for lru cache performance,” in Proceedings of ITC 24, 2012. [41] M. Gallo, B. Kauffmann, L. Muscariello, A. Simonian, and C. Tanguy, “Performance evaluation of the random replacement policy for networks of caches,” ASIGMETRICS ’12, 2012.114 REF´ ERENCES ´ [42] N. Laoutaris, S. Syntila, and I. Stavrakakis, “Meta algorithms for hierarchical web caches,” in IEEE ICPCC, 2004. [43] I. Psaras, W. K. Chai, and G. Pavlou, “Probabilistic in-network caching for information-centric networks,” in Proceedings ICN ’12, (New York, NY, USA), pp. 55–60, ACM, 2012. [44] G. Rossini and D. Rossi, “Coupling caching and forwarding : Benefits, analysis, and implementation,” tech. rep., telecom-paristech. [45] K. Cho, M. Lee, K. Park, T. Kwonand, Y. Choi, and S. Pack, “Wave : Popularitybased and collaborative in-network caching for content- oriented networks,,” in IEEE INFOCOM, NOMEN, 2012. [46] W. Chai, D. He, I. Psaras, and G. Pavlou, “Cache less for more in informationcentric networks,” in IFIP NETWORKING, 2012. [47] N. Laoutaris, H. Che, and I. 
Stavrakakis, “The lcd interconnection of lru caches and its analysis,” Perform. Eval., vol. 63, pp. 609–634, July 2006. [48] R. Chiocchetti, D. Rossi, G. Rossini, G. Carofiglio, and D. Perino, “Exploit the known or explore the unknown ? : hamlet-like doubts in icn,” in ACM SIGCOMM,ICN, 2012. [49] R. Chiocchetti, D. Perino, G. Carofiglio, D. Rossi, and G. Rossini, “Inform : a dynamic interest forwarding mechanism for information centric networking,” in ACM SIGCOMM, ICN, 2013. [50] S. Eum, K. Nakauchi, M. Murata, Y. Shoji, and N. Nishinaga, “Catt :potential based routing with content caching for icn,” in ACM SIGCOMM, ICN, 2012. [51] C. Yi, A. Afanasyev, I. Moiseenko, L. Wang, B. Zhang, and L. Zhang, “A case for stateful forwarding plane,” Computer Communications, vol. 36, no. 7, 2013. [52] S. Borst, V. Gupta, and A. Walid, “Distributed caching algorithms for content distribution networks,” IEEE INFOCOM, 2010. [53] D. Perino and M. Varvello, “A reality check for content centric networking,” in Proceedings of the ACM SIGCOMM workshop on Information-centric networking, ICN ’11, (New York, NY, USA), pp. 44–49, ACM, 2011. [54] D. Rossi and G.Rossini, “On sizing ccn content stores by exploiting topogical information,” IEEE INFOCOM, NOMEN Worshop, 2012. [55] J. Kangasharju, J. Roberts, and K. W. Ross, “Object replication strategies in content distribution networks,” Computer Communications, pp. 367–383, 2001. [56] J. P. Nussbaumer, B. V. Patel, F. Schaffa, and J. P. G. Sterbenz, “Networking requirements for interactive video on demand,” IEEE Journal on Selected Areas in Communications, vol. 13, pp. 779–787, 1995. [57] I. Cidon, S. Kutten, and R. Soffer, “Optimal allocation of electronic content,” Computer Networks, vol. 40, no. 2, pp. 205 – 218, 2002.REF´ ERENCES ´ 115 [58] A. Wolman, G. M. Voelker, N. Sharma, N. Cardwell, A. Karlin, and H. M. Levy, “On the scale and performance of cooperative web proxy caching,” in ACM Symposium on Operating Systems Principles, pp. 16–31, ACM New York, 1999. [59] S. K. Fayazbakhsh, Y. Lin, A. Tootoonchian, A. Ghodsi, T. Koponen, B. Maggs, K. C. Ng, V. Sekar, and S. Shenker, “Less pain, most of the gain : incrementally deployable icn,” SIGCOMM ’13 Proceedings of the ACM SIGCOMM 2013 conference on SIGCOMM, 2013. [60] Y. Wang, Z. Li, G. Tyson, S. Uhlig, and G. Xie, “Optimal cache allocation for content-centric networking,” ICNP, IEEE, 2013. [61] N.Carlson, G.Dan, A.Mahanti, and M. Arlitt., “A longitudinal a characterization of local and global bittorrent workload dynamics,” In Proceedings of PAM12, 2012. [62] Z. Li and G. Simon, “Time-shifted tv in content centric networks : The case for cooperative in-network caching,” In Communications (ICC), 2011. [63] J. Ni and D. Tsang, “Large-scale cooperative caching and application-level multicast in multimedia content delivery networks,” Communications Magazine, IEEE, 2005. Trigraphes de Berge apprivoises Th´eophile Trunck To cite this version: Th´eophile Trunck. Trigraphes de Berge apprivoises. Discrete Mathematics. Institut d’Optique Graduate School, 2014. French. . HAL Id: tel-01077934 https://tel.archives-ouvertes.fr/tel-01077934 Submitted on 27 Oct 2014 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. 
L’archive ouverte pluridisciplinaire HAL, est destin´ee au d´epˆot et `a la diffusion de documents scientifiques de niveau recherche, publi´es ou non, ´emanant des ´etablissements d’enseignement et de recherche fran¸cais ou ´etrangers, des laboratoires publics ou priv´es.École Normale Supérieure de Lyon THÈSE pour obtenir le grade de Docteur de l’Université de Lyon, délivré par l’École Normale Supérieure de Lyon Spécialité : Informatique préparée au Laboratoire de l’Informatique du Parallélisme dans le cadre de l’École Doctorale Info-Math présentée et soutenue publiquement par Théophile Trunck le 17 septembre 2014 Titre : Trigraphes de Berge apprivoisés Directeur de thèse : Nicolas Trotignon Jury Celina De Figueiredo, Rapporteur Frédéric Maffray, Rapporteur Stéphan Thomassé, Examinateur Nicolas Trotignon, Directeur Annegret Wagler, ExaminateurRésumé L’objectif de cette thèse est de réussir à utiliser des décompositions de graphes afin de résoudre des problèmes algorithmiques sur les graphes. Notre objet d’étude principal est la classe des graphes de Berge apprivoisés. Les graphes de Berge sont les graphes ne possédant ni cycle de longueur impaire supérieur à 4 ni complémentaire de cycle de longueur impaire supérieure à 4. Dans les années 60, Claude Berge a conjecturé que les graphes de Berge étaient des graphes parfaits, c’est-à-dire que la taille de la plus grande clique est exactement le nombre minimum de couleurs nécessaires à une coloration propre et ce pour tout sous-graphe. En 2002, Chudnovsky, Robertson, Seymour et Thomas ont démontré cette conjecture en utilisant un théorème de structure : les graphes de Berge sont basiques ou admettent une décomposition. Ce résultat est très utile pour faire des preuves par induction. Cependant, une des décompositions du théorème, la skew-partition équilibrée, est très difficile à utiliser algorithmiquement. Nous nous focalisons donc sur les graphes de Berge apprivoisés, c’est-à-dire les graphes de Berge sans skew-partition équilibrée. Pour pouvoir faire des inductions, nous devons adapter le théorème de structure de Chudnovsky et al. à notre classe. Nous prouvons un résultat plus fort : les graphes de Berge apprivoisés sont basiques ou admettent une décomposition telle qu’un côté de la décomposition soit toujours basique. Nous avons de plus un algorithme calculant cette décomposition. Nous utilisons ensuite notre théorème pour montrer que les graphes de Berge apprivoisés admettent la propriété du grand biparti, de la clique-stable séparation et qu’il existe un algorithme polynomial permettant de calculer le stable maximum. Abstract The goal of this thesis is to use graph’s decompositions to solve algorithmic problems on graphs. We will study the class of tamed Berge graphs. A Berge graph is a graph without cycle of odd length at least 4 nor complement of cycle of odd length at least 4. In the 60’s, Claude Berge conjectured that Berge graphs are perfect graphs. The size of the biggest clique is exactly the number of colors required to color the graph. In 2002, Chudnovsky, Robertson, Seymour et Thomas proved this conjecture using a theorem of decomposition: Berge graphs are either basic or have a decomposition. This is a useful result to do proof by induction. Unfortunately, one of the decomposition, the skewpartition, is really hard to use. We are focusing here on tamed Berge graphs, i.e Berge graph without balanced skew- partition. To be able to do induction, we must first adapt the Chudnovsky et al’s theorem of structure to our class. 
We prove a stronger result: tamed Berge graphs are basic or have a decomposition such that one side is always basic. We also have an algorithm to compute this decomposition. We then use our theorem to prove that tamed Berge graphs have the big-bipartite property, the clique-stable set separation property and there exists a polytime algorithm to compute the maximum stable set. iiRemerciements Je tiens tout d’abord à remercier mon directeur de thèse Nicolas Trotignon, pour sa disponibilité, l’aide qu’il m’a accordée pendant cette thèse et toutes les choses qu’il a pu m’apprendre tant au niveau de la recherche que de comment bien rédiger ou de choses plus anecdotiques comme la liste des monarques danois à partir de XVe siècle 1 . Je souhaite à tous les thésards de pouvoir bénéficier d’un encadrement comme le tien. Si je peux présenter cette thèse, c’est aussi grâce à mes relecteurs. Je remercie Celina de Figueiredo pour toutes ses remarques sur le manuscrit et aussi pour avoir accepté de faire le déplacement à Lyon pour faire partie de mon jury. Je remercie également Frédéric Maffray, pour sa relecture scrupuleuse qui m’a permis de rendre plus claires certaines preuves. Merci également à Annegret Wagler d’avoir bien voulu faire partie de mon jury. Stéphan, merci pour tous les problèmes souvent suivis de solutions très élégantes que tu as pu nous poser durant ces années à Lyon, merci aussi d’avoir accepté de participer à ce jury. Durant ce temps passé en thèse, mais aussi durant mon stage de master où j’ai pu commencer à travailler sur les problèmes présentés ici, j’ai eu la chance de rencontrer de nombreuses personnes. Je pense tout d’abord à mes cobureaux, par ordre d’apparition : Pierre, lorsque j’étais encore en stage de master au Liafa à Paris et qui nous a ensuite fait l’honneur de ses visites à Lyon. Bruno pour ton accueil au sein du LIP, tes explications sur le fonctionnement du DI et qui par ta maîtrise de Sage m’a donné goût à la science expérimentale. Sébastien tes discussions ont contribué à enrichir mes connaissances en physique, merci surtout pour ton enthousiasme à vouloir travailler sur n’importe quel problème, qu’il soit ou non lié à ton domaine de recherche. Aurélie sans qui les grapheux de MC2 n’auraient pas pu aller à toutes ces conférences, merci pour ton soutien. Emilie, je ne sais pas si nous avons officiellement été cobureaux mais merci pour toutes ces discussions (scientifiques ou pas) et bien sûr pour ces quiz musicaux. Jean-Florent, pendant ce semestre à Varsovie j’ai été très content d’avoir un cobureau lui aussi conditionné à manger à heures fixes, merci de m’avoir aidé à comprendre les wqo. Comme le temps passé au labo ne se limite pas à mon bureau merci à toute l’équipe MC2 : Nathalie, Pascal, Irena, Natacha, Michaël, Éric, Éric, Zhentao, Kévin, Maxime, Petru, Sebastián, Matthieu. Je souhaite à tout le monde de pouvoir travailler dans une telle équipe. La recherche ne se limite pas à Lyon. Merci à Maria et Kristina de m’avoir appris autant de choses sur les graphes parfaits lors de votre séjour à Paris au début de mon stage. Marcin dziękuję za zaproszenie do Warszawy. Merci aussi à Marko et Jarek avec qui j’ai eu l’occasion de travailler respectivement à Belgrade et à Varsovie. Merci enfin à ceux qui m’ont les premiers fait découvrir la théorie des graphes lors de mon stage de M1. Daniël pour avoir proposé un sujet de stage intéressant et accessible, Matthew et Viresh pour vos discussions, Pim pour tes parties de billard. 
La thèse c’est aussi l’enseignement, à ce titre merci à tous ceux avec qui j’ai eu l’occasion d’enseigner : Christophe, Xavier, Éric, Vincent et Arnaud. Merci aussi à tous mes élèves, particulièrement à ceux que j’ai eu l’occasion de coacher pour le SWERC. 1. C’est une alternance de Christian et de Frédéric iiiMême s’ils ne m’ont pas fait gagner de gourde j’ai été très content de les accompagner à Valence. Un immense merci à Marie et Chiraz pour les réponses aux innombrables questions que j’ai pu leur poser et pour l’organisation de mes missions. Chiraz encore merci pour t’être occupée de l’organisation de ma soutenance. Merci à Damien pour ton aide lors des procédures d’inscription, de réinscription et de soutenance de thèse. Merci aussi à Catherine, Évelyne, Laetitia, Sèverine et Sylvie d’avoir toutes, à un moment donné, su répondre à mes demandes. Comme une thèse ce n’est pas seulement ce qui se passe au labo, je voudrais remercier tous ceux qui m’ont accompagné durant ces trois ans. Au risque d’en oublier, merci à : Coco, Dédé, Pedro, Jo, So, Nico, Julie, Rom, Jess, Marion, Mika, Pauline, Pippo, Audrey, Camille... Merci aussi à toute ma famille, à mes parents qui ont pu être présents pour ma soutenance, à ma sœur qui est régulièrement passée à Lyon pendant ma thèse, mais surtout à Léopold sans qui l’organisation du pot m’aurait bien plus stressé. Enfin Marie, merci pour ton soutien pendant ces trois dernières années, mais merci surtout de m’avoir suivi à Lyon. Et toi lecteur, si tu n’as pas encore trouvé ton nom, je ne t’ai pas oublié. J’espère juste que /dev/random te permettra de le trouver parmi tous les autres dans la grille suivante. O K R A M S H T E R G E N N A Z O G U X U E L L I M A C H I R A Z J E S S I C A V E A H C A T A N E M A T T H E W Z V M S O P E E I L E R U A Y H T A C R W S A O A Q N H K T T H E N I R E H T A C U R L T M P I P D X N W P L E O P O L D E I E N S I U C O C X E M O S E V E R I N E N E F T K D O T R A R A L L P E O L D H N H E F E A E L S O V O R E Y G E E B E O Z I G T L S N A I L I L C N D W A A Y I A L E V N L A I S R A E F I E W H W E R I A R N U E A L L V H N R N N P C A I A R H A Y E S C J S U K C D I A C I L V M A T R M O I N N E I A R K Q I E M S L Z M A D O I S T I I M N P I R F X J O Y N H N T C H G Q S A V I A E S U D F R S C F R E D E R I C A M J X T F T R I A Y L A E T I T I A L J B O R A S T I T J O E I L I M E V E L Y N E R X M R V N E M M N A H T A N O J R M V S C E L I N A P ivTable des matières Table des matières . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v 1 Introduction 1 2 Trigraphes de Berge apprivoisés 9 2.1 Définitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.2 Trigraphes basiques . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.3 Décompositions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 2.4 Structure des trigraphes de Berge apprivoisés . . . . . . . . . . . . . 21 2.5 Blocs de décomposition . . . . . . . . . . . . . . . . . . . . . . . . . 28 3 Propriété du grand biparti 35 3.1 Grand biparti dans les trigraphes de Berge apprivoisés . . . . . . . 37 3.2 Clôture par k-joints généralisés . . . . . . . . . . . . . . . . . . . . 44 3.3 Grand biparti dans les classes closes par k-joints . . . . . . . . . . . 45 4 Clique-Stable séparateur 49 4.1 Clique-stable séparateur dans les trigraphes de Berge apprivoisés . . 51 4.2 Clique-stable séparateur dans les classes closes par k-joints . . . . . 55 5 Calcul du stable maximum 59 5.1 Le cas des trigraphes basiques . . . 
. . . . . . . . . . . . . . . . . . 60 5.2 Stocker α dans les blocs . . . . . . . . . . . . . . . . . . . . . . . . 62 5.3 Calculer α . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 6 Décompositions extrêmes 79 6.1 Décompositions extrêmes . . . . . . . . . . . . . . . . . . . . . . . . 79 6.2 Calculer une fin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85 7 Conclusion 95 Bibliographie 97 vTABLE DES MATIÈRES viChapitre 1 Introduction Les graphes permettent de modéliser de nombreux problèmes. Par exemple, affecter un nombre minimal de salles permettant de faire cours sans que deux cours n’aient lieu dans la même salle au même moment est un problème de coloration du graphe d’intervalles représentant les cours. Certains de ces problèmes sont “faciles”, c’est-à-dire qu’il existe un algorithme en temps d’exécution polynomial pour les résoudre, d’autres sont difficiles (NPcomplet) c’est-à-dire que si l’on trouve un algorithme efficace pour les résoudre, on résout par la même occasion tous les problèmes NP (ceux pour lesquels si l’on nous donne une solution il est facile de la vérifier, en particulier tous les problèmes de chiffrement). Cependant même si un problème est difficile sur les graphes généraux, il est possible que sur certaines classes de graphes il soit facile. Le problème du stable maximum est en général difficile, mais dans le cas des graphes bipartis (les graphes sans cycle de longueur impaire ou de manière équivalente, les graphes pouvant se partitionner en deux ensembles de sommets V1 et V2, tels que les seules arêtes soient entre V1 et V2, c’est-à-dire qu’il n’y ait aucune arête dans V1 ni dans V2) il devient facile. Notons bien que l’adjectif facile veut seulement dire ici qu’il existe un algorithme polynomial (donc dans un certain sens efficace) pour le résoudre et pas que cet algorithme est “facile” à trouver. Par exemple, pour résoudre le problème du stable maximum dans les graphes bipartis, il faut utiliser le théorème de Kőnig assurant l’égalité entre deux paramètres de graphe et un algorithme de couplage qui est maintenant classique. Il est donc intéressant d’étudier des classes de graphes. D’un point de vue théorique afin de comprendre ce qui fait la difficulté d’un problème, mais également d’un point de vue pratique. En effet en modélisant un problème il est tout à fait possible que le graphe obtenu ait des propriétés spécifiques et qu’il soit alors 1CHAPITRE 1. INTRODUCTION 1 1 1 1 2 2 2 3 3 3 Figure 1.1 – Exemple de coloration possible d’avoir des algorithmes efficaces. De nombreuses restrictions peuvent être obtenues, par exemple un graphe peut être obtenu à partir d’une carte routière et donc être représenté sans que ses arêtes ne se croisent (il est alors planaire). Il peut être obtenu à partir d’intersections d’intervalles de R (c’est alors un graphe d’intervalles) ou de contraintes entre deux types d’objets. Nous nous intéressons particulièrement aux classes de graphes définies par sous-graphes induits interdits, c’est-à-dire qu’en supprimant des sommets, on est assuré de ne pas pouvoir obtenir certains sous-graphes. De manière équivalente cela veut dire que quel que soit le sommet que l’on supprime, le graphe reste dans notre classe, ce qui permet par exemple de faire des inductions. Les graphes bipartis sont un exemple de classe de graphe définie par sous-graphes induits interdits (les cycles de longueur impaire). 
La question générale est celle de l’influence des propriétés locales (on interdit localement une configuration) sur les propriétés globales : comment peut-on trouver un ensemble de taille maximum de sommets tous deux à deux non-adjacents, comment peut-on colorier le graphe avec seulement k couleurs ? Le problème de coloration est le suivant : étant donné un graphe G, nous voulons donner à chacun de ses sommets une couleur (classiquement on identifie les couleurs à des entiers), telle que deux sommets adjacents n’aient pas la même couleur. Une solution triviale est de donner à chaque sommet une couleur diffé- rente, mais nous cherchons à minimiser le nombre de couleurs différentes utilisées. Pour un graphe G on appelle nombre chromatique que l’on note χ(G) le nombre minimum de couleurs nécessaire pour colorier G. Une question naturelle à propos du problème de coloration, et c’est celle qui 2va motiver l’introduction de notre classe de graphes, est la suivante : pourquoi un graphe G peut-il être colorié avec c couleurs mais pas avec c − 1 ? Il est facile de trouver des conditions nécessaires. Par exemple, si un graphe contient une arête, c’est-à-dire deux sommets adjacents, il ne peut pas être colorié avec moins de 2 couleurs. Plus généralement s’il contient une clique de taille k (un ensemble de sommets tous deux à deux adjacents) alors il ne pourra pas être colorié avec strictement moins de k couleurs. Pour un graphe G on note ω(G) la taille de sa plus grande clique. Ce que nous venons de voir peut se traduire avec nos notations : Pour tout graphe G, χ(G) ≥ ω(G). Cependant la présence de clique n’est pas la seule obstruction. Par exemple, il n’y a pas de clique de taille strictement supé- rieure à deux dans un cycle sans corde sur 5 sommets, pourtant il est impossible de le colorier avec strictement moins de 3 couleurs. Plus généralement, il existe de nombreuses constructions qui pour tout entier k fournissent un graphe Gk n’ayant pas de triangle (de clique de taille plus que 3), mais qui n’est pas coloriable avec moins de k couleurs. Dans ce contexte, il est intéressant de regarder comment sont construits les graphes pour lesquels avoir une clique de taille k est la seule raison de ne pas être k coloriable. Il y a une petite subtilité technique, avec seulement cette propriété n’importe quel graphe est, si on lui ajoute une clique suffisamment grande, dans la classe. Pour éviter ce problème et comprendre vraiment les structures de nos graphes, Claude Berge a proposé dans les années 1960 de demander que tous les sous-graphes induits soient également dans la classe. C’est ainsi que sont définis les graphes parfaits : un graphe G est parfait si et seulement si pour tout sous-graphe induit H de G, alors la taille de la plus grande clique de H est égale au nombre de couleurs minimum permettant de colorier le graphe H. Soit avec nos notations un graphe G est parfait si et seulement si pour tout sous-graphe induit H de G, χ(H) = ω(H). Une autre notion importante est celle du stable. Un stable dans un graphe est un ensemble de sommets sans aucune arête entre eux. Pour tout graphe G on note α(G) la taille d’un stable maximum de G. Une opération classique sur un graphe G est de prendre son complémentaire G définit comme suit : les sommets de G sont les même que ceux de G et deux sommets de G sont adjacents si et seulement s’ils ne sont pas adjacents dans G. On peut alors voir qu’un stable dans G devient une clique dans G et inversement. On a alors avec nos notations α(G) = ω(G). 
D’un point de vue algorithmique, il est intéressant de noter que trouver un algorithme de coloration dans les graphes parfaits se ramène à un algorithme de calcul de stable et de clique maximum pondérés (on met des poids sur les sommets et on cherche un stable ou une clique de poids maximum). C’est pourquoi cette 3CHAPITRE 1. INTRODUCTION thèse contient un chapitre sur le calcul du stable mais pas sur un algorithme de coloration. Ce résultat classique est difficile à extraire des travaux originaux de Gröstchel, Lovász, Schrijver mais est exposé plus clairement dans plusieurs travaux ultérieurs [33, 36]. Afin que le problème de coloration soit également traité dans cette thèse, le voici avec sa démonstration. Théorème 1.1 (Gröstchel, Lovász, Schrijver, 1988). Pour toute classe C autocomplémentaire de graphes parfaits, s’il existe un algorithme de complexité O(n k ) pour calculer un stable maximum pondéré d’un graphe de C, alors il existe un algorithme de complexité O(n k+2) pour colorier les graphes de C. Démonstration. Commençons par donner un algorithme qui étant donné un graphe G de C et une liste de cliques maximums de G K1, . . . , Kt avec t < |V (G)| calcule en temps O(n k ) un stable de G intersectant toutes ces cliques. Donnons à chaque sommets le poids yv = |{i; v ∈ Ki}|. Ce poids peut être nul. Calculons alors avec notre algorithme un stable pondéré maximum S. Nous alons montrer que S intersecte bien toutes les cliques Ki . Considérons alors le graphe G0 obtenu à partir de G en réplicant yv fois chaque sommet v. Observons que G0 peut ne pas être dans C mais est un graphe parfait. En réplicant yv fois chaque sommet v de S nous obtenons S 0 un stable maximum de G0 . Par construction, G0 peut être partitionné en t cliques de taille ω(G) qui forment un coloriage optimale de G0 car α(G0 ) = ω(G0 ) = ω(G). Puisque que le complémentaire d’un graphe parfait est parfait, G0 est parfait, et donc |S 0 | = t. Donc dans G, S intersecte toutes les cliques Ki . Nous allons maintenant montrer comment trouver un stable S intersectant toutes les cliques maximum de G. Ce stable formera une couleur et nous appliquerons ensuite cette méthode inductivement sur G \ S qui est bien χ(G) − 1 coloriable. Notons que la classe C n’a pas besoin d’être héréditaire, puisqu’on peut émuler G\S en donnant un poids nul aux sommets de S. Commençons avec t = 0. À chaque étape nous avons une liste de cliques maximums K1, . . . , Kt et nous calculons un stable S les intersectant toutes avec la méthode décrite précédament. Si ω(G \ S) < ω(G), alors notre stable intersecte toute les cliques maximums de G. Dans le cas contraire, calculons une clique maximum Kt+1 de G\S. Ce qui revient à calculer un stable maximum dans G \ S et qui est possible car notre classe est autocomplémentaire et en donnant un poids nul aux sommets de S. Pour prouver notre résultat nous n’avons plus qu’à montrer que t la taille de notre liste de clique maximum est bornée par |V (G)|. 41 1 2 2 3 1 1 1 2 2 2 3 1 1 2 2 3 3 4 Figure 1.2 – Trous et complémentaire de trous (C5 = C5, C7 et C7) Soit Mt la matrice d’incidence des cliques K1, . . . Kt . C’est-à-dire que les colonnes de Mt correspondent aux sommets de G et que chaque ligne est une clique. Montrons par induction que les lignes de Mt sont indépendantes. Le cas de base est trivial. Supposons que les lignes de Mt sont indépendantes et montrons que celles de Mt+1 le sont. Soit x le vecteur d’incidence x de S. On a Mtx = 1 mais pas de Mt+1x = 1. 
Supposons que les lignes de Mt+1 ne soient pas indépendantes. Nous avons, Kt+1 = λ1K1 + · · · + λtKt . En multipliant par x nous avons Kt+1x = λ1 + · · · + λt 6= 1. En multipliant par le vecteur colonne 1 nous avons alors ω(G) = Kt+11 = λ1ω(G) + · · · + λtω. Donc λ1 + · · · + λt = 1, une contradiction. Par conséquent, les matrices M1, M2, . . . ne peuvent avoir plus de |V (G)| lignes, et notre nombre d’itérations est bien borné par |V (G)|. Comme nous l’avons vu, contenir un cycle sans corde de longueur impaire nous assure de ne pas être parfait. Il est facile de voir que contenir le complémentaire d’un cycle sans corde de longueur impaire (tous les sommets non-adjacents deviennent adjacents et ceux adjacents deviennent non-adjacents, pour tout graphe G on note G son complémentaire) empêche également un graphe d’être parfait. On appelle trou un cycle sans corde, et on note Cn le trou de longueur n. On dit qu’un graphe sans trou, ni complémentaire de trou de longueur impaire supérieure à 4 est un graphe de Berge. La propriété d’être parfait pour un graphe est une propriété globale et elle implique la propriété locale d’être de Berge. Un exemple trivial d’influence de propriété locale ou d’interdiction de structures locales sur la coloration est celui d’interdire les arêtes. Dans ce cas il est immédiat que le graphe est coloriable avec une seule couleur. Voyons un exemple plus intéressant. On note P4 le chemin de 4 sommets, ce graphe a 3 arêtes. Un 5CHAPITRE 1. INTRODUCTION graphe G est sans P4 s’il ne contient pas P4 en tant que sous graphe induit. Théorème 1.2. Les graphes sans P4 sont coloriables optimalement avec l’algorithme suivant : Commençons par attribuer à chaque couleur un entier. Puis tant qu’il existe un sommet non colorié le colorier avec la plus petite couleur non utilisée par un de ses voisins. Démonstration. Notons k le nombre de couleurs utilisées par l’algorithme. Soit i le plus petit entier, tel qu’il existe une clique composée de k − i + 1 sommets coloriés de i à k. Si le graphe ne contient pas d’arête, il est clair que l’algorithme est valide, sinon cette clique contient au moins 2 sommets. Nous allons montrer que i = 1. Supposons que ce n’est pas le cas. Par définition de l’algorithme, tout sommet de la clique a un voisin colorié par i − 1, notons S cet ensemble de sommets. Par minimalité de i, S ne peut être réduit à un unique sommet. De plus, les sommets de S ayant la même couleur, ils forment donc un stable (ils sont tous deux à deux non-adjacents). Il existe deux sommets u et v de S, tels que u a un voisin x dans la clique qui n’est pas un voisin de v et v a un voisin dans la clique qui n’est pas un voisin de u. Les sommets u − x − y − v forment un P4, c’est une contradiction. Il existe donc une clique de taille k. Comme nous l’avons vu précédemment il est donc impossible de colorier le graphe avec strictement moins de k couleurs. Notre algorithme est donc optimal. Cette démonstration classique montre en fait que les graphes sans P4 sont des graphes parfaits. Dans les années 1960, Claude Berge a conjecturé qu’un graphe était parfait si et seulement si il était de Berge. Comme on vient de le voir, le passage du global (être coloriable avec exactement le même nombre de couleurs que la taille de la plus grande clique) au local (ne pas contenir de trou ni de complémentaire de trou de longueur impaire plus grande que 4) est clair. D’après les définitions : les graphes parfaits sont des graphes de Berge. 
C’est la réciproque, le passage du local au global qui est complexe. Cette conjecture a motivé de nombreuses recherches, utilisant des outils très différents (polyèdre, combinatoire). Finalement c’est grâce à un théorème de structure qu’en 2002 Chudnovsky, Robertson, Seymour et Thomas ont pu démontrer cette conjecture. Théorème 1.3 (Chudnovsky, Robertson, Seymour, Thomas (2002)). Les graphes de Berge sont parfaits. 6Un théorème de structure est un théorème disant que les graphes d’une classe sont : ou bien basiques ou bien admettent une décomposition. Par basique, on entend qu’ils sont dans une sous-classe de graphe suffisamment simple ou étudiée pour qu’il soit possible de résoudre notre problème sur cette sous-classe. Par décomposition, on entend que le graphe est obtenu en recollant de manière bien définie deux graphes plus petits de la classe. Le sommet d’articulation (un sommet dont la suppression déconnecte le graphe) est un exemple de décomposition. Si l’on a deux graphes G1 et G2 on peut en former un troisième G3 plus gros en identifiant un sommet de G1 avec un sommet de G2, ce sommet devient alors un sommet d’articulation de G3. Il existe également une version plus forte des théorèmes de structure : les théorèmes de structure extrême, qui énoncent qu’un graphe est ou bien basique, ou bien admet une décomposition telle qu’un côté de la décomposition est basique. Si l’on regarde l’arbre de décomposition d’un théorème de structure extrême, c’est un peigne (chaque nœud interne de l’arbre a comme fils une feuille et un nœud). Les théorèmes de structure sont généralement très utiles pour faire des démonstrations ou obtenir des algorithmes. Ils permettent de faire des inductions. Si une propriété est vraie pour les graphes basiques et qu’il est possible en utilisant la forme de la décomposition d’avoir cette propriété sur la composition de graphes, alors la propriété est vraie pour la classe de graphes. Certaines décompositions sont faciles à utiliser, par exemple le sommet d’articulation ou plus généralement la clique d’articulation (une clique dont la suppression déconnecte le graphe) qui peuvent, par exemple, être utilisées pour la coloration. D’autres sont très difficiles à utiliser, par exemple le star-cutset (un sommet dont la suppression et la suppression de certains de ses voisins déconnectent le graphe) et ses généralisations. En effet, il est possible que le star-cutset soit constitué de la majeure partie du graphe, dans ce cas toute tentative d’induction nécessitant de conserver le starcutset dans chaque partie de la décomposition (c’est ce qui est le cas classique) conduira nécessairement à un algorithme exponentiel. Dans cette thèse, nous allons utiliser le théorème de structure des graphes de Berge afin de démontrer un certain nombre de propriétés et d’algorithmes. Le théorème de Chudnovsky et al. et dont nous avons déjà parlé est le suivant. Théorème 1.4 (Chudnovsky, Robertson, Seymour, Thomas (2002)). Les graphes de Berge sont basiques ou décomposables par skew-partitions équilibrées, 2-joints ou complémentaire de 2-joints. Toutes les définitions seront données dans le chapitre 2. Nous ne savons pas 7CHAPITRE 1. INTRODUCTION utiliser les skew-partitions (une décomposition généralisant les star-cutset) dans les algorithmes, nous nous focalisons donc sur les graphes de Berge sans skewpartition. Vu que notre classe et que nos problèmes sont auto-complémentaires notre principale décomposition est donc le 2-joint. 
Nous essayerons tant que possible de généraliser nos résultats à d’autres classes décomposables par k-joints. Dans le chapitre 2, nous donnons toutes les définitions. Nous avons en fait besoin de généraliser la définition d’arête d’un graphe. Nous aurons alors des arêtes fortes, des non-arêtes fortes et un nouveau type d’adjacence : les arêtes optionnelles qui encodent une adjacence floue. Ces graphes généralisés sont appelés trigraphes et on été introduit par Chudnovsky et al. lors de la preuve du théorème fort des graphes parfaits. Nous avons donc besoin de redéfinir toutes les notions usuelles de graphe, ainsi que le théorème de structure des graphes de Berge. Dans le chapitre 3, nous nous intéressons aux classes pour lesquelles il existe pour tout graphe de la classe deux ensembles de sommets complets dans le graphe ou dans son complément de taille linéaire. Il y a des contre-exemples de classes de graphes de Berge sans ces ensembles, mais lorsqu’on exclut les skew-partition ces ensembles existent toujours. Il est possible de généraliser cette propriété aux classes construites par k-joints. Dans le chapitre 4, nous nous intéressons à la propriété de la clique-stable séparation, c’est-à-dire à l’existence d’un nombre polynomial de partitions du graphe en 2 ensembles tels que pour toute clique et tout stable sans intersection, il existe une partition contenant la clique d’un côté et le stable de l’autre. Ce problème est ouvert en général sur les graphes de Berge, mais nous pouvons le démontrer dans le cas où on exclut les skew-partition. Ici encore cette propriété peut-être étendue aux classes construites par k-joints. Dans le chapitre 5, nous nous intéressons au calcul en temps polynomial du stable maximum. Notre algorithme est constructif et donne directement les sommets du stable. Dans le cas général des classes héréditaires on peut reconstruire avec un surcoût linéaire un stable maximum en sachant calculer sa valeur, cependant notre classe n’est pas héréditaire. Cet algorithme ne se généralise pas aux classes construites par k-joints, il existe de telles classes où le calcul du stable maximum est NP-complet. À partir de l’algorithme de calcul du stable on peut déduire un algorithme qui calcule une coloration optimale avec un surcoût de O(n 2 ). Dans le chapitre 6, nous montrons qu’en étendant notre ensemble de décompositions nous pouvons obtenir une version extrême du théorème de structure. Nous donnons également des algorithmes permettant de calculer une telle décomposition. 8Chapitre 2 Trigraphes de Berge apprivoisés Les résultats de ce chapitre ont été obtenus avec Maria Chudnovsky, Nicolas Trotignon et Kristina Vušković, ils font l’objet d’un article [15] soumis à Journal of Combinatorial Theory, Series B. Nous introduisons dans ce chapitre une généralisation des graphes, les trigraphes. Il semble que cette notion de trigraphe, inventée par Chudnovsky, soit en train de devenir un outil très utile en théorie structurelle des graphes. 
Les trigraphes ont entre autre été utilisés pour éliminer la paire homogène (deux ensembles A et B de sommets se comportant comme deux sommets vis à vis du reste du graphe i.e tels que le reste du graphe se décompose en quatre ensembles : les sommets complétement adjacents aux sommets de A et de B, ceux complétement adjacents aux sommets de A et complétement non-adjacents aux sommets de B, ceux complétement non-adjacents aux sommets de A et complétement adjacents aux sommets de B et ceux complétement non-adjacents aux sommets de A et de B) de la liste des décompositions utilisées pour décomposer les graphes de Berge [9]. Ils apparaissent également dans l’étude des graphes sans griffe [14] ou dans celle des graphes sans taureau [11, 10]. La notion de trigraphe apparait également lors de l’étude d’homomorphismes [18, 19]. Cependant il faut bien noter que dans ce dernier cas, même si le nom et les idées générales sont identiques, les définitions diffèrent légèrement. Avant de tout définir, essayons d’expliquer informellement l’intérêt des trigraphes. Le premier exemple de leurs utilisation est dans la démonstration de l’amélioration du théorème de décomposition des graphes de Berge. En effet, lors 9CHAPITRE 2. TRIGRAPHES DE BERGE APPRIVOISÉS A B Figure 2.1 – Paire Homogène (Une double arête entre deux ensembles indique qu’ils sont complet, une arête simple indique qu’on ne sait rien sur leur adjacence, l’absence d’arête indique qu’il n’y a aucune arête entre les deux ensembles) de la preuve du théorème des graphes parfaits Chudnovsky, Robertson, Seymour et Thomas [13] parviennent à décomposer les graphes de Berge en utilisant trois types de décompositions. Chudnovsky [9] montre alors qu’une de ces décompositions, la paire homogène est en fait inutile. Voyons l’intérêt des trigraphes dans la démonstration de ce résultat. L’idée de sa preuve est de prendre le plus petit graphe G dont la seule décomposition possible est une paire homogène et de chercher une contradiction. L’idée naturelle est alors de contracter cette paire homogène afin d’obtenir un graphe plus petit G0 , qui par hypothèse de minimalité, admet alors une décomposition autre que la paire homogène. On peut alors déduire de cette décomposition une décomposition dans G. Ce qui est contradictoire puisque par hypothèse, G n’est décomposable que par paire homogène. Cette idée est en fait une méthode classique de démonstration en théorie structurelle de graphes. Le principal problème (et c’est pour répondre à ce problème que les trigraphes ont été introduit) est de savoir comment contracter la paire homogène. Une paire homogène étant deux ensembles A et B de sommets se comportant comme deux sommets vis à vis du reste du graphe, on peut vouloir les réduire à deux sommets a et b tout en préservant leurs adjacences par rapport au reste du graphe. La question est de savoir s’il faut ou non mettre une arête entre ces deux sommets contractés. Sans donner les définitions précises, si on décide de ne pas mettre d’arête entre a et b, dans un certain sens, a va pouvoir être séparé de b, alors que dans le graphe de départ, séparer A de B n’a pas de sens. Nous avons le même problème dans le complémentaire du graphe si l’on décide de mettre une arête entre a et b. En 10fait aucun de ces choix n’est le bon, en effet a priori chacun de ces choix pourrait créer une décomposition. Ce n’est finalement pas le cas puisque le résultat est vrai mais toute démonstration se heurte à ce problème. L’idée est alors de mettre une arête optionnelle. 
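La condition d’homogénéité décrite ci-dessus se vérifie mécaniquement ; l’esquisse suivante (hypothétique, graphe représenté par dictionnaire d’adjacence) teste que tout sommet hors de A ∪ B est complètement adjacent ou complètement non adjacent à chacun des deux ensembles. Les conditions de taille usuelles d’une paire homogène sont omises.

```python
def condition_homogene(graphe, A, B):
    """Teste que chaque sommet hors de A ∪ B est complètement adjacent ou
    complètement non adjacent à A, et de même pour B.
    `graphe` : dict sommet -> ensemble de voisins."""
    A, B = set(A), set(B)
    for v in set(graphe) - A - B:
        voisins = graphe[v]
        for X in (A, B):
            if not (X <= voisins or X.isdisjoint(voisins)):
                return False      # v est adjacent à une partie seulement de X
    return True
```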
Les trigraphes sont alors définis comme des graphes mais avec trois types d’adjacence, les arêtes, les arêtes optionnelles et les non-arêtes. Une réalisation d’un trigraphe est alors une affectation des arêtes optionnelles en arêtes et non-arêtes. Tout le vocabulaire et les propriétés sur les graphes se traduisent alors sur les trigraphes de la manière suivante : Une propriété P est vraie sur le trigraphe T si et seulement si elle est vraie sur toutes les réalisations G de T. On ne crée alors plus de décomposition car on peut alors montrer que si toutes les réalisations contiennent cette décomposition, alors cette décomposition était présente à l’origine, ce qui mène à une contradiction. Bien entendu pour pouvoir faire l’induction tous les résultats doivent être vrais sur les trigraphes. Comme c’est souvent le cas lors des preuves par induction afin d’obtenir le résultat voulu nous devons en montrer un plus fort. Les trigraphes permettent donc de travailler naturellement sur des hypothèses d’induction plus fortes ; pour toute réalisation la propriété doit être vraie ; ce qui nous permet alors de contracter des ensembles de sommets tout en préservant l’information d’adjacence entre ces sommets. Certains étaient adjacents, d’autre non, et suivant la réalisation choisie on a ou pas cet état. Ceci nous permet, si une décomposition existe (dans toutes les réalisations), de la trouver dans le graphe de départ. Ceci n’étant que de brèves explications, voyons maintenant les définitions pré- cises. Dans ce chapitre nous allons commencer par formaliser le vocabulaire usuel des graphes sur les trigraphes. Nous définissons ensuite plusieurs classes de trigraphes basiques (il s’agit des classes de base du thérorème fort des graphes parfaits, à savoir les trigraphes bipartis, les line trigraphes et les trigraphes doublés) et les décompositions que nous allons utiliser, là encore il s’agit des décompositions utilisées pour la démonstration du théorème fort des graphes parfaits à savoir les 2-joints et les skew-partitions équilibrées. Comme mentionné précédemment les paires homogènes ne sont pas utiles et nous les définirons plus tard lorsque nous voudrons une version “extrême” du théorème de structure, c’est à dire un théorème de structure dans lequel à chaque étape de décomposition, un côté au moins de la décomposition est un trigraphe basique. Nous définissons ensuite la classe des trigraphes que nous allons étudier, les trigraphes de Berge bigames, il s’agit d’une généralisation des trigraphes monogames utilisés dans [8]. Dans les trigraphes monogames les arêtes optionnelles forment un couplage alors que dans 11CHAPITRE 2. TRIGRAPHES DE BERGE APPRIVOISÉS les trigraphes bigames on autorise sous certaines conditions des composantes de deux arêtes optionnelles. Nous devons alors étendre le théorème de décomposition des trigraphes de Berge monogames aux trigraphes de Berge bigames. Enfin, nous pourrons définir la sous-classe qui nous intéresse, à savoir les trigraphes de Berge apprivoisés et montrer qu’ils se comportent bien vis à vis des décompositions du théorème. En effet il est possible de les décomposer tout en gardant l’information utile de l’autre partie de la décomposition et en restant dans la sous-classe. 2.1 Définitions Pour tout ensemble X, on note  X 2  l’ensemble de tous les sous-ensembles de X de taille 2. Pour alléger les notations, un élément {u, v} de  X 2  sera également noté uv ou vu. 
Un trigraphe T est composé d’un ensemble fini V (T), appelé l’ensemble de sommet de T et d’une application θ :  V (T) 2  −→ {−1, 0, 1}, appelée fonction d’adjacence. Deux sommets distincts de T sont dit fortement adjacents si θ(uv) = 1, fortement antiadjacents si θ(uv) = −1 et semiadjacents si θ(uv) = 0. On dit que u et v sont adjacents s’ils sont fortement adjacents ou semiadjacents ; et qu’ils sont antiadjacents, s’ils sont fortement antiadjacents ou semiadjacents. Une arête (antiarête) est une paire de sommets adjacents (antiadjacents). Si u et v sont adjacents (antiadjacents), on dit également que u est adjacent (antiadjacent) à v, ou que u est un voisin (antivoisin) de v. De la même manière, si u et v sont fortement adjacents (fortement antiadjacents), alors u est un voisin fort (antivoisin fort) de v. On note E(T), l’ensemble de toutes les paires fortement adjacentes de T, E(T) l’ensemble de toutes les paires fortement antiadjacentes de T et E ∗ (T) l’ensemble de toutes les paires semiadjacentes de T. Un trigraphe T peut être considèré comme un graphe si et seulement si E ∗ (T) est vide. Une paire {u, v} ⊆ V (T) de sommets distincts est une arête optionnelle si θ(uv) = 0, une arête forte si θ(uv) = 1 et une antiarête forte si θ(uv) = −1. Une arête uv (antiarête, arête forte, antiarête forte, arête optionnelle) est entre deux ensembles A ⊆ V (T) et B ⊆ V (T) si u ∈ A et v ∈ B ou si u ∈ B et v ∈ A. Soit T un trigraphe. Le complément de T est le trigraphe T avec le même ensemble de sommet V (T) que T, et avec la fonction d’adjacence θ = −θ. Pour v ∈ V (T), on note N(v) l’ensemble de tous les sommets de V (T) \ {v} qui sont adjacents à v. Soit A ⊂ V (T) et b ∈ V (T) \ A. On dit que b est fortement complet à A si b est fortement adjacent à tous les sommets de A ; b est fortement 122.1. DÉFINITIONS anticomplet à A si b est fortement antiadjacent à tous les sommets de A ; b est complet à A si b est adjacent à tous les sommets de A ; et b est anticomplet à A si b est anticomplet à tous les sommets de A. Pour deux ensembles disjoints A, B de V (T), B est fortement complet (fortement anticomplet, complet, anticomplet) à A si tous les sommets de B sont fortement complets (fortement anticomplets, complets, anticomplets) à A. Un ensemble de sommets X ⊆ V (T) domine (domine fortement) T, si pour tout sommet v ∈ V (T) \ X il existe u ∈ X, tel que v est adjacent (fortement adjacent) à u. Une clique de T est un ensemble de sommets deux à deux adjacents. Une clique forte est un ensemble de sommets deux à deux fortement adjacents. Un stable est un ensemble de sommets deux à deux antiadjacents. Un stable fort est un ensemble de sommets deux à deux fortement antiadjacents. Remarquons qu’avec ces définitions une clique et un stable peuvent s’intersecter sur plusieurs sommets, dans ce cas l’intersection est composée uniquement d’arêtes optionnelles. Pour X ⊆ V (T), le trigraphe induit par T sur X (noté T|X) a X comme ensemble de sommets et θ restreinte sur  X 2  comme fonction d’adjacence. L’isomorphisme entre deux trigraphes est défini de manière naturelle et pour deux trigraphes T et H, on dit que H est un trigraphe induit de T ou que T contient H en tant que sous-trigraphe induit, s’il existe X ⊆ V (T), tel que H est isomorphe à T|X. Comme la relation de sous-graphe induit est la principale relation étudiée dans cette thèse, on dit également que T contient H si T contient H comme sous-trigraphe induit. On note T \ X le trigraphe T|(V (T) \ X). Soit T un trigraphe. 
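Les définitions précédentes se transcrivent directement en une petite structure de données. L’esquisse Python ci-dessous (noms et représentation purement hypothétiques) code la fonction d’adjacence θ à valeurs dans {−1, 0, 1} ainsi que quelques notions associées ; elle servira de support aux esquisses des sections suivantes.

```python
from itertools import combinations

class Trigraphe:
    """Trigraphe T = (V(T), θ) : θ vaut 1 (arête forte), -1 (antiarête forte)
    ou 0 (arête optionnelle) sur chaque paire de sommets distincts."""

    def __init__(self, sommets, theta):
        # theta : dict frozenset({u, v}) -> {-1, 0, 1} ; -1 (antiarête forte) par défaut
        self.V = set(sommets)
        self.theta = {frozenset(p): theta.get(frozenset(p), -1)
                      for p in combinations(self.V, 2)}

    def t(self, u, v):
        return self.theta[frozenset((u, v))]

    def adjacents(self, u, v):        # adjacents = fortement adjacents ou semiadjacents
        return self.t(u, v) >= 0

    def antiadjacents(self, u, v):    # antiadjacents = antiarête forte ou arête optionnelle
        return self.t(u, v) <= 0

    def voisins(self, v):             # N(v), au sens (faible) de la définition
        return {u for u in self.V - {v} if self.adjacents(u, v)}

    def aretes_optionnelles(self):    # E*(T)
        return {p for p, x in self.theta.items() if x == 0}

    def complement(self):             # complément : même ensemble de sommets, θ remplacé par -θ
        return Trigraphe(self.V, {p: -x for p, x in self.theta.items()})
```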
Un chemin P de T est une suite de sommets distincts p1, . . . , pk telle que ou bien k = 1, ou bien pour i, j ∈ {1, . . . , k}, pi est adjacent à pj , si |i − j| = 1 et pi est antiadjacent à pj si |i − j| > 1. Dans ce cas, V (P) = {p1, . . . , pk} et on dit que P est un chemin de p1 à pk, son intérieur est l’ensemble P ∗ = V (P) \ {p1, pk}, et la taille de P est k − 1. Parfois on note P par p1- · · · -pk. Notons que puisqu’un graphe est également un trigraphe, un chemin dans un graphe avec notre définition est plus généralement appelé dans la littérature un chemin sans corde. Un trou dans un trigraphe T est un sous-trigraphe induit H de T sur un ensemble de sommets h1, . . . , hk avec k ≥ 4, et pour i, j ∈ {1, . . . , k}, hi est adjacent à hj si |i − j| = 1 ou si |i − j| = k − 1 ; et hi est antiadjacent à hj si 1 < |i − j| < k − 1. La taille d’un trou est égale au nombre de sommets qu’il contient. Parfois on note H par h1- · · · -hk-h1. Un antichemin (resp. antitrou) dans T est un sous-trigraphe induit de T dont le complément est un chemin (resp. trou) dans T. 13CHAPITRE 2. TRIGRAPHES DE BERGE APPRIVOISÉS Une semiréalisation d’un trigraphe T est n’importe quel trigraphe T 0 sur l’ensemble de sommet V (T) qui vérifie les propriétés suivantes : pour tout uv ∈  V (T) 2  , si uv ∈ E(T) alors uv ∈ E(T 0 ), et si uv ∈ E(T) alors uv ∈ E(T 0 ). On peut voir une semiréalisation de T comme une affectation des arêtes optionnelles de T avec trois valeurs possibles : “arête forte”, “antiarête forte” ou “arête optionnelle”. Une réalisation de T est n’importe quel graphe qui est une semiréalisation de T (c’est à dire que toutes les arêtes optionnelles sont assignées aux valeurs “arête forte” ou “antiarête forte”). Pour S ⊆ E ∗ (T), on note par GT S la réalisation de T avec E(T)∪S comme ensemble d’arêtes, c’est à dire que dans GT S les arêtes optionnelles de S sont assignées à la valeur “arête” et que celles de E ∗ (T) \ S sont assignées à la valeur “antiarête”. La réalisation GT E ∗(T) est appelée réalisation complète de T. Soit T un trigraphe. Pour X ⊆ V (T), on dit que X et T|X sont connexes (resp. anticonnexes) si le graphe G T|X E ∗(T|X) (G T|X ∅ ) est connexe, c’est à dire qu’en remplaçant toute les arêtes optionnelles par des arêtes fortes (resp. antiarêtes fortes) le graphe obtenu est connexe (resp. le complémentaire du graphe obtenu est connexe). Une composante connexe (ou simplement une composante) de X est un sous-ensemble connexe maximal de X, et une anticomposante connexe (ou simplement une anticomposante) de X est un ensemble maximal anticonnexe de X. L’idée des ces définitions est la suivante : — une propriété est vraie sur un trigraphe T s’il existe un graphe G réalisation de T sur laquelle elle est vraie. — une propriété forte est vraie sur un trigraphe T si pour tout graphe G réalisation de T elle est vraie. — une antipropriété est vraie sur un trigraphe T si elle est vraie sur le complémentaire de T (les arêtes fortes deviennent des antiarêtes fortes et inversement). Attention dans les sections suivantes, les trigraphes basiques et les décompositions sont implicitement fortes. En effet si on a besoin de pouvoir parler de trigraphe connexe (au sens faible), nous n’aurons jamais besoin de parler de trigraphe faiblement de Berge ou admettant un 2-joint faible. En effet comme mentionné dans lors des motivations des trigraphes, nous voulons qu’un trigraphe soit basique ou admette une décomposition si et seulement si c’est le cas pour toutes ses réalisations. 
De cette manière nous pourront transformer une obstruction dans le trigraphe contracté en une obstruction dans le trigraphe de départ. 142.2. TRIGRAPHES BASIQUES 2.2 Trigraphes basiques Un trigraphe T est de Berge, s’il ne contient pas de trou impair ni d’antitrou impair. Par conséquent, un trigraphe est de Berge si et seulement si son complé- ment l’est. Notons également que T est de Berge si et seulement si, toutes ses semiréalisations (réalisations) sont de Berge. Remarquons qu’un trigraphe sans arêtes optionnelles et en particulier toutes réalisations d’un trigraphe peuvent être vues comme un graphe, il est alors important de voir qu’être un trigraphe de Berge pour un trigraphe sans arêtes optionnelles est exactement être un graphe de Berge. Notre définition dans les trigraphes est bien une généralisation aux trigraphes de la définition usuelle dans les graphes. Un trigraphe T est biparti si on peut partitionner son ensemble de sommets en deux stables forts. Toute réalisation d’un trigraphe biparti est un graphe biparti, et donc tout trigraphe biparti est de Berge. De la même manière, les compléments de trigraphes bipartis sont également de Berge. De même cette définition est bien une généralisation de la définition de biparti dans les graphes. Un trigraphe T est un line trigraphe, si la réalisation complète de T est le line graphe d’un graphe biparti et que toute clique de taille au moins 3 dans T est une clique forte. L’énoncé suivant est un résultat simple sur les lines trigraphes. Ici encore un line trigraphe sans arêtes optionnelles est un line graphe de graphe biparti. Lemme 2.1. Si T est un line trigraphe, alors toute réalisation de T est le line graphe d’un graphe biparti. Et plus, toute semiréalisation de T est un line trigraphe. Démonstration. Par définition, la réalisation complète G de T est le line graphe d’un graphe biparti R. Soit S ⊆ E ∗ (T). Définissons RS comme suit. Pour tout xy ∈ E ∗ (T) \ S, soit vxy l’extrémité commune de x et y dans R. Alors vxy est de degré 2 dans R car toute clique de taille au moins 3 dans T est une clique forte. Soit axy et bxy ses voisins. Supprimons vxy de R et remplaçons le par deux nouveaux sommets, uxy, wxy tels que uxy est seulement adjacent à axy, et wxy est seulement adjacent à bxy. Maintenant RS est biparti et GT S est le line graphe de RS. On a alors la première partie du résultat, la seconde suit car la réalisation complète d’une semiréalisation est une réalisation. Remarquons que cela implique que les line trigraphes ainsi que leurs complé- ments sont de Berge. Définissons maintenant les trigraphes semblables aux double 15CHAPITRE 2. TRIGRAPHES DE BERGE APPRIVOISÉS split graphes (défini pour la première fois dans [13]), c’est à dire les trigraphes doublés. Une bonne partition d’un trigraphe T est une partition (X, Y ) de V (T) (les cas X = ∅ ou Y = ∅ ne sont pas exclus) telle que : — Chaque composante de T|X a au plus deux sommets, et chaque anticomposante de T|Y a au plus deux sommets. — Il n’y a pas d’arête optionnelle de T qui intersecte à la fois X et Y — Pour toute composante Cx de T|X, toute anticomposante CY de T|Y et tout sommet v dans CX ∪ CY , il existe au plus une arête forte et une antiarête forte entre CX et CY qui est incidente à v. Un trigraphe est doublé si et seulement s’il a une bonne partition. Les trigraphes doublés peuvent aussi être définis comme les sous-trigraphes induits des double split trigraphes (voir [9] pour une définition des double split trigraphes que nous n’utiliserons pas ici). 
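À titre d’exemple, tester si un trigraphe est biparti revient à tester la 2-colorabilité du graphe dont les arêtes sont toutes les paires adjacentes (fortes ou optionnelles), puisqu’un stable fort ne peut contenir ni arête forte ni arête optionnelle. Esquisse s’appuyant sur la classe Trigraphe esquissée plus haut :

```python
def partition_bipartie(T):
    """Renvoie une partition de V(T) en deux stables forts si elle existe, None sinon."""
    cote = {}
    for depart in T.V:
        if depart in cote:
            continue
        cote[depart] = 0
        pile = [depart]
        while pile:
            v = pile.pop()
            for u in T.voisins(v):    # voisins au sens faible : arêtes fortes et optionnelles
                if u not in cote:
                    cote[u] = 1 - cote[v]
                    pile.append(u)
                elif cote[u] == cote[v]:
                    return None       # une paire adjacente dans un même côté : pas biparti
    X = {v for v in T.V if cote[v] == 0}
    return X, T.V - X
```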
Remarquons que les trigraphes doublés sont clos par sous-trigraphes induit et par complémentation (en effet (X, Y ) est une bonne partition d’un trigraphe T si et seulement si (Y, X) est une bonne partition de T). Un graphe doublé est n’importe quelle réalisation d’un trigraphe doublé. Nous montrons maintenant le résultat suivant : Lemme 2.2. Si T est un trigraphe doublé, alors toute réalisation de T est un graphe doublé. De plus, toute semiréalisation de T est aussi un graphe doublé. Démonstration. L’énoncé sur les réalisations est clair par définition. Soit T un trigraphe doublé, et (X, Y ) une bonne partition de T. Soit T 0 une semiréalisation de T. Il est facile de voir que (X, Y ) est aussi une bonne partition de T 0 (par exemple, si une arête optionnelle ab de T|X est assignée à la valeur “antiarête”, alors {a} et {b} deviennent des composantes de T 0 |X, mais ils vérifient toujours la définition d’une bonne partition). Ceci prouve le résultat sur les semiréalisations. Remarquons que ceci implique que tout trigraphe doublé est de Berge, car tout graphe doublé est de Berge. Un trigraphe est basique si c’est, ou bien un trigraphe biparti, ou bien le complément d’un trigraphe biparti, ou bien un line trigraphe, ou bien le complément d’un line trigraphe ou bien un trigraphe doublé. Le résultat suivant résume les ré- sultats de cette section et montre bien que nos classes basiques sont implicitement fortement basiques. Lemme 2.3. Les trigraphes basiques sont de Berge et sont clos par soustrigraphe induit, semiréalisation, réalisation et complémentation. 162.3. DÉCOMPOSITIONS A1 C1 B1 A2 C2 B2 X1 X2 Figure 2.2 – 2-joint (Une double arête entre deux ensembles indique qu’ils sont complet, une arête simple indique qu’on ne sait rien sur leur adjacence, l’absence d’arête indique qu’il n’y a aucune arête entre les deux ensembles) 2.3 Décompositions Nous pouvons maintenant décrire les décompositions dont nous aurons besoin afin d’énoncer notre théorème de décomposition. Pour commencer, un 2-joint dans un trigraphe T est une partition (X1, X2) de V (T) telle qu’il existe des ensembles disjoints A1, B1, C1, A2, B2, C2 ⊆ V (T) vérifiant : — X1 = A1 ∪ B1 ∪ C1 et X2 = A2 ∪ B2 ∪ C2 ; — A1, A2, B1 et B2 sont non-vides ; — il n’y a pas d’arête optionnelle qui intersecte à la fois X1 et X2 ; — tout sommet de A1 est fortement adjacent à tous les sommets de A2, et tout sommet de B1 est fortement adjacent à tous les sommets de B2 ; — il n’y a pas d’autre arête forte entre X1 et X2 ; — pour i = 1, 2 |Xi | ≥ 3 ; et — pour i = 1, 2, si |Ai | = |Bi | = 1, alors la réalisation complète de T|Xi n’est pas un chemin de taille deux reliant les membres de Ai et ceux de Bi . 17CHAPITRE 2. TRIGRAPHES DE BERGE APPRIVOISÉS Remarquons bien qu’aucune arête importante (celles entre X1 et X2) pour la définition du 2-joint ne peut être une arête optionnelle. Ici aussi et sauf cas pathologique (Xi est un triangle d’arêtes optionelles ayant un sommet dans chaque ensemble Ai , Bi et Ci), le 2-joint est implicitement fort, dans tout graphe G réalisation de T, (X1, X2) est un 2-joint. Notons bien que dans les trigraphes de Berge ce cas pathologique ne peut pas apparaitre car il contredit le lemme 2.4 énoncé juste après. Nous aurions pu éviter ce problème en choisissant une définition plus forte du 2-joint, par exemple un 2-joint vérifiant par définition tous les points du théorème 2.9. 
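Les conditions d’adjacence « entre X1 et X2 » de la définition du 2-joint se vérifient mécaniquement. L’esquisse suivante (avec la classe Trigraphe ci-dessus, les six ensembles étant supposés deux à deux disjoints) omet la dernière condition, celle sur les chemins de longueur deux :

```python
def conditions_2_joint(T, A1, B1, C1, A2, B2, C2):
    """Vérifie les conditions d'adjacence d'un 2-joint (X1, X2) pour une affectation
    donnée : arêtes fortes entre A1 et A2 et entre B1 et B2, antiarêtes fortes partout
    ailleurs entre X1 et X2 (ce qui interdit en particulier toute arête optionnelle
    entre les deux côtés)."""
    A1, B1, C1 = set(A1), set(B1), set(C1)
    A2, B2, C2 = set(A2), set(B2), set(C2)
    X1, X2 = A1 | B1 | C1, A2 | B2 | C2
    if X1 | X2 != T.V or X1 & X2:
        return False                  # (X1, X2) doit partitionner V(T)
    if not (A1 and A2 and B1 and B2) or len(X1) < 3 or len(X2) < 3:
        return False
    for u in X1:
        for v in X2:
            attendu = 1 if (u in A1 and v in A2) or (u in B1 and v in B2) else -1
            if T.t(u, v) != attendu:
                return False
    return True
```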
Ce théorème prouve que dans le cas des trigraphes de Berge apprivoisés les 2-joints possèdent certaines conditions techniques supplémentaires que n’a pas ce cas pathologique. Cependant l’utilisation du théorème 2.5 qui est exactement le théorème 3.1 de [9] ne nous autorise pas à utiliser une définition adaptée aux trigraphes de Berge apprivoisés. Dans ces conditions, on dit que (A1, B1, C1, A2, B2, C2) est une affectation de (X1, X2). Le 2-joint est propre si pour i = 1, 2, toute composante de T|Xi intersecte à la fois Ai et Bi . Remarquons que le fait que le 2-joint soit propre ne dépend pas du choix de l’affectation. Un complément de 2-joint d’un trigraphe T est un 2-joint de T. Plus précisé- ment, un complément de 2-joint d’un trigraphe T est une partition (X1, X2) de V (T) telle que (X1, X2) est un 2-joint de T ; et (A1, B1, C1, A2, B2, C2) est une affectation de ce complément de 2-joint, si c’est une affectation du 2-joint correspondant dans le complément, i.e. A1 est fortement complet à B2 ∪ C2 et fortement anticomplet à A2, C1 est fortement complet à X2, et B1 est fortement complet à A2 ∪ C2 et fortement anticomplet à B2. Lemme 2.4. Soit T un trigraphe de Berge et (A1, B1, C1, A2, B2, C2) une affectation d’un 2-joint propre de T. Alors tous les chemins dont une extrémité est dans Ai, l’autre étant dans Bi et dont l’intérieur est dans Ci, pour i = 1, 2 ont des longueurs de même parité. Démonstration. Dans le cas contraire, pour i = 1, 2, soit Pi des chemins dont une extrémité est dans Ai , l’autre extrémité étant dans Bi et dont l’intérieur est dans Ci , tels que P1 et P2 ont des parités différentes. Ils forment un trou impair, c’est une contradiction. Notre deuxième décomposition est la skew-partition équilibrée. Soit A, B deux ensembles disjoints de V (T). On dit que la paire (A, B) est équilibrée s’il n’y a pas de chemin impair de longueur strictement supérieure à 1 dont les extrémités 182.3. DÉCOMPOSITIONS A1 C1 B1 A2 C2 B2 X1 X2 Figure 2.3 – Complèment de 2-joint (Une double arête entre deux ensembles indique qu’ils sont complet, une arête simple indique qu’on ne sait rien sur leur adjacence, l’absence d’arête indique qu’il n’y a aucune arête entre les deux ensembles) A B C D Figure 2.4 – Skew-partition (Une double arête entre deux ensembles indique qu’ils sont complet, l’absence d’arête indique qu’il n’y a aucune arête entre les deux ensembles) 19CHAPITRE 2. TRIGRAPHES DE BERGE APPRIVOISÉS sont dans B et dont l’intérieur est dans A et qu’il n’y a pas non plus d’antichemin de longueur strictement supérieure à 1 dont les extrémités sont dans A et dont l’intérieur est dans B. Une skew-partition est une partition (A, B) de V (T) telle que A n’est pas connexe et B n’est pas anticonnexe. Une skew-partition (A, B) est équilibrée si la paire (A, B) l’est. Étant donné une skew-partition équilibrée (A, B), (A1, A2, B1, B2) est une affectation de (A, B) si A1, A2, B1 et B2 sont des ensembles disjoints et non-vide, A1 ∪ A2 = A, B1 ∪ B2 = B, A1 est fortement anticomplet à A2, et B1 est fortement complet à B2. Remarquons que pour toute skew-partition équilibrée, il existe au moins une affectation. Attention, l’adjectif “équilibrée” pourrait laisser penser que les tailles des deux parties sont comparables, ce n’est absolument pas le cas. Il est tout à fait possible qu’un des ensembles de l’affectation concentre presque tout le trigraphe, le reste ne comportant qu’un nombre fixe négligeable de sommets. 
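De même, on peut tester qu’une partition (A, B) est une skew-partition : A ne doit pas être connexe et B ne doit pas être anticonnexe, l’anticonnexité se testant dans le complément. Le caractère équilibré, plus coûteux à vérifier, est laissé de côté dans cette esquisse (qui réutilise la classe Trigraphe) :

```python
def _connexe(T, X):
    """X est connexe si la réalisation complète de T|X est connexe, c'est-à-dire
    si le graphe des paires adjacentes (fortes ou optionnelles) internes à X l'est."""
    X = set(X)
    if not X:
        return True
    vus, pile = set(), [next(iter(X))]
    while pile:
        v = pile.pop()
        if v in vus:
            continue
        vus.add(v)
        pile.extend(u for u in X - vus if T.adjacents(u, v))
    return vus == X

def est_skew_partition(T, A, B):
    """(A, B) est une skew-partition si c'est une partition de V(T) avec A non
    connexe et B non anticonnexe (anticonnexité testée dans le complément de T)."""
    A, B = set(A), set(B)
    if not A or not B or A & B or A | B != T.V:
        return False
    return (not _connexe(T, A)) and (not _connexe(T.complement(), B))
```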
C’est le problème majeur à l’élaboration d’algorithmes utilisant les skew-partitions équilibrées. Si nous étions assurés que chaque ensemble de l’affectation fût composé d’au moins une fraction du trigraphe, nos algorithmes pourraient alors s’étendre sur tous les graphes parfaits. Ces deux décompositions généralisent les décompositions utilisées dans [13]. De plus toutes les arêtes et non-arêtes “importantes” dans ces décompositions doivent respectivement être des arêtes fortes et des antiarêtes fortes du trigraphe. Nos décompositions sont donc bien implicitement fortes. Nous pouvons maintenant énoncer plusieurs lemmes techniques. Un trigraphe est dit monogame si tous ses sommets appartiennent à au plus une arête optionnelle. Nous pouvons maintenant énoncer le théorème de décomposition pour les trigraphes monogames de Berge. C’est le théorème 3.1 de [9]. Théorème 2.5. Soit T un trigraphe monogame de Berge. Alors un des points suivants est vrai : — T est basique ; — T ou T admet un 2-joint propre ; ou — T admet une skew-partition équilibrée. Si (A, B) est une skew-partition d’un trigraphe T, on dit que B est un star cutset de T si au moins une anticomposante de B a taille 1. L’énoncé suivant est le Théorème 5.9 de [8]. 202.4. STRUCTURE DES TRIGRAPHES DE BERGE APPRIVOISÉS Lemme 2.6. Si un trigraphe de Berge admet un star cutset, alors il admet une skew-partition équilibrée. On dit que X est un ensemble homogène d’un trigraphe T si 1 < |X| < |V (T)|, et que tout sommet de V (T) \ X est ou bien fortement complet ou bien fortement anticomplet à X. Lemme 2.7. Soit T un trigraphe et X un ensemble homogène de T, tel qu’il existe un sommet de V (T)\X fortement complet à X, et un sommet de V (T)\ X fortement anticomplet à X. Alors T admet une skew-partition équilibrée. Démonstration. Soit A l’ensemble des sommets de V (T) \ X qui sont fortement anticomplets à X, et C l’ensemble des sommets de V (T) \ X qui sont fortement complets à X. Soit x ∈ X. Alors C ∪ {x} est un star cutset de T (puisque A est X \ {x} sont non-vides et fortement anticomplets entre eux), et donc T admet une skew-partition équilibrée d’après le lemme 2.6. Nous aurons également besoin du résultat suivant (qui est un corollaire immé- diat du théorème 5.13 de [8]) : Lemme 2.8. Soit T un trigraphe de Berge. Supposons qu’il y ait une partition de V (T) en quatre ensembles non-vides X, Y, L, R, tels que L est fortement anticomplet à R, et X est fortement complet à Y . Si (L, Y ) est équilibrée alors T admet une skew-partition équilibrée. 2.4 Structure des trigraphes de Berge apprivoisés Pour les besoins de nos inductions nous aurons besoin d’utiliser des trigraphes plus généraux que les trigraphes monogame. Nous allons donc définir les trigraphes bigame et montrer que le théorème de décomposition des trigraphes monogames de Berge s’étend sur les trigraphes bigames de Berge. Pour se familiariser avec notre objet d’étude principal, les trigraphes de Berge apprivoisés (définis dans la suite de ce paragraphe), nous allons commencer par montrer que dans ces trigraphes les 2-joints vérifient plusieurs conditions techniques supplémentaires. Soit T un trigraphe, notons par Σ(T) le graphe ayant V (T) comme ensemble de sommets et E ∗ (T) (les arêtes optionnelles de T) comme ensemble d’arêtes. Les 21CHAPITRE 2. 
TRIGRAPHES DE BERGE APPRIVOISÉS Figure 2.5 – Configuration possible des arêtes optionnelles dans les trigraphes bigame (les arêtes sont représentées par des traits pleins et les arêtes optionnelles par des pointillés) composantes connexes de Σ(T) sont appelées les composantes optionnelles de T. On dit qu’un trigraphe de Berge est bigame si les propriétés suivantes sont vérifiées : — Chaque composante optionnelle de T a au plus deux arêtes (et donc aucun sommet n’a plus de deux voisins dans Σ(T)). — Soit v ∈ V (T) de degré deux dans Σ(T), notons x et y ses voisins. Alors, ou bien v est fortement complet à V (T) \ {v, x, y} dans T, et x est fortement adjacent à y dans T (dans ce cas on dit que v et la composante optionnelle qui contient v sont lourds) ou bien v est fortement anticomplet à V (T) \ {v, x, y} dans T, et x est fortement antiadjacent à y dans T (dans ce cas on dit que v et la composante optionnelle qui contient v sont légers). Remarquons qu’un trigraphe T est bigame si et seulement si T l’est aussi ; de plus v est léger dans T si et seulement si v est lourd dans T. On dit qu’un trigraphe de Berge est apprivoisé s’il est bigame et qu’il ne contient pas de skew-partition équilibrée. On dit qu’un graphe de Berge est apprivoisé s’il ne contient pas de skewpartition équilibrée. Théorème 2.9. Soit T un trigraphe de Berge apprivoisé et soit (A1, B1, C1, A2, B2, C2) une affectation d’un 2-joint (X1, X2) dans T. Alors les propriétés suivantes sont vérifiées : (i) (X1, X2) est un 2-joint propre ; (ii) chaque sommet de Xi a un voisin dans Xi, i = 1, 2 ; (iii) chaque sommet de Ai a un antivoisin dans Bi , i = 1, 2 ; (iv) chaque sommet de Bi a un antivoisin dans Ai, i = 1, 2 ; 222.4. STRUCTURE DES TRIGRAPHES DE BERGE APPRIVOISÉS (v) chaque sommet de Ai a un voisin dans Ci ∪ Bi, i = 1, 2 ; (vi) chaque sommet de Bi a un voisin dans Ci ∪ Ai, i = 1, 2 ; (vii) si Ci = ∅, alors |Ai | ≥ 2 et |Bi | ≥ 2, i = 1, 2 ; (viii) |Xi | ≥ 4, i = 1, 2. Démonstration. Remarquons que d’après le lemme 2.6, ni T ni T ne peuvent contenir de star cutset. Pour démontrer (i), nous devons simplement démontrer que toute composante de T|Xi intersecte à la fois Ai et Bi , i = 1, 2. Supposons par contradiction qu’il y ait une composante connexe C de T|X1 qui n’intersecte pas B1 (les autres cas sont symétriques). S’il y a un sommet c ∈ C \ A1 alors pour tout sommet u ∈ A2, {u} ∪ A1 est un star cutset qui sépare c de B1, c’est une contradiction. Donc C ⊆ A1. Si |A1| ≥ 2 alors nous pouvons choisir deux sommets c ∈ C et c 0 6= c dans A1. Dans ce cas {c 0} ∪ A2 est un star cutset qui sépare c de B1. On a alors C = A1 = {c}. Il existe donc une composante de T|X1 qui n’intersecte pas A1 et par le même argument on peut déduire que B1 = 1 et que l’unique sommet de B1 n’a pas de voisin dans X1. Puisque |X1| ≥ 3, il existe un sommet u dans C1. Maintenant, {c, a2} avec a2 ∈ A2 est un star cutset qui sépare u de B1, c’est une contradiction. Pour démontrer (ii), nous avons simplement à remarquer que si un sommet de Xi n’a pas de voisin dans Xi , alors il forme une composante de T|Xi qui n’intersecte pas à la fois Ai et Bi . Ceci contredit (i). Pour démontrer (iii) et (iv), considérons un sommet a ∈ A1 fortement complet à B1 (les autres cas sont symétriques). Si A1 ∪ C1 6= {a} alors B1 ∪ A2 ∪ {a} est un star cutset qui sépare (A1 ∪ C1) \ {a} de B2. Donc A1 ∪ C1 = {a} et |B1| ≥ 2 car |X1| ≥ 3. 
Mais alors B1 est un ensemble homogène, fortement complet à A1 et fortement anticomplet à A2 et donc T admet une skew-partition équilibrée d’après le lemme 2.7, c’est une contradiction. Pour démontrer (v) et (vi), considérons un sommet a ∈ A1 fortement anticomplet à C1 ∪ B1 (les autres cas sont symétriques). D’après (ii), le sommet a a un voisin dans A1, et donc A1 6= {a}. Dans ce cas {a} ∪ B1 ∪ C1 ∪ B2 ∪ C2 est un star cutset dans T. C’est une contradiction. Pour démontrer (vii), supposons que C1 = ∅ et que |A1| = 1 (les autres cas sont symétriques). D’après (iv) et (vi), et comme C1 = ∅, A1 est à la fois complet et anticomplet à B1. Ceci implique que l’unique sommet de A1 soit semiadjacent à tous les sommets de B1 et donc puisque T est apprivoisé, |B1| ≤ 2. Puisque |X1| ≥ 3, |B1| = 2 et comme T est apprivoisé, l’unique sommet de A1 est ou bien 23CHAPITRE 2. TRIGRAPHES DE BERGE APPRIVOISÉS fortement complet ou bien fortement anticomplet à V (T) \ (A1 ∪ B1), c’est une contradiction car A1 est fortement complet à A2 et fortement anticomplet à B2. Pour démontrer (viii), nous pouvons supposer d’après (vii) que C1 6= ∅. Supposons donc par contradiction que |A1| = |C1| = |B1| = 1. Soit a, b, c les sommets de respectivement A1, B1, C1. D’après (iii), ab est une antiarête. De plus, c est adjacent au sommet a sinon il y aurait un star cutset centré en b qui séparerait a de c. Pour la même raison c est adjacent à b. Puisque la réalisation complète de T|X1 n’est pas un chemin de longueur 2 allant de a à b, nous savons que ab est une arête optionnelle. Ceci contredit le lemme 2.4. Soit b un sommet de degré deux dans Σ(T) et soit a, c les voisins de b dans Σ(T). Supposons également que b soit léger. Nous appelons un sommet w ∈ V (T)\ {a, b, c} un a-appendice de b s’il existe u, v ∈ V (T) \ {a, b, c} tel que : — a-u-v-w est un chemin ; — u est fortement anticomplet à V (T) \ {a, v} ; — v est fortement anticomplet à V (T) \ {u, w} ; et — w n’a pas de voisin dans Σ(T) à la possible exception de v (i.e. il n’y a pas d’arête optionnelle contenant w dans T à la possible exception de vw). Un c-appendice est défini de la même manière. Si b est un sommet lourd de T, alors w est un a-appendice de b dans T si et seulement si w est un a-appendice de b dans T. Le résultat suivant est analogue au théorème 2.5 pour les trigraphes de Berge bigame. Théorème 2.10. Tout trigraphe de Berge bigame est ou bien basique, ou bien admet une skew-partition équilibrée, un 2-joint propre, ou un 2-joint propre dans son complément. Démonstration. Pour T un trigraphe de Berge bigame, notons τ (T) le nombre de sommets de degré deux dans Σ(T). La démonstration est une induction sur τ (T). Si τ (T) = 0, le résultat est direct à partir du théorème 2.5. Maintenant prenons T un trigraphe de Berge bigame et soit b un sommet de degré deux dans Σ(T). Soient a, c les deux voisins de b dans Σ(T). Quitte à passer au complément, on peut supposer que b est léger. Soit T 0 le trigraphe obtenu à partir de T en rendant a fortement adjacent à b. Si b n’a pas de a-appendice, alors nous n’avons pas besoin d’effectuer plus de modifications ; prenons W = ∅. Dans le cas contraire, choisissons un a-appendice w 242.4. STRUCTURE DES TRIGRAPHES DE BERGE APPRIVOISÉS de b, et prenons u, v comme dans la définition des a-appendices ; prenons V (T 0 ) = V (T) \ {u, v}, W = {w} et rendons a semiadjacent à w dans T 0 . Si W = ∅ alors clairement T 0 est un trigraphe de Berge bigame et τ (T) > τ (T 0 ). Supposons que W 6= ∅. 
Si t ∈ V (T 0 ) est adjacent à a et à w, alors a-u-v-w-t est un trou impair dans T. Par conséquent aucun sommet de T 0 n’est adjacent à la fois à a et à w. En particulier, il n’y a pas d’antitrou impair de taille au moins 7 dans T 0 qui passe par a et par w. Comme il n’y a pas de trou impair qui passe par a et par w, T 0 est un trigraphe de Berge bigame. De plus τ (T) > τ (T 0 ) (nous rappelons que dans Σ(T), v est l’unique voisin potentiel de w et b est l’unique voisin potentiel de a). Par induction, une des conséquences du théorème 2.10 est vraie pour T 0 . Nous considérons les cas suivants et montrons que pour chacun d’entre eux, une des conséquences du théorème 2.10 est vraie pour T. Cas 1 : T 0 est basique. Supposons d’abord que T 0 est biparti. Nous affirmons que T est biparti. Soit V (T 0 ) = X ∪Y où X et Y sont des stables forts disjoints. L’affirmation est claire si b n’a pas de a-appendice, on peut donc supposer que W = {w}. On peut supposer que a ∈ X ; alors w ∈ Y . Dans ce cas X ∪ {v} et Y ∪ {u} sont des stables forts de T d’union V (T) et donc T est biparti. Supposons que T 0 est un line trigraphe. Observons pour commencer qu’aucune clique de taille au moins trois dans T ne contient u, v ou b. Donc si W = ∅, il est clair que T est un line trigraphe. Nous pouvons donc supposer que W 6= ∅. Remarquons que la réalisation complète de T est obtenue à partir de la réalisation complète de T 0 en subdivisant deux fois l’arête aw. Puisque aucun sommet de T 0 n’est adjacent à la fois à a et à w, T est un line trigraphe (car les line graphes sont clos par subdivision d’arêtes n’ayant pas d’extrémité commune, et que les line graphes de graphes bipartis sont clos par double subdivision de telles arêtes). Supposons que T 0 soit biparti et prenons X, Y une partition de V (T) en deux cliques fortes de T 0 . On peut supposer que a ∈ X. Supposons pour commencer que b ∈ Y . Puisque a est l’unique voisin fort de b dans T 0 , Y = {b} et donc X contient a et c, c’est une contradiction. Par conséquent on peut supposer que b ∈ X. Puisque a est l’unique voisin fort de b dans T 0 , X = {a, b} et b est fortement anticomplet à Y \ {c}. Soit N l’ensemble des voisins forts de a dans Y \ {c} et M l’ensemble des antivoisins forts de a dans Y \ {c}. Puisque T est un trigraphe de Berge bigame, Y = N ∪ M ∪ W ∪ {c}. Si |N| > 1 ou |M| > 1, alors d’après le lemme 2.7 T admet une skew-partition équilibrée. On peut donc supposer que |N| ≤ 1 et que |M| ≤ 1. Puisqu’aucun sommet de T 0 n’est adjacent à la fois à a et 25CHAPITRE 2. TRIGRAPHES DE BERGE APPRIVOISÉS à w, |N ∪ W| ≤ 1. Maintenant si M = ∅ ou que N ∪ W = ∅, alors T 0 est biparti et nous pouvons procéder comme ci-dessus. Sinon N ∪ W ∪ {c} est un clique cutset de T 0 de taille 2 qui est un star cutset de T et donc d’après le lemme 2.6 T admet une skew-partition équilibrée. Supposons maintenant que T 0 est un line trigraphe. Puisque bc est une arête optionnelle dans T 0 et que b est fortement anticomplet à V (T 0 ) \ {a, b, c}, c est fortement complet à V (T 0 ) \ {a, b, c} sinon il y aurait dans T 0 une clique de taille 3 avec une arête optionnelle. Puisque T 0 est un line trigraphe, pour tout triangle S de T 0 et tout sommet v ∈ V (T 0 ) \ S, v a au moins un voisin fort dans S. Si x, y ∈ V (T 0 ) \ {a, b, c} sont adjacents, alors {x, y, c} est un triangle et b n’a pas de voisin fort à l’intérieur. Par conséquent, V (T 0 ) \ {a, b, c} est un stable fort. Maintenant V (T 0 ) \ {a, c}, {a, c} forme une partition de V (T 0 ) en deux stables forts de T 0 . 
T 0 est donc biparti et nous pouvons procéder comme ci-dessus. Finalement, supposons que T 0 est un trigraphe doublé et prenons (X, Y ) une bonne partition de T 0 . Si T 0 |Y est vide ou n’a qu’une unique anticomposante, alors T 0 est biparti. Nous pouvons donc supposer que Y contient deux sommets fortement adjacents x et x 0 . S’il existe y 6= x et y 0 6= x 0 , tels que {x, y} et {x 0 , y0} soient des anticomposantes de T 0 |Y , alors tout sommet de T 0 a au moins deux voisins forts, c’est une contradiction à cause de b. Ceci implique que par exemple {x} est une anticomposante de T 0 |Y . Si T 0 |X est connexe ou vide, alors T 0 est le complément d’un trigraphe biparti. On peut donc supposer que T 0 |X a au moins deux composantes. Dans ce cas, Y est un star cutset de T 0 centré en x. Ce cas est le prochain cas traité. Cas 2 : T 0 admets une skew-partition équilibrée. Soit (A, B) une skew-partition équilibrée de T 0 . Si W 6= ∅, prenons A0 = A ∪ {u, v} ; et si W = ∅ prenons A0 = A. Dans tous les cas, T|A0 n’est pas connecté. Nous allons montrer que si une anticomposante Y de B n’intersecte pas {a, b}, alors T admet une skew-partition équilibrée. Puisque a est complet à W dans T 0 , il existe une composante L de A qui n’intersecte pas {a, b} et donc L est aussi une composante de A0 . Sans perte de généralité, on peut supposer que Y est disjoint de W (c’est clair dans le cas où B ∩ {a, b} 6= ∅ et si B ∩ {a, b} = ∅ on peut sans perte de généralité supposer que Y ∩ W = ∅). Maintenant, dans T, Y est fortement complet à B \ Y , L est fortement anticomplet à A0 \L et donc A0 , B0 est une skew-partition de T et (L ∪ Y ) ∩ ({a, b} ∪ W ∪ (A0 \ A)) ⊆ {b}. Puisque A, B est une skew-partition équilibrée de T 0 , la paire (L, Y ) est équilibrée dans T. Par conséquent le lemme 2.8 implique que T admet une skew-partition équilibrée. Nous pouvons donc supposer qu’il n’y a pas de tel ensemble Y et donc T 0 |B a 262.4. STRUCTURE DES TRIGRAPHES DE BERGE APPRIVOISÉS exactement deux anticomposantes, B1 et B2, de plus a ∈ B1 et b ∈ B2. Puisque a est l’unique voisin fort de b dans T 0 , B1 = {a}. Puisque a est anticomplet à W ∪ {c}, nous pouvons en déduire que W ∪ {c} ⊆ A0 . Soit A1 la composante de T|A0 contenant c et A1 = A0 \ A1. Supposons que a n’ait pas de voisin fort dans T. Dans ce cas B2 = {b} et puisque T est un trigraphe de Berge bigame, a est fortement anticomplet à A0 . Nous pouvons supposer que T n’est pas biparti, car sinon nous aurions déjà le résultat. T contient donc un trou impair C, qui doit être dans A1 ou dans A2 (en effet {a, b} est fortement complet à A0 ). Puisque T est un trigraphe de Berge bigame, C contient au moins une arête forte xy. Dans ce cas {x, y} est un star cutset dans T qui sépare {a, b} d’un sommet de A2. D’après le lemme 2.6, T a une skew-partition équilibrée. Nous pouvons donc supposer que le sommet a a au moins un voisin fort dans T. Soit x ∈ A2. Notons N l’ensemble des voisins forts de a dans T. Alors (N ∪ {a}) \ {x} est un star cutset dans T séparant b de x à moins que x soit l’unique voisin fort de a. Dans ce cas {a, x} est un star cutset séparant A1 de A2 \ {x}, à moins que A2 = {x}. Supposons alors que c ait un voisin y (dans ce cas c’est un voisin fort car T est un trigraphe de Berge bigame). Alors {c, y} est un star cutset séparant A1 \ {c, y} de x, à moins que A1 = {c, y} mais dans ce cas T est biparti. Supposons donc que c n’ait pas de voisin dans A1. 
Si T n’est pas biparti, il contient un trou impair, dans ce cas ce trou est dans A1 et n’importe quelle arête forte (qui existe puisque T est un trigraphe de Berge bigame) forme un star cutset séparant c du reste du trou. D’après le lemme 2.6, T a une skew-partition équilibrée. Cas 3 : T 0 admet un 2-joint propre. Soit (A1, B1, C1, A2, B2, C2) une affectation d’un 2-joint propre de T 0 . Supposons que a ∈ A1 ∪ B1 ∪ C1. Alors W ⊆ A1 ∪ B1 ∪ C1. Si W 6= ∅ prenons C 0 1 = C1 ∪ {u, v}, et sinon prenons C 0 1 = C1. On peut supposer que (A1, B1, C0 1 , A2, B2, C2) n’est pas un 2-joint propre de T et donc sans perte de généralité a ∈ A1 et b ∈ A2. Alors c ∈ B2 ∪ C2. Puisque a est l’unique voisin fort de b dans T 0 , A1 = {a}. D’après le cas 2, on peut supposer que T 0 n’admet pas de skew-partition équilibré et donc le lemme 2.9 implique que a est anticomplet à B1. Remarquons que puisque T est un trigraphe de Berge bigame, ab est la seule arête optionnelle dans T contenant le sommet a. Soit N l’ensemble des voisins forts de a dans C 0 1 dans T. D’après les définitions du 2-joint propre, N 6= ∅. On peut supposer que T n’admet pas de skew-partition équilibrée et donc d’après le lemme 2.9, tout 2-joint de T est propre. Dans ce cas, ou bien (N, B1, C0 1 \ N, {a}, B2, C2 ∪ A2) est une affectation d’un 2-joint propre de T, ou bien |N| = |B1| = 1 et la réalisation complète de T|(C 0 1 ∪ B1) est un chemin de longueur deux entres N et B1. Notons 27CHAPITRE 2. TRIGRAPHES DE BERGE APPRIVOISÉS n-n 0 -b1 ce chemin avec n ∈ N et b1 ∈ B1. Puisque b1 n’a pas de voisin dans Σ(T) à l’exception possible de n 0 , b1 est un a-appendice de b. En particulier, W 6= ∅. Puisque W ⊆ B1 ∪ C1, w = b1, u = n et v = n 0 . Dans ce cas |A1 ∪ B1 ∪ C1| = 2, ce qui contredit le fait que (A1, B1, C1, A2, B2, C2) soit l’affectation d’un 2-joint propre de T 0 . Cas 4 : (T 0 ) admet un 2-joint propre. Soit (A1, B1, C1, A2, B2, C2), une affectation d’un 2-joint dans T 0 . Commençons par supposer que W 6= ∅. Alors on peut supposer que a, w ∈ A1 ∪ B1 ∪ C1. Puisqu’aucun sommet de T 0 est adjacent à la fois à a et à w, on peut supposer sans perte de généralité que a ∈ A1, w ∈ B1 et C2 = ∅. Puisque a est l’unique voisin fort de b dans T 0 , b ∈ B2 et C1 = ∅. Dans ce cas (A1, B1, ∅, B2, A2, ∅) est une affectation d’un 2-joint de T 0 . D’après le deuxième cas, on peut supposer que T 0 n’admet pas de skew-partition équilibrée et donc ce 2-joint est propre d’après le lemme 2.9. Nous pouvons alors procéder comme dans le cas précédent. Supposons donc que W = ∅. On peut supposer que (A1, B1, C1, A2, B2, C2) n’est pas une affectation d’un 2-joint propre de T, par conséquent et sans perte de généralité, a ∈ A1 ∪ B1 ∪ C1, et b ∈ A2 ∪ B2 ∪ C2. Puisque a est l’unique voisin fort de b dans T 0 et puisque A1, B1 sont tous les deux non-vides, b 6∈ C2, et on peut alors supposer que b ∈ B2. Puisque A1 6= ∅, C1 = ∅ et A1 = {a}. Puisque |A1 ∪ B1 ∪ C1| ≥ 3, |B1| ≥ 2. De plus c est fortement antiadjacent à a et semiadjacent à b dans T, on peut donc en déduire que c ∈ A2. Maintenant, si le sommet a a un voisin x dans B1 dans le trigraphe T (c’est alors un voisin fort), alors {x, a} ∪ A2 ∪ C2 est un star cutset dans T, et si a est fortement anticomplet à B1 dans T, d’après la définition du 2-joint propre, B1 est un ensemble homogène dans T. Dans tous les cas, d’après le lemme 2.6 ou le lemme 2.7, T admet une skew-partition équilibrée. 
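Avant de passer aux blocs de décomposition, voici, à titre récapitulatif, une esquisse de vérification mécanique des deux conditions combinatoires de la définition des trigraphes bigames donnée au début de cette section (le fait d’être de Berge n’est pas retesté ici) :

```python
def est_bigame(T):
    """Teste que chaque composante optionnelle a au plus deux arêtes et que tout
    sommet de degré deux dans Σ(T) est lourd ou léger (classe Trigraphe ci-dessus)."""
    # Σ(T) : graphe dont les arêtes sont les arêtes optionnelles de T
    sigma = {v: {u for u in T.V - {v} if T.t(u, v) == 0} for v in T.V}
    vus = set()
    for s in T.V:
        if s in vus:
            continue
        comp, pile = set(), [s]
        while pile:
            v = pile.pop()
            if v not in comp:
                comp.add(v)
                pile.extend(sigma[v] - comp)
        vus |= comp
        if sum(len(sigma[v]) for v in comp) // 2 > 2:
            return False              # composante optionnelle à plus de deux arêtes
    for v in T.V:
        if len(sigma[v]) == 2:
            x, y = sigma[v]
            reste = T.V - {v, x, y}
            lourd = T.t(x, y) == 1 and all(T.t(v, u) == 1 for u in reste)
            leger = T.t(x, y) == -1 and all(T.t(v, u) == -1 for u in reste)
            if not (lourd or leger):
                return False
    return True
```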
2.5 Blocs de décomposition La manière d’utiliser les décompositions dans les chapitres suivantes nous demande de construire des blocs de décompositions et de récursivement poser plusieurs questions sur ces blocs. Pour pouvoir faire cela, nous devons nous assurer que les blocs de décompositions sont toujours dans notre classe de graphes. Un ensemble X ⊆ V (T) est un fragment d’un trigraphe T si une des conditions suivantes est vérifiée : 1. (X, V (T) \ X) est un 2-joint propre de T ; 282.5. BLOCS DE DÉCOMPOSITION A2 C2 B2 X2 A2 C2 B2 X2 Figure 2.6 – Blocs de décomposition : 2-joint 2. (X, V (T) \ X) est un complément de 2-joint propre de T. Remarquons qu’un fragment de T est un fragment de T. Nous pouvons maintenant définir le bloc de décomposition TX associé à un fragment X. Un 2-joint est pair ou impair suivant la parité des longueurs des chemins décrits par le lemme 2.4. Si (X1, X2) est un 2-joint propre impair et si X = X1, alors prenons (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2). Nous construisons alors le bloc de décomposition TX1 = TX comme suit. Nous partons de T|(A1 ∪ B1 ∪ C1). Nous ajoutons ensuite deux sommets marqués a et b, tels que a est fortement complet à A1, b est fortement complet à B1, ab est une arête optionnelle et il n’y a aucune autre arête entre {a, b} et X1. Remarquons que {a, b} est une composante optionnelle de TX. Nous l’appelons la composante marquée de TX. Si (X1, X2) est un 2-joint propre pair et si X = X1, alors prenons (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2). Nous construisons alors le bloc de décomposition TX1 = TX comme suit. Nous partons de T|(A1 ∪ B1 ∪ C1). Nous ajoutons ensuite trois sommets marqués a, b et c, tels que a est fortement complet à A1, b est fortement complet à B1, ac et cb sont deux arêtes optionnelles et il n’y a aucune autre arête entre {a, b, c} et X1. À nouveau nous appelons {a, b, c} la composante marquée de TX. Si (X1, X2) est le complément d’un 2-joint propre impair et si X = X1, alors prenons (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2). Nous construi- 29CHAPITRE 2. TRIGRAPHES DE BERGE APPRIVOISÉS A2 C2 B2 X2 A2 C2 B2 X2 Figure 2.7 – Blocs de décomposition : Complément de 2-joint sons alors le bloc de décomposition TX1 = TX comme suit. Nous partons de T|(A1 ∪ B1 ∪ C1). Nous ajoutons ensuite deux sommets marqués a et b, tels que a est fortement complet à B1 ∪ C1, b est fortement complet à A1 ∪ C1, ab est une arête optionnelle et il n’y a aucune autre arête entre {a, b} et X1. À nouveau nous appelons {a, b} la composante marquée de TX. Si (X1, X2) est le complément d’un 2-joint propre pair et si X = X1, alors prenons (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2). Nous construisons alors le bloc de décomposition TX1 = TX comme suit. Nous partons de T|(A1∪B1∪C1). Nous ajoutons ensuite trois sommets marqués a, b et c, tels que a est fortement complet à B1 ∪ C1, b est fortement complet à A1 ∪ C1, ac et cb sont deux arêtes optionnelles, ab est une arête forte et il n’y a aucune autre arête entre {a, b, c} et X1. À nouveau nous appelons {a, b, c} la composante marquée de TX. Lemme 2.11. Si X est un fragment d’un trigraphe T de Berge bigame, alors TX est un trigraphe de Berge bigame. Démonstration. Par définition de TX, il est clair que tout sommet de TX est ou bien dans au plus une arête optionnelle, ou bien est lourd, ou bien est léger, TX est donc bien un trigraphe bigame. Il reste juste à démontrer que TX est de Berge. Soit X = X1 et (X1, X2) un 2-joint propre de T. 
Soit (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2). 302.5. BLOCS DE DÉCOMPOSITION Commençons par supposer que TX1 a un trou impair H = h1- · · · -hk-h1. Notons ZX1 l’ensemble des sommets marqués de TX1 . Supposons que les sommets de ZX1 soient consécutifs dans H, alors H \ ZX1 est un chemin P dont une extrémité est dans A1, l’autre dans B1 et dont l’intérieur est dans C1. Un trou de T est obtenu en ajoutant à P un chemin dont une extrémité est dans A2, dont l’autre extrémité est dans B2 et dont l’intérieur est dans C2. D’après le lemme 2.4 ce trou est impair, c’est une contradiction. Dans ce cas les sommets marqués ne sont pas consécutifs dans H, et puisque c n’a pas de voisin dans V (T) \ {a, b, c}, on peut en déduire que c 6∈ V (H). Maintenant, un trou de même longueur que H peut être obtenu dans T en remplaçant si besoin a et/ou b par des sommets a2 ∈ A2 et b2 ∈ B2, choisi pour être antiadjacent (ce qui est possible d’après le lemme 2.9). Supposons alors que TX1 ait un antitrou impair H = h1- · · · -hk-h1. Puisqu’un antitrou de longueur 5 est également un trou, on peut supposer que H est de longueur au moins 7. Donc dans H, toute paire de sommets à un voisin commun. Il y a donc au plus un sommet parmi a, b, c qui est dans H, et à cause de son degré, c ne peut pas être dans H. Un antitrou de même longueur que H peut être obtenu dans T en remplaçant si besoin a ou b par un sommet a2 ∈ A2 ou b2 ∈ B2, encore une fois c’est une contradiction. Remarquons que les cas où T a un complément de 2-joint sont traités par complémentation. Lemme 2.12. Si X est un fragment d’un trigraphe T de Berge apprivoisé, alors le bloc de décomposition TX n’a pas de skew-partition équilibrée. Démonstration. Pour démontrer ce résultat commençons par supposer que TX ait une skew-partition équilibrée (A0 , B0 ) et notons (A0 1 , A0 2 , B0 1 , B0 2 ) une affectation de cette skew-partition. Cherchons maintenant une skew-partition dans T. Nous utiliserons le lemme 2.8 pour démontrer qu’il existe alors une skew-partition équilibrée dans T. Le résultat sera alors vrai par contradiction. Soit X = X1 et (X1, X2) un 2-joint propre de T. Soit (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2). Puisque les sommets marqués dans TX, a et b n’ont pas de voisin fort commun et que c n’a pas de voisin fort, il y a, à symétrie près, deux cas : a ∈ A0 1 et b ∈ A0 1 , ou a ∈ A0 1 et b ∈ B0 1 . Remarquons que lorsque (X1, X2) est pair, le sommet marqué c doit être dans A0 1 car il est adjacent au sommet a et n’a pas de voisin fort. Commençons par supposer que les sommets a et b sont tous les deux dans A0 1 . Dans ce cas (X2 ∪ A0 1 \ {a, b, c}, A0 2 , B0 1 , B0 2 ) est une affectation d’une skew- 31CHAPITRE 2. TRIGRAPHES DE BERGE APPRIVOISÉS partition (A, B) dans T. La paire (A0 2 , B0 1 ) est équilibrée dans T car elle l’est dans TX. Donc d’après le lemme 2.8, T admets une skew-partition équilibrée, c’est une contradiction. Les sommets a et b ne sont donc pas tous les deux dans A0 1 et donc a ∈ A0 1 et b ∈ B0 1 . Dans ce cas, (A2 ∪C2 ∪A0 1 \ {a, c}, A0 2 , B2 ∪B0 1 \ {b}, B0 2 ) est une affectation d’une skew-partition (A, B) dans T. La paire (A0 2 , B0 2 ) est équilibrée dans T car elle l’est dans TX. Donc d’après le lemme 2.8, T admet une skew-partition équilibrée, c’est une contradiction. Le cas où T admet un complément de 2-joint se prouve par complémentation. Nous avons dans ce chapitre introduit toutes les notions de base qui vont nous être utiles dans les chapitres suivants. 
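Les constructions de blocs de cette section sont elles aussi purement mécaniques. L’esquisse ci-dessous construit le bloc TX1 dans le cas d’un 2-joint (le cas du complément de 2-joint s’obtient de façon analogue), en supposant que les étiquettes 'a', 'b', 'c' ne désignent pas déjà des sommets de T et en réutilisant la classe Trigraphe du début du chapitre :

```python
from itertools import combinations

def bloc_2_joint(T, A1, B1, C1, pair):
    """Bloc de décomposition TX1 associé à un 2-joint propre (X1, X2), avec
    X1 = A1 ∪ B1 ∪ C1 ; `pair` indique si le 2-joint est pair (sommet marqué c).
    On repart de T|X1 puis on ajoute la composante marquée."""
    A1, B1, C1 = set(A1), set(B1), set(C1)
    X1 = A1 | B1 | C1
    theta = {frozenset(p): T.t(*p) for p in combinations(X1, 2)}   # copie de T|X1
    marques = {'a', 'b'} | ({'c'} if pair else set())
    for v in X1:
        theta[frozenset((v, 'a'))] = 1 if v in A1 else -1          # a fortement complet à A1
        theta[frozenset((v, 'b'))] = 1 if v in B1 else -1          # b fortement complet à B1
        if pair:
            theta[frozenset((v, 'c'))] = -1                        # aucune autre arête vers X1
    if pair:
        theta[frozenset(('a', 'c'))] = 0                           # ac et cb optionnelles
        theta[frozenset(('c', 'b'))] = 0
        theta[frozenset(('a', 'b'))] = -1
    else:
        theta[frozenset(('a', 'b'))] = 0                           # ab arête optionnelle
    return Trigraphe(X1 | marques, theta)
```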
Les résultats les plus importants sont le théorème 2.10 et les lemmes 2.11 et 2.12 qui nous permettent de dire que les trigraphes de Berge apprivoisés se décomposent par 2-joint et complémentaire de 2- joint et que les blocs construits restent dans la classe. Ces résultats sont la base des trois chapitres suivants dans lesquels nous allons pouvoir prouver divers résultats sur la classe en décomposant nos graphes puis en appliquant une induction. Enfin, notons que les graphes de Berge sans skew-partition forment une sousclasse stricte des graphes de Berge. La figure 2.8 montre un graphe qui, si l’on se restreint à notre ensemble de décomposition, n’est décomposable que par skewpartition. Dans ce graphe, les arêtes de couleurs vont vers tous les sommets de possédant la même couleur. Un résultat important serait d’arriver, en étendant l’ensemble des décompositions autorisées, à se débarrasser des skew-partitions. 322.5. BLOCS DE DÉCOMPOSITION Figure 2.8 – Un graphe de Berge décomposable uniquement par skew-partition 33CHAPITRE 2. TRIGRAPHES DE BERGE APPRIVOISÉS 34Chapitre 3 Propriété du grand biparti Les résultats de ce chapitre ont été obtenus avec Aurélie Lagoutte, ils font l’objet d’un article [26] soumis à Discrete Mathematics. Une des propriétés les plus simples à obtenir à l’aide d’une décomposition par 2-joints est celle du grand biparti. On dit qu’un graphe G d’ordre n a la c-propriété du grand biparti, s’il existe deux ensembles de sommets V1 ⊆ V et V2 ⊆ V , tels que |V1| ≥ cn˙ , |V2| ≥ cn˙ et que V1 soit ou bien complet, ou bien anticomplet à V2. On dit qu’une classe de graphes C a la propriété du grand biparti s’il existe une constante c, telle que tout graphe G ∈ C d’ordre n a la c-propriété du grand biparti. Cette propriété est appelée propriété d’Erdős Hajnal forte dans [22]. Par exemple, pour tout entier k, les graphes sans Pk, ni Pk induit ont la propriété du grand biparti [5]. Cette propriété est intéressante car dans le cas des classes de graphes définies par un unique sous-graphe induit H interdit, elle implique la propriété d’Erdős-Hajnal [3, 22]. C’est à dire qu’il existe une constante δH qui dépend de H, telle que tout graphe G de la classe contient une clique ou un stable de taille |V (G) δH |. Nous allons montrer le résultat suivant : Théorème 3.1. Tout graphe de Berge sans skew-partition équilibrée a la propriété du (1/148)-grand biparti. Ce résultat n’implique pas la propriété d’Erdős-Hajnal, puisque la classe des graphes de Berge apprivoisés n’est pas close par sous-graphe induit, en effet la suppression de sommets peut créer des skew-partitions équilibrées. Cependant, il est facile de voir que la propriété d’Erdős-Hajnal est une conséquence directe 35CHAPITRE 3. PROPRIÉTÉ DU GRAND BIPARTI du théorème fort des graphes parfaits. En effet, pour tout graphe G, |V (G)| ≤ χ(G)α(G) et par perfection des graphes de Berge χ(G) = ω(G), on sait donc que pour tout graphe de Berge G |V (G)| ≤ ω(G)α(G) donc pour tout graphe de Berge, ou bien ω(G) ≥ q |V (G)| ou bien α(G) ≥ q |V (G)|. En fait nous n’avons même pas besoin du théorème fort des graphes parfaits, il suffit d’avoir l’inégalité |V (G)| ≤ ω(G)α(G), prouvée dès 1972 par Lovász [28]. Le théorème 3.1 est dans un certain sens un résultat négatif, en effet les graphes de Berge n’ont en général pas la propriété du grand biparti. Interdire les skewpartitions donne donc une classe sensiblement plus petite. 
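La c-propriété du grand biparti se vérifie facilement pour des témoins donnés ; l’esquisse suivante (graphe représenté par dictionnaire d’adjacence, V1 et V2 supposés disjoints) teste qu’une paire (V1, V2) est bien un témoin :

```python
def temoin_grand_biparti(graphe, V1, V2, c):
    """Vérifie que |V1|, |V2| >= c*n et que V1 est complet ou anticomplet à V2."""
    n = len(graphe)
    V1, V2 = set(V1), set(V2)
    if len(V1) < c * n or len(V2) < c * n or V1 & V2:
        return False
    complet = all(v in graphe[u] for u in V1 for v in V2)
    anticomplet = all(v not in graphe[u] for u in V1 for v in V2)
    return complet or anticomplet
```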
Comme l’a observé Seymour [34], les graphes de comparabilité non triviaux (ici qui ne sont ni des cliques, ni des graphes bipartis) ont tous une skew-partition. En fait Chvátal à démontré [16] que les graphes parfaitement ordonables, c’est à dire une super classe des graphes de comparabilité sont, ou bien biparti, ou bien admettent un starcutset dans leur complémentaire. Les graphes de comparabilité étant des graphes de Berge, c’est une des classes les plus intéressantes à regarder pour comprendre les restrictions posées par l’interdiction des skew-partitions équilibrées. Le résultat suivant est le théorème 2 de [21]. Théorème 3.2. Soit ε ∈ (0, 1). Pour tout entier n suffisamment grand, il existe un ordre partiel P sur n éléments tel qu’aucun élément de P n’est comparable à n ε autres éléments de P et pour tout A, B ⊂ P tels que |A| = |B| > 14n ε log2 n , il y a un élément de A comparable à un élément de B. En prenant les graphes de comparabilité des ordres partiels fournis par ce théorème, nous avons une classe de graphes parfaits qui n’a pas la propriété du grand biparti. À partir d’un ordre partiel, on construit son graphe de comparabilité de la manière suivante, les sommets sont les éléments de l’ordre et il y un arête entre deux sommets si et seulement si les éléments qu’ils représentent sont comparables dans l’ordre partiel. Le but de ce chapitre est de montrer que les graphes de Berge apprivoisés ont la propriété du grand biparti. Pour cela nous allons généraliser le problème aux trigraphes de Berge apprivoisés. Commençons par étendre la définition. Nous verrons ensuite comment généraliser ce résultat aux classes construites par k-joints généralisés. 363.1. GRAND BIPARTI DANS LES TRIGRAPHES DE BERGE APPRIVOISÉS 3.1 Grand biparti dans les trigraphes de Berge apprivoisés Soit une constante 0 < c < 1, un trigraphe T sur n sommet a la c-propriété du grand biparti s’il existe V1, V2 ⊆ V (T) tels que |V1|, |V2| ≥ cn˙ et que V1 est fortement complet ou anticomplet à V2. Il est immédiat de voir que pour les trigraphes sans arêtes optionnelles, la définition coïncide avec celle sur les graphes. Une autre remarque importante est que la propriété du grand biparti est une propriété autocomplémentaire : un trigraphe T a la propriété du grand biparti si et seulement si son complémentaire T a aussi la propriété du grand biparti. Nous rappelons qu’être un trigraphe de Berge apprivoisé est aussi une propriété auto-complémentaire. Lors de nos inductions nous pouvons donc ramener le cas des complémentaires de 2- joints au cas des 2-joints. Nous allons démontrer le résultat suivant : Théorème 3.3. Soit T un trigraphe de Berge apprivoisé, tel que n = |V (T)| ≥ 3. Alors T a la (1/148)-propriété du grand biparti. Pour les besoins de l’induction nous devons encore étendre notre problème aux trigraphes de Berge apprivoisés pondérés. Dans la suite, on associe à tout trigraphe T une fonction de poids w : V (T) ∪ E ∗ (T) → N, telle que w(v) > 0 pour v ∈ V (T) et que toute arête optionnelle de poids non nul soit étiquetée avec “2-joint” ou “complément 2-joint”. Pour tout sous-ensemble V 0 ⊆ V (T), on note w(V 0 ) = P v∈V 0 w(v) + P u,v∈V 0 w(uv). Avec ces notations, le poids W d’un trigraphe T est la somme w(V (T)) de ses poids, et étant donné une constante c < 1, on dit que T a la c-propriété du grand biparti, s’il existe V1, V2 ⊆ V (T), tels que w(V1), w(V2) ≥ cW˙ et V1 est fortement complet ou fortement anticomplet à V2. 
Remarquons que si w est une fonction de poids telle que w(v) = 1 pour tout sommet v ∈ V (T) et que w(uv) = 0 pour toute arête optionnelle uv ∈ E ∗ (T), nous obtenons la notion précédente. Remarquons également que la propriété du grand biparti est stable par réalisation comme le montre le lemme suivant. Lemme 3.4. Soit CT une classe de trigraphes ayant la c-propriété du grand biparti, alors la classe des graphes C = {G, G est une réalisation de T ∈ CT } a la c-propriété du grand biparti. Démonstration. Si un trigraphe T a la propriété du grand biparti, il existe une paire d’ensembles de sommets V1, V2 ⊆ V (T) témoins de la propriété. Dans toute 37CHAPITRE 3. PROPRIÉTÉ DU GRAND BIPARTI réalisation de T, V1, V2 reste une paire de témoins de la propriété du grand biparti. L’idée de la démonstration du théorème 3.1 est de contracter les sommets de T, tout en préservant les poids jusqu’à obtenir un trigraphe basique. Soit T un trigraphe avec une fonction de poids w, (X1, X2) un 2-joint propre dans T ou dans son complément T, tel que w(X1) ≥ w(X2), et (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2). Définissons le trigraphe T 0 avec la fonction de poids w 0 comme la contraction de T, noté (T, w) ❀ (T 0 , w0 ). T 0 est le bloc de décomposition TX1 et sa fonction de poids w 0 est définie comme suit : — Sur les sommets de X1, on définit w 0 = w. — Sur les sommets marqués a, b, on définit w 0 (a) = w(A2) et w 0 (b) = w(B2) — Si le sommet marqué c existe, on définit w 0 (c) = w(C2). Sinon, on définit w 0 (ab) = w(C2) et on étiquette ab en fonction du type de (X1, X2). On définit a (resp. b, v ∈ X1) comme étant le représentant de A2 (resp. B2, v) et pour tout sommet v ∈ A2 (resp. B2, X1) on note v → a (resp. v → b, v → v). Suivant l’existence de c, c ou ab est le représentant de C2 et pour tout sommet v ∈ C2 on note v → c ou v → ab. Si (T, w) ❀ (T 0 , w0 ) et V 0 ⊆ V (T 0 ), on note également V → V 0 si V = {v ∈ V (T)|∃v 0 ∈ V 0 v → v 0∨ ∃u 0 v 0 ∈ V 02 v → u 0 v 0}. On note par →∗ (resp. ❀∗ ) la clôture transitive de → (resp. ❀). Lemme 3.5. Si T est un trigraphe avec une fonction de poids w et (T, w) ❀∗ (T 0 , w0 ) alors : — w 0 (V (T 0 )) = w(V (T)) — Si T 0 a la c-propriété du grand biparti, alors T aussi. Démonstration. La première partie du résultat est claire. Supposons que T 0 ait la c-propriété du grand biparti. Alors il existe W1, W2 ⊆ V (T), tels que W1 →∗ V1 et W2 →∗ V2. Puisque la contraction ne crée ni adjacence forte, ni antiadjacent forte qui n’existaient pas auparavant, si V1, V2 sont fortement complets (resp. anticomplets), W1, W2 sont également fortement complets (resp. anticomplets). De plus w(W1) = w 0 (V1) et w(W2) = w 0 (V2). Donc la paire (W1, W2) prouve que T 0 a la c-propriété du grand biparti. 383.1. GRAND BIPARTI DANS LES TRIGRAPHES DE BERGE APPRIVOISÉS Lemme 3.6. Soit 0 < c < 1/6. Soit T un trigraphe de Berge apprivoisé, et w sa fonction de poids telle que w(x) < cn˙ pour tout x ∈ V (T) ∪ E ∗ (T). Ou bien T a la c-propriété du grand biparti, ou bien il existe un trigraphe basique T 0 avec sa fonction de poids associée w 0 tel que (T, w) ❀∗ (T 0 , w0 ) et pour tout x ∈ V (T 0 ) ∪ E ∗ (T 0 ), w(x) < cn˙ . Démonstration. On prouve le résultat par induction sur T, en utilisant le résultat de décomposition sur les trigraphes de Berge bigames (Théorème 2.10). Si T n’est pas basique, alors il admet un 2-joint propre ou le complément d’un 2-joint propre. Le problème étant auto-complémentaire, nous traitons uniquement le cas d’un 2- joint (X1, X2) de T. 
Par symétrie, supposons que w(X1) ≥ w(X2) et donc w(X1) ≥ n/2. Soit (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2). Par définition de X1, max(w(A1), w(B1), w(C1)) ≥ n/6 ≥ cn˙ . Donc si max(w(A2), w(B2), w(C2)) ≥ cn˙ , on a la c-propriété du grand biparti. Sinon, (T, w) ❀ (T 0 , w0 ) avec w 0 (x) < cn˙ pour tout x ∈ V (T 0 ) ∪ E ∗ (T 0 ) et T 0 est un trigraphe de Berge apprivoisé d’après 2.11. On peut donc appliquer l’hypothèse d’induction et ; ou bien trouver un trigraphe basique T 00, tel que (T, w) ❀ (T 0 , w0 ) ❀∗ (T 00, w00) et w 00(x) < cn˙ pour tout x ∈ V (T 00) ∪ E ∗ (T 00); ou bien T 0 a la c-propriété du grand biparti, et donc T aussi d’après le lemme 3.5. Si T est un trigraphe basique avec une fonction de poids w, on veut transformer T en un trigraphe ayant des poids uniquement sur ses sommets en transférant les poids des arêtes optionnelles sur de nouveaux sommets. On définit l’extension T 0 de T comme le trigraphe avec la fonction de poids w 0 : V (T 0 ) → N\{0} définie comme suit : V (T 0 ) = V (T) ∪ {vab|ab ∈ E ∗ (T), w(ab) > 0}, w 0 (v) = w(v) pour v ∈ V (T), l’étiquette de ab est donnée par vab, w 0 (vab) = w(ab), θ(avab) = θ(bvab) = 0, et si u ∈ V \ {a, b}, alors θ(uvab) = −1 si l’étiquette de ab est “2-joint” et θ(uvab) = 1 si l’étiquette est “complément 2-joint”. Remarquons que ab était le représentant de C2 de l’affectation (A1, B1, C1, A2, B2, C2) du 2-joint, et que vab prend sa place en tant que contraction de C2, puisque qu’il a le même poids et les même adjacences et antiadjacences fortes vis à vis du reste du trigraphe. Il n’était pas possible de garder la contraction de C2 à chaque étape et en même temps de rester apprivoisé, ce qui est la clé du lemme 3.6. Remarquons finalement que les nouveaux sommets ajoutés étiquetés “2-joint” forment un stable et que ceux étiquetés “complément de 2-joint” forment une clique. 39CHAPITRE 3. PROPRIÉTÉ DU GRAND BIPARTI Lemme 3.7. Soit T un trigraphe de Berge apprivoisé et w sa fonction de poids associée, tel que w(uv) = 0 pour tout uv ∈ E ∗ (T). Supposons que (T, w) ❀∗ (T 0 , w0 ) et soit T 00 avec sa fonction de poids w 00 l’extension de T 0 . Si T 00 a la c-propriété du grand biparti, alors T aussi. Démonstration. Soit (V1, V2) deux sous-ensembles de T 00 prouvant que T 00 a la c propriété du grand biparti. Soit X1 ⊆ V1 un sous-ensemble de sommets de V1 étiquetés. Alors V1\X1 ⊆ V (T 0 ) et il existe W1 ⊆ V (T), tel que W1 →∗ V1\X1. Soit Y1 = {v ∈ V (T)|∃vab ∈ X1, v → ab}. Alors w(W1 ∪ Y1) = w 00(V1) ≥ cn˙ . On définit de la même manière X2, W2 et Y2 et nous avons les même inégalités w(W2 ∪Y2) = w 00(V2) ≥ cn˙ . De plus, W1∪Y1 est fortement complet (resp. anticomplet) à W2∪Y2 si V1 est fortement complet (resp. anticomplet) à V2. Donc T a la c-propriété du grand biparti. Lemme 3.8. Si T 0 est l’extension d’un trigraphe basique T de Berge apprivoisé et que sa fonction de poids associée w0 vérifie w0(x) < cn˙ pour tout x ∈ V (T) ∪ E ∗ (T) et si c ≤ 1/148, alors T 0 a la c-propriété du grand biparti. Avant de pouvoir démontrer ce résultat nous avons besoin d’un lemme technique. Un graphe G a m arêtes-multiples si son ensemble d’arêtes E est un multiensemble de V 2 \ {xx|x ∈ V (G)} de taille m : il peut y avoir plusieurs arêtes entre deux sommets distincts mais pas de boucle. Une arête uv a deux extrémités u et v. Le degré de v ∈ V (G) est d(v) = |{e ∈ E|v est une extrémité de e}|. Lemme 3.9. Soit G un graphe biparti (A, B) avec m arêtes multiples et de degré maximum cm˙ avec c < 1/3. 
Alors il existe des sous-ensembles d’arêtes E1, E2 de G, tels que |E1|, |E2| ≥ m/48 et si e1 ∈ E1, e2 ∈ E2 alors e1 et e2 n’ont pas d’extrémité commune. Démonstration. Si m ≤ 48, il suffit de trouver deux arêtes sans extrémité commune. De telles arêtes existent toujours puisque le degré maximum est borné par cm˙ , donc aucun sommet ne peut être une extrémité commune à toutes les arêtes. Sinon si m > 48, considérons une partition aléatoire uniforme (U, U0 ) des sommets. Pour toute paires d’arêtes distinctes e1, e2, considérons la variable aléatoire Xe1,e2 = 1 si (e1, e2) ∈ (U 2 × U 02 ) ∪ (U 02 × U 2 ), et 0 sinon. Si e1 et e2 ont au moins une extrémité commune, alors Pr(Xe1,e2 = 1) = 0, sinon Pr(Xe1,e2 = 1) = 1/8. 403.1. GRAND BIPARTI DANS LES TRIGRAPHES DE BERGE APPRIVOISÉS Nous définissons alors : p = |{(e1, e2) ∈ E 2 |e1 et e2 n’ont pas d’extrémité commune}| pA = |{(e1, e2) ∈ E 2 |e1 et e2 n’ont pas d’extrémité commune dans A}| qA = |{(e1, e2) ∈ E 2 |e1 6= e2 et e1 et e2 n’ont pas d’extrémité commune dans A}| Nous définissons de la même manière pB et qB. Supposons que p ≥ 1 3  m 2  . Alors : E( X e1,e2∈E e16=e2 Xe1,e2 ) = X e1,e2∈E e16=e2 Pr(Xe1,e2 = 1) = p 8 ≥ 1 24 m 2 ! . Donc il existe une partition (U, U0 ) telle que : X e1,e2∈E e16=e2 Xe1,e2 ≥ 1 24 m 2 ! . Soit E1 = E ∩ U 2 et E2 = E ∩ U 02 . Alors |E1|, |E2| ≥ m/48, sinon : X e1,e2∈E e16=e2 Xe1,e2 = |E1| · |E2| < m 48 ·  1 − 1 48 m ≤ 1 24 m 2 ! , c’est une contradiction. Donc E1 et E2 vérifient les hypothèses du lemme. Il nous reste donc à démontrer que p ≥ 1 3  m 2  . Le résultat intermédiaire clé est que pA ≥ 2qA. Numérotons les sommets de A de 1 à |A| et rappelons que d(i) est le degré de i. Alors P|A| i=1 d(i) = m et : pA = 1/2   X |A| i=1 d(i)(m − d(i))   = 1/2     X |A| i=1 d(i)   2 − X |A| i=1 (d(i))2   = 1/2   X |A| i,j=1 i6=j d(i)d(j)   qA = 1/2   X |A| i=1 d(i)(d(i) − 1)   = 1/2   X |A| i=1 (d(i))2 − m   41CHAPITRE 3. PROPRIÉTÉ DU GRAND BIPARTI Par conséquent : 2pA − (4qA + 2m) = X |A| i=1 d(i)   X |A| j=1 j6=i d(j) − 2d(i)   = X |A| i=1 d(i)   X |A| j=1 d(j) − 3d(i)   = X |A| i=1 d(i) (m − 3d(i)) Mais pour tout i, d(i) ≤ cm˙ ≤ m/3 donc m − 3d(i) ≥ 0. Par conséquent, 2pA − (4qA + 2m) ≥ 0 et donc pA ≥ 2qA. Mais pA + qA =  m 2  donc qA ≤ 1 3  m 2  . De la même manière, pB ≥ 2qB et qB ≤ 1 3  m 2  . Finalement : p ≥ m 2 ! − qA − qB ≥ m 2 ! − 2 3 m 2 ! ≥ 1 3 m 2 ! . Démonstration du lemme 3.8. Soit w la fonction de poids associée à T 0 . Puisque le problème est auto-complémentaire, il suffit de démontrer le résultat si T est un trigraphe biparti, un trigraphe doublé ou un line trigraphe. Si T est un trigraphe doublé, alors T a une bonne partition (X, Y ). En fait, X est l’union de deux stables X1, X2 et Y est l’union de deux cliques Y1 et Y2. Donc T 0 est l’union de trois stables X1, X2, X3 (X3 est l’ensemble des sommets étiquetés “2-joint”) et de trois cliques Y1, Y2, Y3 (Y3 est l’ensemble des sommets étiquetés “complément de 2-joint”). Il existe un ensemble Z parmi ces six ensembles de taille au moins n/6. Puisque chaque sommet de Z a poids au plus cn˙ , on peut partitionner Z en (Z1, Z2) avec w(Z1), w(Z2) ≥ n/12 − cn˙ ≥ cn˙ et Z1 est ou bien fortement complet à Z2 (c’est le cas si Z est une clique forte) ou bien fortement anticomplet à Z2 (c’est le cas si Z est un stable fort). Le même argument s’applique si T est un trigraphe biparti, puisque c’est alors l’union de deux stables forts. La démonstration est plus compliquée si T est un line trigraphe. 
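Ce dernier cas s'appuie sur le lemme 3.9 démontré ci-dessus. À titre d'illustration, l'esquisse suivante retrouve, par énumération des bipartitions (U, U′) des sommets, deux ensembles d'arêtes E1 et E2 sans extrémité commune sur une très petite instance (avec des arêtes simples plutôt que multiples, pour simplifier). C'est une vérification naïve de la conclusion du lemme, pas l'argument probabiliste de sa démonstration, et les noms utilisés sont des choix d'exemple.

```python
from itertools import product

def paires_anticompletes_d_aretes(sommets, aretes):
    """Cherche, par enumeration des bipartitions (U, U') des sommets, deux
    ensembles E1 = E ∩ U^2 et E2 = E ∩ U'^2 de taille >= m/48 et sans
    extremite commune (enumeration exponentielle, petites instances)."""
    sommets = list(sommets)
    m = len(aretes)
    seuil = m / 48   # pour m <= 48, cela revient a exiger au moins une arete de chaque cote
    meilleur = None
    for choix in product([0, 1], repeat=len(sommets)):
        U = {v for v, b in zip(sommets, choix) if b == 0}
        E1 = [e for e in aretes if set(e) <= U]
        E2 = [e for e in aretes if not (set(e) & U)]
        if len(E1) >= seuil and len(E2) >= seuil:
            if meilleur is None or min(len(E1), len(E2)) > min(meilleur[0], meilleur[1]):
                meilleur = (len(E1), len(E2), U)
    return meilleur

# Biparti (A = {0,1,2}, B = {3,4,5}) avec un couplage parfait : m = 3.
print(paires_anticompletes_d_aretes(range(6), [(0, 3), (1, 4), (2, 5)]))
```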
Soit X le stable fort des sommets étiquetés “2-joint” dans T, Y la clique forte des sommets étiquetés “complément de 2-joint”, et Z = V (T 0 ) \ (X ∪ Y ). Par définition du line trigraphe, la réalisation complète de T 0 |Z est le line graphe d’un graphe biparti G, et toute clique de T 0 |Z de taille au moins trois est une clique forte. Si vab ∈ X, 423.1. GRAND BIPARTI DANS LES TRIGRAPHES DE BERGE APPRIVOISÉS alors la réalisation complète de T 0 |(Z ∪ {vab}) est aussi le line graphe d’un graphe biparti : en effet, vab est semiadjacent à exactement a et b, et antiadjacent au reste des sommets. Par hypothèse sur les cliques de taille trois de T, il ne peut y avoir de sommet d ∈ Z adjacent à la fois à a et à b. Cela veut dire que l’extrémité commune x de a et b dans G a degré exactement deux. Ajoutons l’arête vab entre x et un nouveau sommet, alors la réalisation complète de T 0 |(Z ∪ {vab}) est le line graphe d’un graphe biparti. En itérant ce procédé, la réalisation complète T 0 |(Z ∪ X) est également le line graphe d’un graphe biparti. Distinguons alors deux cas : s’il existe une clique K de poids w(K) ≥ 4cn˙ dans T 0 , alors nous pouvons partitionner K en (K1, K2) avec w(K1), w(K2) ≥ 4cn/˙ 2 − cn˙ ≥ cn˙ et K1 est fortement complet à K2. Sinon, remarquons que dans Z∪X, toute composante optionnelle a au plus trois sommets. Pour chaque composante prenons le sommets de poids maximal pour obtenir un ensemble de sommets V 0 ⊆ Z ∪ X sans arête optionnelle entre eux, c’est à dire T 0 |V 0 est un graphe. De plus T 0 |V 0 est un sous-graphe de la réalisation complète de T 0 |(Z ∪ X) et donc est le line graphe d’un graphe biparti G. Au lieu de garder les poids strictement positifs sur les arêtes de G, nous transformons chaque arête xy de poids m en m arêtes xy. L’inégalité w(K) ≤ 4cn˙ pour toute clique K implique que d’une part le degré maximum d’un sommet de G est 4cn˙ , d’autre part, n 0 = w(V 0 ) ≥ (n − w(Y ))/3 ≥ n(1 − 4c)/3, puisque Y est une clique. Le lemme 3.9 prouve l’existence de deux sous-ensembles d’arêtes E1, E2 de G tels que |E1|, |E2| ≥ n 0/48 et si e1 ∈ E1, e2 ∈ E2 alors e1 et e2 n’ont pas d’extrémité commune. Cela correspond dans T 0 un témoin anticomplet de la propriété du grand biparti. Démonstration du Théorème 3.3. Soit c = 1/148. Si n < 1/c, il existe toujours une arête forte ou une antiarête forte uv par définition des trigraphes bigames, et nous définissons V1 = {u} et V2 = {v}. Sinon, donnons à T la fonction de poids w, telle que w(v) = 1 pour tout v ∈ V (T) (en particulier, w(V (T)) = n). Appliquons le lemme 3.6 à T pour obtenir ou bien la c propriété du grand biparti, ou bien pour contracter T : il existe un trigraphe basique T 0 tel que (T, w) ❀∗ (T 0 , w0 ). Appliquons alors le lemme 3.8 pour avoir la c-propriété du grand biparti dans l’extension de T 0 . Grâce au lemme 3.7 nous avons bien la c-propriété du grand biparti. 43CHAPITRE 3. PROPRIÉTÉ DU GRAND BIPARTI 3.2 Clôture par k-joints généralisés Dans cette section nous allons voir comment nous pouvons utiliser une géné- ralisation des 2-joints afin d’obtenir un résultat analogue. Les résulats 2.10, 2.11 et 2.12 montrent qu’en fait les trigraphes de Berge apprivoisés sont la clôture par 2-joints et complément de 2-joints des classes basiques (trigraphes biparti, line trigraphes, trigraphes doublés et leurs complémentaires). Ces classes basiques ont la propriété du grand biparti et le théorème 3.3 montre que prendre la clôture de cette classe préserve la propriété du grand biparti. 
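La démonstration du théorème 3.3 repose sur la comptabilité des poids lors des contractions successives (T, w) ❀ (T′, w′) définies plus haut. L'esquisse suivante, purement illustrative, montre cette comptabilité pour une contraction le long d'un 2-joint, l'affectation (A1, B1, C1, A2, B2, C2) étant supposée déjà connue ; la représentation par dictionnaires et le paramètre c_existe sont des choix d'exemple et ne viennent pas du texte.

```python
def poids_du_bloc(affectation, poids, c_existe):
    """Poids attribues aux sommets marques lors de la contraction
    (T, w) ~> (T', w') : w'(a) = w(A2), w'(b) = w(B2), puis w'(c) = w(C2)
    si le sommet marque c existe, et w'(ab) = w(C2) sinon."""
    w = lambda S: sum(poids[v] for v in S)
    resultat = {'a': w(affectation['A2']), 'b': w(affectation['B2'])}
    cible = 'c' if c_existe else 'ab'   # sinon, le poids est porte par l'arete optionnelle ab
    resultat[cible] = w(affectation['C2'])
    return resultat

# Exemple jouet : A2 = {4, 5}, B2 = {6}, C2 = {7, 8}, poids unitaires.
aff = {'A2': {4, 5}, 'B2': {6}, 'C2': {7, 8}}
print(poids_du_bloc(aff, {v: 1 for v in range(4, 9)}, c_existe=True))
# {'a': 2, 'b': 1, 'c': 2}
```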
Dans cette section nous allons voir comment à partir d’une classe de graphes héréditaire nous pouvons obtenir une classe de trigraphes, puis comment clore cette classe de trigraphes par des opérations similaires aux 2-joints et aux complémentaires de 2-joints. La clôture sera une classe de trigraphe. D’après le lemme 3.4 si la clôture a la propriété du grand biparti alors la classe de graphes des réalisations des trigraphes de la clôture aussi. En fait dans ce qui suit un k-joint avec k = 2 sera analogue à la fois à l’opération de 2-joint et à celle du complémentaire de 2-joint. Dans la section suivante nous verrons que si la classe de graphes basiques a la propriété du grand biparti, cette propriété est conservée dans la clôture par kjoints. Il est important de remarquer que prendre la clôture des graphes basiques du théorème de décomposition des graphes de Berge par k-joints avec k = 2 ne donne pas exactement la classe des trigraphes de Berge apprivoisés (en particulier les trigraphes de la clôture ne sont pas tous de Berge). De plus les constantes obtenues pour les k-joints sont moins bonnes que celles obtenue pour les trigraphes de Berge apprivoisés. Soit C une classe de graphes qui doit être vue comme une classe “basique” de graphes. Pour tout entier k ≥ 1, on construit la classe de trigraphes C ≤k de la manière suivante : un trigraphe T appartient à C ≤k si et seulement s’il existe une partition X1, . . . , Xr de V (T) telle que : — pour tout 1 ≤ i ≤ r, 1 ≤ |Xi | ≤ k. — pour tout 1 ≤ i ≤ r,  Xi 2  ⊆ E ∗ (T). — pour tout 1 ≤ i 6= j ≤ r, Xi × Xj ∩ E ∗ (T) = ∅. — il existe un graphe G dans C tel que G est une réalisation de T. En d’autre termes, on part du graphe G de C, on partitionne ses sommets en petites parties (de taille au plus k), et on change toutes les adjacences à l’intérieur de ces parties en arêtes optionnelles. On définit alors le k-joint généralisé entre deux trigraphes T1 et T2, qui géné- 443.3. GRAND BIPARTI DANS LES CLASSES CLOSES PAR K-JOINTS ralise le 2-joint et qui est similaire au H-joint [6]. Soit T1 et T2 deux trigraphes vérifiant les propriétés suivantes avec 1 ≤ r, s ≤ k : — V (T1) est partitionné en (A1, . . . , Ar, B = {b1, . . . , bs}) et Aj 6= ∅ pour tout 1 ≤ j ≤ r. — V (T2) est partitionné en (B1, . . . , Bs, A = {a1, . . . , ar}) et Bi 6= ∅ pour tout 1 ≤ i ≤ s. —  B 2  ⊆ E ∗ (T1) et  A 2  ⊆ E ∗ (T2), ce qui veut dire que A et B contiennent uniquement des arêtes optionnelles. — Pour tout 1 ≤ i ≤ s, 1 ≤ j ≤ r, bi et aj sont ou bien fortement complets, ou bien fortement anticomplets à respectivement Aj et Bi . En d’autre terme, il existe un graphe biparti qui décrit les adjacences entre B et (A1, . . . , Ar), et le même graphe biparti décrit les adjacences entre (B1, . . . , Bs) et A. Alors le k-joint généralisé de T1 et T2 est le trigraphe T ou V (T) = A1 ∪ . . . ∪ Ar ∪ B1 ∪ . . . ∪ Bs. Soit θ1 et θ2 les fonctions d’adjacences de respectivement T1 et T2. Autant que possible la fonction d’adjacence θ de T étend θ1 et θ2 (c’est à dire θ(uv) = θ1(uv) pour uv ∈  V (T1)∩V (T) 2  et θ(uv) = θ2(uv) pour uv ∈  V (T2)∩V (T) 2  ), et pour a ∈ Aj , b ∈ Bi , θ(ab) = 1 si bi et Aj sont fortement complets dans T1 (ou de manière équivalente, si aj et Bi) sont fortement complets dans T2), et −1 sinon. On définit finalement C ≤k comme la plus petite classe contenant C ≤k et close par k-joints généralisés. 
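La construction de C^{≤k} à partir d'un graphe G de C peut s'esquisser comme suit : toute paire de sommets interne à une partie de la partition devient une arête optionnelle (θ = 0), et entre deux parties distinctes l'adjacence de G est conservée (θ = 1 pour une arête forte, −1 pour une antiarête forte). La représentation de θ par un dictionnaire est un choix d'illustration, pas une implémentation de référence.

```python
def trigraphe_depuis_graphe(aretes, partition):
    """Construit un trigraphe de C<=k a partir d'un graphe G de C :
    les paires internes a une partie deviennent des aretes optionnelles
    (theta = 0), les autres paires heritent de l'adjacence de G
    (theta = 1 pour une arete, -1 pour une antiarete)."""
    E = {frozenset(e) for e in aretes}
    partie = {v: i for i, X in enumerate(partition) for v in X}
    sommets = sorted(partie)
    theta = {}
    for i, u in enumerate(sommets):
        for v in sommets[i + 1:]:
            if partie[u] == partie[v]:
                theta[(u, v)] = 0                      # arete optionnelle
            else:
                theta[(u, v)] = 1 if frozenset((u, v)) in E else -1
    return theta

# Exemple : un chemin 0-1-2-3 et la partition {0,1}, {2}, {3} (k = 2).
print(trigraphe_depuis_graphe([(0, 1), (1, 2), (2, 3)], [{0, 1}, {2}, {3}]))
# {(0, 1): 0, (0, 2): -1, (0, 3): -1, (1, 2): 1, (1, 3): -1, (2, 3): 1}
```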
3.3 Grand biparti dans les classes closes par kjoints En fait la méthode de contraction des 2-joints utilisée dans la section précédente peut être généralisée aux k-joints. Nous avons seulement besoin que la classe C des graphes basiques soit close par sous-graphes induits pour avoir le résultat sur C ≤k . Nous obtenons le résultat suivant : Théorème 3.10. Soit k ∈ N\{0}, 0 < c < 1/2 et C une classe de graphes telle que pour tout G ∈ C et pour toute fonction de poids w : V (G) → N \ {0} telle que w(v) < cn˙ pour tout v ∈ V (G), G a la c-propriété du grand biparti. Alors tout trigraphe T de C ≤k , avec au moins k/c sommets, a la (c/k)-propriété du grand biparti. Pour démontrer ce théorème, nous définissons la contraction d’un k-joint géné- ralisé. Paradoxalement cette contraction est plus simple que dans le cas du 2-joint 45CHAPITRE 3. PROPRIÉTÉ DU GRAND BIPARTI car il n’y a ni poids ni étiquette sur les arêtes optionnelles : soit T un trigraphe avec une fonction de poids w : V (T) → N\{0} et supposons que T est le k-joint généralisé de T1 et de T2. Nous suivons les notations introduites dans la définition des kjoints généralisés. En particulier, V (T) est partitionné en (A1, . . . , Ar, B1, . . . , Bs). Quitte à échanger T1 et T2, supposons que w(∪ r j=1Aj ) ≥ w(∪ s i=1Bi). Alors la contraction de T est le trigraphe T 0 = T1 avec les poids w 0 définis par w 0 (v) = w(v) si v ∈ ∪r j=1Aj , et w 0 (bi) = w(Bi) pour 1 ≤ i ≤ s. On note cette opération de contraction par (T, w) ❀ (T 0 , w0 ). Remarquons que le lemme 3.5 est toujours vrai dans ce contexte, et nous obtenons le lemme suivant : Lemme 3.11. Soit 0 < c < 1/(2k). Soit (T, w) un trigraphe pondéré de C ≤k tel que w(v) < cn˙ pour tout v ∈ V (T). Ou bien T a la c-propriété du grand biparti, ou bien il existe un trigraphe T 0 ∈ C≤k de poids w 0 tel que (T, w) ❀∗ (T 0 , w0 ) et pour tout v ∈ V (T 0 ), w(v) < cn˙ . Démonstration. La démonstration est similaire au lemme 3.6. On prouve le résultat par induction sur T. Avec les notations précédentes, T se partitionne en V (T) = A1 ∪ . . . ∪ Ar ∪ B1 ∪ . . . ∪ Bs Par symétrie, supposons que w(∪Aj ) ≥ w(∪Bi) et donc w(∪Aj ) ≥ n/2. Par définition de X1, max(w(Aj )) ≥ n/(2k) ≥ cn˙ . Donc si max(w(Bj )) ≥ cn˙ , on a la c-propriété du grand biparti. Sinon, (T, w) ❀ (T 0 , w0 ) avec w 0 (x) < cn˙ pour tout x ∈ V (T 0 )∪ E ∗ (T 0 ) et T 0 ∈ C ≤k par construction de la classe. On peut donc appliquer l’hypothèse d’induction et ; ou bien trouver un trigraphe basique T 00, tel que (T, w) ❀ (T 0 , w0 ) ❀∗ (T 00, w00) et w 00(x) < cn˙ pour tout x ∈ V (T 00) ∪ E ∗ (T 00); ou bien T 0 a la c-propriété du grand biparti, et donc T aussi d’après le lemme 3.5. Démonstration du Théorème 3.10. Soit T un trigraphe de C ≤k . On définit les poids w(v) = 1 pour tout v ∈ V (T). En appliquant le lemme 3.11 on a, ou bien la (c/k) propriété du grand biparti, ou bien il existe un trigraphe T 0 ∈ C≤k tel que (T, w) ❀∗ (T 0 , w0 ) et w 0 (v) < (c/k).n pour tout v ∈ V (T 0 ). Pour chaque composante optionnelle de T 0 , on choisit le sommet de plus grand poids et on supprime les autres. On obtient un graphe G ∈ C et on définit w 00(v) = w 0 (v) sur ses sommets. Remarquons que w 00(V (G)) ≥ w 0 (V (T 0 ))/k puisque toute composante optionnelle a taille au moins k, et pour tout v ∈ V (G), w 00(v) < (c/k).w0 (V (T 0 )) ≤ cw˙ 00(V (G)). Alors il existe V1, V2 ⊆ V (G) tels que w 00(V1), w00(V2) ≥ cw˙ 00(V (G)) et V1 est ou bien fortement complet ou bien fortement anticomplet à V2. 
Alors w 0 (V1), w0 (V2) ≥ (c/k).w0 (V (T 0 )) et V1 est ou bien fortement complet ou bien fortement anticomplet 463.3. GRAND BIPARTI DANS LES CLASSES CLOSES PAR K-JOINTS à V2 dans T 0 . Donc T 0 a la (c/k)-propriété du grand biparti. On conclut alors à l’existence d’une paire d’ensembles témoins de la propriété du grand biparti dans T avec le lemme 3.5. Les résultats de ce chapitre ne sont pas spécifiques aux trigraphes de Berge apprivoisés. En effet la partie la plus technique est la démonstration du cas basique (lemme 3.8). La partie induction, c’est à dire, la construction par joint est finalement assez simple. En fait, si le joint est équilibré, il est normal que l’on ait la propriété du grand biparti entre les deux côtés du joint. Cependant comme les graphes de Berge n’ont pas tous cette propriété, c’est une autre manière de voir que notre classe est une sous-classe stricte des graphes de Berge. 47CHAPITRE 3. PROPRIÉTÉ DU GRAND BIPARTI 48Chapitre 4 Clique-Stable séparateur Les résultats de ce chapitre ont été obtenus avec Aurélie Lagoutte, ils font l’objet d’un article [26] soumis à Discrete Mathematics. Une propriété très proche de la propriété du grand biparti est celle de la cliquestable séparation. Commençons par définir ce qu’est un clique-stable séparateur. Soit G un graphe, on dit qu’une partition en deux ensembles C = (U, V ) du graphe G est une coupe. Un ensemble de coupes est un clique-stable séparateur de G si pour toute clique K et tout stable S de G tels que K ∩ S = ∅ il existe une coupe C = (U, V ) du graphe G telle que K ⊆ U et S ⊆ V . Bien entendu, pour tout graphe G il existe toujours un clique-stable séparateur. La question intéressante est de savoir s’il existe un clique-stable séparateur contenant un nombre polynomial de coupes. On dit donc qu’une classe C graphes a la propriété de la clique-stable séparation s’il existe un polynôme P tel que pour tout graphe G ∈ C d’ordre n, G admette un clique-stable séparateur de taille P(n). Dans ce chapitre nous allons montrer que les graphes de Berge apprivoisés admettent un clique-stable séparateur de taille O(n 2 ). Nous généraliserons ce résultat aux classes de trigraphes construites par k-joints généralisés comme défini dans le chapitre 3. La propriété de la clique-stable séparation a été introduite par Yannakakis [38] dans les années 90 lorsqu’il étudiait le problème de l’existence d’une formulation étendue pour le polytope des stables (l’enveloppe convexe des fonctions caracté- ristiques de ses stables). C’est à dire un polytope plus simple en dimension plus grande mais tel que sa projection soit le polytope des stables. Il s’est intéressé à 49CHAPITRE 4. CLIQUE-STABLE SÉPARATEUR ce problème sur les graphes parfaits car ces graphes ont des propriétés permettant de définir plus simplement ce polytope. Cela l’a amené à définir un problème de communication qui est équivalent à celui de la clique-stable séparation. De fait l’existence d’un clique-stable séparateur de taille polynomial est une condition né- cessaire à l’existence d’une formulation étendue. Il a pu démontrer l’existence à la fois d’un clique-stable séparateur de taille polynomiale et d’une formulation étendue pour de nombreuses sous-classes de graphes parfaits comme par exemple, les graphes de comparabilité, les graphes triangulés (chordal graph c’est-à-dire les graphes sans C4 induit), et les compléments de ces classes. Lovász à également démontré ces propriétés pour les graphes t-parfaits [30]. 
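Pour fixer la définition de clique-stable séparateur utilisée dans ce chapitre, voici une vérification par force brute, sur une très petite instance, qu'une famille de coupes sépare bien toute clique de tout stable disjoints. L'énumération est exponentielle et ne correspond à aucun des algorithmes du chapitre ; les noms et l'exemple sont des choix d'illustration.

```python
from itertools import combinations

def est_separateur_clique_stable(n, aretes, coupes):
    """Verifie par force brute qu'une famille de coupes (U, V \\ U) separe
    toute clique K de tout stable S lorsque K et S sont disjoints
    (enumeration exponentielle : uniquement pour illustrer la definition)."""
    E = {frozenset(e) for e in aretes}
    V = set(range(n))
    sous_ensembles = [set(S) for k in range(n + 1) for S in combinations(V, k)]
    cliques = [S for S in sous_ensembles
               if all(frozenset(p) in E for p in combinations(S, 2))]
    stables = [S for S in sous_ensembles
               if all(frozenset(p) not in E for p in combinations(S, 2))]
    for K in cliques:
        for S in stables:
            if not (K & S) and not any(K <= U and S <= V - U for U in coupes):
                return False, (K, S)   # paire non separee
    return True, None

# Sur un C4 (0-1-2-3-0), six coupes suffisent (on donne la partie clique U) :
coupes = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {0, 2}, {1, 3}]
print(est_separateur_clique_stable(4, [(0, 1), (1, 2), (2, 3), (3, 0)], coupes))
```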
Cependant ces deux problèmes restent ouverts pour les graphes parfaits en général. L’existence d’une formulation étendue n’étant pas toujours vraie pour les graphes [20], il est donc d’autant plus intéressant de voir si ces propriétés sont vérifiées pour les graphes parfaits. Remarquons bien que contrairement à la propriété du grand biparti qui n’est pas vérifiée pour les graphes de comparabilité, la propriété de la clique-stable séparation est vraie sur cette classe. Les graphes de comparabilité sont une classe de graphes parfaits assez bien comprise dans laquelle presque tous les graphes ont de nombreuses skew-partitions équilibrées, de ce point de vue c’est une classe bien étudiée de graphes de Berge non apprivoisés. Peut-être est-il donc possible d’utiliser la décomposition du théorème fort des graphes parfaits pour démontrer l’existence de clique-stable séparateur de taille polynomiale pour les graphes parfaits. Commençons par voir comment la propriété de la clique-stable séparation et la propriété du grand biparti sont liées. Dans le cas des classes de graphes héréditaires, la propriété du grand biparti implique celle de la clique-stable séparation. Comme le montre le théorème suivant (la démonstration est la même que celle de Bousquet, Lagoutte et Thomassé dans [4] qui à partir de la propriété du grand biparti prouvé dans [5] montre que les graphes sans chemin induit ni complémentaire de chemin induit de taille k ont la propriété de la clique-stable séparation) Lemme 4.1. Soit C une classe de graphes héréditaire ayant la c-propriété du grand biparti, alors C a la propriété de la clique-stable séparation. Démonstration. Le but est de démontrer que tout graphe G dans C admet un clique-stable séparateur de taille n cs avec cs = −1 log2(1−c) (pour rappel, c-est la constante de la propriété du grand biparti). Raisonnons par l’absurde et prenons G un contre-exemple minimal. Notons n = |V (G)|, comme G a la c-propriété du grand biparti, il existe deux sous-ensembles de sommets disjoints V1, V2 vérifiant, |V1| ≥ cn˙ , |V2| ≥ cn˙ et V1 est ou bien complet à V2 ou bien anticomplet à V2. 504.1. CLIQUE-STABLE SÉPARATEUR DANS LES TRIGRAPHES DE BERGE APPRIVOISÉS Notons V3 = V (G) \ (V1 ∪ V2). Par minimalité de G, G|(V1 ∪ V3) admet un cliquestable séparateur F1 de taille (|V1| + |V3|) cs et G|(V2 ∪ V3) admet un clique-stable séparateur F2 de taille (|V2| + |V3|) cs. Construisons F un clique stable séparateur de G. Nous devons distinguer deux cas suivant les adjacences entre V1 et V2. L’idée est de prendre chaque coupe de F1 et F2 et de la transformer en une coupe de G en ajoutant les sommets de V2 ou V1 du “bon” côté de la coupe suivant les adjacences entre V1 et V2. Formellement si V1 est complet à V2, F = {(U ∪ V2, W); (U, W) ∈ F1} ∪ {(U ∪ V1, W); (U, W) ∈ F2}. Si au contraire, V1 est anticomplet à V2, F = {(U, W ∪ V2); (U, W) ∈ F1} ∪ {(U, W ∪ V1); (U, W) ∈ F2}. Il est facile de voir que F est un clique-stable séparateur de G. En effet suivant les adjacences entre V1 et V2 une clique ou un stable de G ne peut pas intersecter à la fois V1 et V2. Pour toute clique K et stable S ne s’intersectant pas, il existe donc une coupe dans F qui les sépare. Enfin F a taille au plus 2((1 − c)n) cs ≤ n cn . Ce résultat ne règle pas vraiment le problème pour les graphes de Berge apprivoisés. En effet la classe des graphes de Berge apprivoisés n’est pas héréditaire (la suppression de sommet peut créer des skew-partitions). 
D’autre part, avec la constante obtenue dans le théorème 3.3 on obtiendrait un clique-stable séparateur de taille O(n 101) ce qui, étant donné que nous pouvons montrer qu’il existe un clique-stable séparateur de taille quadratique, est assez mauvais. 4.1 Clique-stable séparateur dans les trigraphes de Berge apprivoisés Comme dans le chapitre précédent, nous allons utiliser le théorème de décomposition et les blocs du chapitre 2. Nous devons donc étendre notre problème aux trigraphes. Le résultat principal sera le théorème 4.5 prouvant l’existence de cliquestable séparateur de taille quadratique pour les trigraphes de Berge apprivoisés. Commençons par définir les notions de clique-stable séparation dans les trigraphes. Soit T un trigraphe. Une coupe de T est une paire (U, W) ⊆ V (T) 2 , telle que U ∪W = V (T) et U ∩W = ∅. Elle sépare une clique K d’un stable S, si H ⊆ U et S ⊆ W. Parfois on dit que U est la partie clique de la coupe et W la partie stable de la coupe. Remarquons qu’une clique et un stable ne peuvent être séparés que s’ils ne s’intersectent pas. Remarquons également qu’ils ne peuvent s’intersecter que sur une composante optionnelle V (pour rappel pour tout u, v ∈ V , u = v ou uv ∈ E ∗ (T)). En particulier, si T est un trigraphe bigame, une clique et un 51CHAPITRE 4. CLIQUE-STABLE SÉPARATEUR stable s’intersectent sur au plus un sommet ou une arête optionnelle. On dit que la famille F de coupes est un clique-stable séparateur, si pour toute clique K et tout stable S qui ne s’intersectent pas, il existe une coupe dans F qui sépare K et S. Étant donnée une classe C de trigraphes, nous nous intéressons à la question suivante : existe-t-il une constante c, telle que pour tout trigraphe T de C, T admet un clique-stable séparateur de taille O(n c ) ? Supposons qu’il existe un clique-stable séparateur de taille m de T, alors on construit un clique-stable séparateur de taille m de T en construisant pour chaque coupe (U, W) la coupe (W, U). Le problème est donc bien auto-complémentaire. Montrons également qu’il est suffisant de considérer uniquement les cliques et les stables maximaux. Lemme 4.2. Si un trigraphe T bigame admet une famille F de coupes qui sépare toutes les cliques maximales (pour l’inclusion) de tous les stables maximaux, alors T admet un clique-stable séparateur de taille au plus |F| + O(n 2 ). Démonstration. Pour tout x ∈ V , prenons Cut1,x la coupe N[x], V \N[x] et Cut2,x la coupe (N(x), V \ N(x)). Pour toute arête optionnelle xy, prenons Cut1,xy (resp. Cut2,xy, Cut3,xy, Cut4,xy) la coupe (U = N[x] ∪ N[y], V \ U) (resp. (U = N[x] ∪ N(y), V \ U), (U = N(x) ∪ N[y], V \ U), (U = N(x) ∪ N(y), V \ U)). Soit F 0 l’union de F avec toutes les coupes que nous venons de définir pour tout x ∈ V , et xy ∈ E ∗ (T). Nous allons démontrer que F 0 est un clique-stable séparateur. Soit (K, S) une paire d’une clique et d’un stable qui ne s’intersectent pas. Étendons K et S en ajoutant des sommets jusqu’à avoir une clique maximale K0 et un stable maximal S 0 . Nous devons traiter trois cas. Ou bien K0 et S 0 ne s’intersectent pas, dans ce cas il y a une coupe de F qui sépare K0 de S 0 (et donc K et S). Ou bien K0 et S 0 s’intersectent sur un sommet x, dans ce cas si x ∈ K, alors Cut1,x sépare K de S, sinon Cut2,x les sépare. Ou bien K0 et S 0 s’intersectent sur une arête optionnelle xy (en effet une clique et un stable ne peuvent s’intersecter que sur au plus un sommet ou une arête optionnelle). 
Dans ce cas, par le même argument que pour le cas précédent, suivant l’intersection entre {x, y} et K0 une des coupes Cut1,xy,. . ., Cut4,xy sépare la clique K du stable S. En particulier, si T a au plus O(n c ) cliques maximales (ou stables maximaux) pour une constante c ≥ 2, alors il existe un clique-stable séparateur de taille O(n c ). (Il suffit de séparer toutes les cliques maximales puis d’appliquer le lemme précédent). 524.1. CLIQUE-STABLE SÉPARATEUR DANS LES TRIGRAPHES DE BERGE APPRIVOISÉS Nous prouvons maintenant que les trigraphes de Berge apprivoisés admettent un clique-stable séparateur de taille quadratique. Commençons par traiter le cas des trigraphes basiques. Lemme 4.3. Il existe une constante c, telle que tout trigraphe basique admet un clique-stable séparateur de taille cn˙ 2 . Démonstration. Puisque le problème est auto-complémentaire, nous traitons uniquement le cas des trigraphes bipartis, des line trigraphes et des trigraphes doublés. Une clique dans un trigraphe biparti est une arête forte, une arête optionnelle ou un sommet, il y a donc un nombre quadratique de cliques. Si T est un line trigraphe, alors sa réalisation complète est le line graphe d’un graphe biparti G et donc T a au plus un nombre linéaire de cliques car chaque clique correspond à un sommet de G. Grâce au lemme 4.2, les line trigraphes admettent un clique-stable séparateur de taille quadratique. Si T est un trigraphe doublé, alors soit (X, Y ) une bonne partition de T. Ajoutons à la coupe (Y, X) les coupes suivantes : pour tout Z = {x} avec x ∈ X ou Z = ∅, et pour tout Z 0 = {y} avec y ∈ Y ou Z 0 = ∅, prenons la coupe (Y ∪ Z \ Z 0 , X ∪ Z 0 \ Z) et pour toute paire x, y ∈ V , prenons la coupe ({x, y}, V \ {x, y}), et (V \ {x, y}, {x, y}). Ces coupes forment un clique-stable séparateur : soit K une clique et S un stable de T qui ne s’intersectent pas, alors |K ∩ X| ≤ 2 et |S ∩ Y | ≤ 2. Si |K ∩ X| = 2 (resp. |S ∩ Y |=2) alors K (resp. S) est seulement une arête (resp. une antiarête), car par définition, les sommets de K ∩ X n’ont pas de voisin commun avec Y . Donc la coupe (K, V \ K) (resp. V \ S, S) sépare K et S. Sinon, |K∩X| ≤ 1 et |S∩Y | ≤ 1 et alors (Y ∪(K∩X)\(S∩Y ), X∪(S∩Y )\(K∩X)) sépare K et S. Nous pouvons maintenant traiter le cas des 2-joints dans les trigraphes et montrer comment reconstruire un clique-stable séparateur à partir des clique-stable séparateurs des blocs de décompositions. Lemme 4.4. Soit T un trigraphe qui admet un 2-joint propre (X1, X2). Si les blocs de décomposition TX1 et TX2 admettent des clique-stable séparateurs de taille respectivement k1 et k2, alors T admet un clique-stable séparateur de taille k1 + k2. Démonstration. Soit (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2), TXi (i = 1, 2) les blocs de décomposition avec les sommets marqués ai , bi et potentiellement 53CHAPITRE 4. CLIQUE-STABLE SÉPARATEUR ci suivant la parité du 2-joint. Remarquons que nous n’avons pas besoin de distinguer le cas du 2-joint pair de celui du 2-joint impair car ci ne joue aucun rôle. Soit F1 un clique-stable séparateur de TX1 de taille k1 et F2 un clique-stable séparateur de TX2 de taille k2. Construisons F un clique-stable séparateur de T. Pour chaque coupe (U, W) ∈ F1, construisons la coupe ((U ∩ X1) ∪ U 0 ,(W ∩ X1) ∪ W0 ∪ C2) avec U 0 ∪ W0 = A2 ∪ B2 et A2 ⊆ U 0 (resp. B2 ⊆ U 0 ) si a2 ∈ U (resp. b2 ∈ U), et A2 ⊆ W0 (resp. B2 ⊆ W0 ) sinon. En d’autres termes, A2 va du même côté de la coupe que a2, B2 va du même côté que b2 et C2 va toujours du côté du stable. 
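La construction qui précède, pour les coupes issues de F1, peut s'esquisser comme suit (la représentation par ensembles et les noms de paramètres sont des choix d'illustration) ; la construction symétrique pour les coupes de F2 est décrite juste après.

```python
def releve_coupe(U, W, X1, A2, B2, C2, a2='a2', b2='b2'):
    """Releve une coupe (U, W) du bloc T_X1 en une coupe de T :
    la partie clique recoit U ∩ X1, plus A2 (resp. B2) si le sommet marque
    a2 (resp. b2) est du cote clique ; C2 va toujours du cote stable."""
    cote_clique = (U & X1) | (A2 if a2 in U else set()) | (B2 if b2 in U else set())
    cote_stable = (W & X1) | (A2 if a2 in W else set()) | (B2 if b2 in W else set()) | C2
    return cote_clique, cote_stable

# Petit exemple abstrait : X1 = {0, 1}, A2 = {2}, B2 = {3}, C2 = {4},
# et une coupe du bloc mettant 0 et le sommet marque a2 du cote clique.
print(releve_coupe({0, 'a2'}, {1, 'b2'}, {0, 1}, {2}, {3}, {4}))
# ({0, 2}, {1, 3, 4})
```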
Pour chaque coupe dans F2, nous faisons la même construction : A1 va du côté de a1, B1 va du côté de b1 et C1 va du côté du stable. Montrons maintenant que F est bien un clique-stable séparateur : soit K une clique et S un stable tels que K et S ne s’intersectent pas. Commençons par supposer que K ⊆ X1. Soit S 0 = (S ∩ X1) ∪ Sa2,b2 avec Sa1,b2 ⊆ {a2, b2} contient a2 (resp. b2) si et seulement si S intersecte A2 (resp. B2). S 0 est un stable de TX1 , donc il y a une coupe de F1 qui sépare K de S 0 . La coupe correspondante dans F sépare (K, S). Le cas K ⊆ X2 se traite de la même manière. Supposons alors que K intersecte à la fois X1 et X2. Alors K ∩ C1 = ∅ et K ⊆ A1 ∪ A2 ou K ⊆ B1 ∪ B2. Supposons sans perte de généralité que K ⊆ A1∪A2. Remarquons que S ne peut intersecter à la fois A1 et A2 qui sont fortement adjacent. Supposons donc que S n’intersecte pas A2. Soit K0 = (K ∩ A1) ∪ {a2} et S 0 = (S ∩ X1) ∪ Sb2 avec Sb2 = {b2}, si S intersecte B2, et Sb2 = ∅ sinon. K0 est une clique et S 0 est un stable de TX1 , donc il existe une coupe dans F1 qui les sépare. La coupe correspondante dans F sépare bien K de S, et donc F est bien un clique-stable séparateur. Nous pouvons maintenant démontrer le théorème principal de cette section : Théorème 4.5. Tout trigraphe de Berge apprivoisé admet un clique-stable séparateur de taille O(n 2 ). Démonstration. Soit c 0 la constante du lemme 4.3 et c = max(c 0 , 2 24). Nous allons démontrer par induction que tout trigraphe de T admet un clique-stable séparateur de taille cn˙ 2 . Nous avons deux cas de base, celui des trigraphes basiques, traités par le lemme 4.3 qui donne un clique-stable séparateur de taille c 0n 2 , et celui des petits trigraphes, c’est à dire celui des trigraphes d’ordre inférieur à 24. Pour ces trigraphes on peut simplement prendre tous les sous-ensembles de sommet U et prendre les coupes (U, V \ U) qui forment trivialement un clique-stable séparateur de taille au plus 2 24n 2 . 544.2. CLIQUE-STABLE SÉPARATEUR DANS LES CLASSES CLOSES PAR K-JOINTS Par conséquent, nous pouvons maintenant supposer que le trigraphe T n’est pas basique et a au moins 25 sommets. D’après le théorème 2.10, T admet un 2-joint propre (X1, X2) (ou le complément d’un 2-joint propre, mais dans ce cas comme le problème est auto-complémentaire, donc pouvons le résoudre sur T). Soit n1 = |X1|, d’après le lemme 2.9 nous pouvons supposer que 4 ≤ n1 ≤ n − 4. D’après le théorème 2.11, nous pouvons appliquer l’hypothèse d’induction sur les blocs de décomposition TX1 et TX2 afin d’obtenir un clique-stable séparateur de taille respectivement au plus k1 = c(n1 + 3)2 et k2 = c(n − n1 + 3)2 . D’après le lemme 4.4, T admet un clique-stable séparateur de taille k1 + k2. Prouvons maintenant que k1 + k2 ≤ cn˙ 2 . Soit P(n1) = c(n1 + 3)2 + c(n − n1 + 3)2 − cn˙ 2 . P est un polynôme de degré 2 de coefficient dominant 2c > 0. De plus P(4) = P(n − 4) = −2c(n − 25) ≤ 0 donc par convexité de P, P(n1) ≤ 0 pour tout 4 ≤ n1 ≤ n − 4, ce qui termine la démonstration. 4.2 Clique-stable séparateur dans les classes closes par k-joints Comme dans le chapitre 3 voyons maintenant comment étendre le résultat sur l’existence de clique-stable séparateurs de taille polynomial dans les classes de trigraphes closes par k-joints généralisés. Nous rappelons que nous partons d’une classe de graphes C héréditaire ayant la propriété de la clique stable séparation. 
Nous allons alors construire une classe de trigraphes C ≤k à partir de ces graphes, puis nous prenons la clôture C ≤k de cette classe par k-joints généralisés. Nous rappelons que toutes les définitions sont dans la section 3.2 du chapitre 3. Les remarques du chapitre précédent restent vraies : même si le théorème 2.10 montre que dans un certain sens les trigraphes de Berge apprivoisés sont la clô- ture par 2-joints et complémentaire de 2-joints des trigraphes basiques, prendre la clôture par k-joints généralisés des trigraphes basiques avec k = 2 ne donne pas la classe des trigraphes de Berge apprivoisés. En effet certains graphes de cette clôture ont des skew-partitions équilibrées. De plus les constantes obtenues dans le cas des k-joints généralisés sont moins bonnes que celles obtenues directement sur les trigraphes de Berge apprivoisés. Dans un premier temps montrons que la transformation des graphes en trigraphes (de la classe C à la classe C ≤k ), préserve la propriété de la clique-stable séparation. L’explosion de la taille du clique-stable séparateur est due au fait que 55CHAPITRE 4. CLIQUE-STABLE SÉPARATEUR dans un trigraphe T, une clique (et de même pour un stable) contenant k arêtes optionnelles devient une union d’au plus k cliques dans les réalisations de T. Lemme 4.6. Si chaque graphe G de C admet un clique-stable séparateur de taille m, alors chaque trigraphe T de C ≤k admet un clique-stable séparateur de taille mk 2 . Démonstration. Commençons par démontrer que s’il existe un clique-stable séparateur F de taille m alors F 0 = {(∩ k i=1Ui , ∪ k i=1Wi)|(U1, W1). . .(Uk, Wk) ∈ F} est une famille de coupes de taille mk qui sépare chaque clique de chaque union d’au plus k stables. En effet, si K est une clique et S1 . . . Sk sont k stables, tels qu’ils n’intersectent pas K, alors il existe dans F k partitions (U1, W1). . .(Uk, Wk) telles que (Ui , Wi) sépare K et Si . Maintenant (∩ k i=1Ui , ∪ k i=1Wi) est une partition qui sépare K de ∪ k i=1Si . Avec le même argument, on peut construire une famille F 00 de coupes de taille mk 2 qui sépare chaque union d’au plus k cliques d’union d’au plus k stables. Maintenant soit T un trigraphe de C ≤k et soit G ∈ C, tel que G est une réalisation de T. Remarquons qu’une clique K (resp. un stable S) dans T est une union d’au plus k cliques (resp. stables) dans G. Par exemple on peut voir que Σ(T) ∩ K (resp. Σ(T) ∩ S) est k-coloriable et chaque classe de couleur correspond à une clique (resp. un stable) dans G. Alors il existe un clique-stable séparateur de T de taille mk 2 . Nous pouvons maintenant montrer que le k-joint généralisé de deux trigraphes préserve la propriété de clique-stable séparation. En fait vu la structure assez forte du k-joint généralisé, les cliques traversantes (qui ont des sommets des deux côtés du k-joint généralisé) sont très contraintes. On peut donc presque prendre l’union des clique-stable séparateurs des deux trigraphes dont on est en train de prendre le k-joint généralisé. Lemme 4.7. Si T1 et T2 ∈ C ≤k admettent des clique-stable séparateurs de taille respectivement m1 et m2, alors le k-joint généralisé T de T1 et T2 admet un clique-stable séparateur de taille m1 + m2. Démonstration. La preuve est similaire à celle faite pour le lemme 4.4. On suit les notations introduites dans les définitions du k-joint généralisé. Soit F1 (resp, F2) un clique-stable séparateur de taille m1 (resp. m2) sur T1 (resp. T2). Construisons F un clique-stable séparateur de T. 
Pour chaque coupe (U, W) dans F1, construisons la coupe (U 0 , W0 ) suivant ces deux règles : pour tout a ∈ ∪r j=0Aj (resp. b ∈ Bi), 564.2. CLIQUE-STABLE SÉPARATEUR DANS LES CLASSES CLOSES PAR K-JOINTS a ∈ U 0 (resp. b ∈ U 0 ) si et seulement si a ∈ U (resp. bi ∈ U). En d’autres termes, on prend une coupe similaire à (U, W) en mettant Bi du même côté que bi . On fait l’opération symétrique pour chaque coupe (U, W) dans F2 en mettant Aj du même côté que aj . F est bien un clique-stable séparateur : soit K une clique et S un stable qui ne s’intersectent pas. Supposons pour commencer qu’un côté de la partition (A1, . . . , Ar, B1, . . . , Bs) intersecte à la fois K et S. Quitte à échanger T1 et T2 et à renuméroter les Aj , on peut supposer que A1 ∩ K 6= ∅ et A1 ∩ S 6= ∅. Puisque pour tout i, A1 est ou bien fortement complet ou bien fortement anticomplet à Bi , Bi ne peut intersecter à la fois K et S. Considérons dans T1 la paire (K0 = (K ∩ V (T)) ∪ Kb, S0 = (S ∩ V (T)) ∪ Sb) avec Kb = {bi |K ∩ Bi 6= ∅} et Sb = {bi |S ∩ Bi 6= ∅}. K0 est une clique dans T1, S 0 est un stable dans T1. Comme ils ne s’intersectent pas, il y a une coupe les séparant dans F1. La coupe correspondante dans F sépare K et S. L’autre cas est celui ou aucune partie de la partition n’intersecte à la fois K et S. Alors pour tout i, Bi n’intersectent pas non plus à la fois la clique K et le stable S : le même argument que ci-dessus s’applique encore. Nous pouvons maintenant démontrer le théorème principal de cette section. Théorème 4.8. Si tout graphe de C admet un clique-stable séparateur de taille O(n c ), alors tout trigraphe de C ≤k admet un clique-stable séparateur de taille O(n k 2 c ). En particulier, toute réalisation d’un trigraphe de C ≤k admet un clique-stable séparateur de taille O(n k 2 c ). Démonstration. On prouve par induction qu’il existe un clique-stable séparateur de taille pnk 2 c avec p = max(p 0 , 2 p0 ) où p 0 est la constante du O de la taille du clique-stable séparateur des graphes de C et p0 est une constante définie dans la suite. Le cas de base comporte deux cas : les trigraphes de C ≤k , pour lesquels la propriété est vérifiée d’après le lemme 4.6 et les trigraphes d’ordre au plus p0. Pour ces derniers, on peut considérer tous les sous-ensembles U de sommets et prendre les coupes (U, V \ U) qui forment un clique-stable séparateur trivial de taille au plus 2 p0 n k 2 c . Par conséquent, on peut supposer que T est le k-joint généralisé de T1 et T2 et qu’il a au moins p0 sommets. Soit n1 = |T1| et n2 = |T2| avec n1 + n2 = n + r + s et r + s + 1 ≤ n1, n2, ≤ n − 1. Par induction, il existe un clique-stable séparateur de taille pnk 2 c 1 sur T1 et un de taille pnk 2 c 2 sur T2. D’après le lemme 4.7, il existe 57CHAPITRE 4. CLIQUE-STABLE SÉPARATEUR un clique-stable séparateur sur T de taille pnk 2 c 1 + pnk 2 c 2 . On veut démontrer que pnk 2 c 1 + pnk 2 c 2 ≤ pnk 2 c . Remarquons que n1+n2 = n−1+r+s+1 donc par convexité de x 7→ x c sur R +, n k 2 c 1 +n k 2 c 2 ≤ (n−1)k 2 c+ (r+s+1)k 2 c . De plus, r+s+1 ≤ 2k+1. Définissons alors p0 suffisamment grand pour que pour tout n ≥ p0, n k 2 c − (n − 1)k 2 c ≥ (2k + 1)k 2 c . Alors n k 2 c 1 + n k 2 c 2 ≤ n k 2 c , ce qui conclut la démonstration. Dans ce chapitre également les résultats sortent du cadre des graphes de Berge apprivoisés pour s’étendre aux graphes clos par k-joint. Le point clé est de remarquer que les cliques ne peuvent pas vraiment traverser un k-joint. 
Le seul cas possible de traversée est entre deux ensembles complets du k-joint, mais dans ce cas comme les ensembles sont complets, le stable ne peut pas lui aussi intersecter ces deux ensembles. 58Chapitre 5 Calcul du stable maximum Les résultats de ce chapitre ont été obtenus avec Maria Chudnovsky, Nicolas Trotignon et Kristina Vušković, ils font l’objet d’un article [15] soumis à Journal of Combinatorial Theory, Series B. Nous allons dans ce chapitre montrer comment calculer en temps polynomial la taille du stable maximum dans les graphes de Berge apprivoisés. En toute rigueur ce problème est déjà résolu dans les graphes parfaits et donc a fortiori dans les graphes de Berge apprivoisés. Cependant la démonstration, due à Grötschel, Lovász et Schrijver [33] n’utilise pas du tout la structure (au sens du théorème de décomposition) des graphes parfaits (qui lui est postérieur) mais utilise principalement l’égalité entre les nombre chromatique χ et la taille d’une clique maximumω. En effet Lovász [29] introduit une quantité ϑ, résultat d’un problème d’optimisation convexe. Dans les graphes cette quantité vérifie l’inégalité α ≤ ϑ ≤ χ (χ est la taille d’une couverture par clique minimale). Dans les graphes parfaits, ϑ est donc égal à α la taille maximum d’un stable. Grâce à la méthode de l’ellipsoïde inventée par Grötschel, Lovász et Schrijver [23], il est alors possible de calculer α en temps polynomial. L’intérêt de notre résultat est qu’il est complétement combinatoire. À partir de ce résultat il est possible de colorier les graphes de Berge apprivoisés avec un surcout de O(n 2 ) en utilisant la démonstration classique de Grötschel, Lovász et Schrijver [33]. Comme dans les chapitres précédents nous allons décomposer le graphe grâce au théorème 2.10, et faire une induction. Malheureusement contrairement aux chapitres précédents notre méthode n’est pas générale vis à vis des 2-joints et donc ne se généralise pas aux classes closes par k-joints généralisées. 59CHAPITRE 5. CALCUL DU STABLE MAXIMUM En effet la structure des stables dépend de la parité du 2-joint (voir surtout le lemme 5.3 et le lemme 5.4). Afin de pouvoir faire l’induction nous avons besoin de travailler avec des trigraphes pondérés. Donc dans la suite de ce chapitre, le terme “trigraphe” signifie un trigraphe avec des poids sur ses sommets. Les poids sont des nombres de K avec K ou bien l’ensemble R+ des réels strictement positifs ou bien N+ l’ensemble des entiers strictement positifs. Les théorèmes sont vrais pour K = R+ mais les algorithmes sont implémentés avec K = N+. On voit un trigraphe sans poids sur ses sommets comme un trigraphe pondéré avec tous ses poids égaux à 1. Remarquons qu’un ensemble de sommets dans un trigraphe T est un stable fort, si et seulement si c’est un stable dans la réalisation complète de T. 5.1 Le cas des trigraphes basiques Commençons par montrer que nous pouvons reconnaître les classes basiques et calculer un stable pondéré maximum dans ces classes. Théorème 5.1. Il existe un algorithme en temps O(n 4 ) dont l’entrée est un trigraphe et dont la sortie est ou bien “T n’est pas basique”, ou bien le nom de la classe de base de T et le poids maximum d’un stable fort de T. Démonstration. Pour toute classe de trigraphe basique, nous fournissons un algorithme en temps O(n 4 ) qui décide si le trigraphe appartient à la classe et si c’est le cas, calcule le poids maximum d’un stable fort. Pour les trigraphes bipartis, on construit la réalisation complète G de T. 
Il est immédiat de voir que T est biparti si et seulement si G l’est. On peut donc reconnaitre un trigraphe biparti en temps linéaire en utilisant un parcours. Si T est biparti, un stable maximum de G est exactement un stable fort maximum de T, et on peut le calculer en temps O(n 3 ) voir [33]. Pour les compléments de trigraphes bipartis, nous procédons de manière analogue : nous commençons par prendre le complément T de notre trigraphe T, et nous reconnaissons si la réalisation complète de T est un graphe biparti. Nous calculons ensuite le poids maximum d’une clique de GT ∅ . Toutes ces opérations peuvent clairement se faire en temps O(n 2 ). Pour les line trigraphes, nous calculons la réalisation complète G et testons si G est un line graphe d’un graphe biparti par un algorithme classique comme [27] ou [32]. Notons que ces algorithmes fournissent également le graphe R tel que 605.1. LE CAS DES TRIGRAPHES BASIQUES G = L(R). En temps O(n 3 ) nous pouvons vérifier que les cliques de taille au moins 3 dans T sont bien des cliques fortes. On peut donc reconnaitre si T est un line trigraphe en temps O(n 3 ). Si c’est le cas, un stable maximum dans G peut être calculer en temps O(n 3 ) en calculant un couplage de poids maximum (voir [33]) dans le graphe biparti R tel que G = L(R). Pour les compléments de line trigraphes, nous procédons de manière similaire pour la reconnaissance en prenant la réalisation complète de T. Et le calcul du poids maximum d’un stable fort est simple : nous calculons la réalisation complète G de T, nous calculons ensuite le graphe biparti R tel que G = L(R) (il existe car d’après le lemme 2.1, les line trigraphes sont clos par réalisation) et nous calculons un stable de poids maximum de G (un tel stable est un ensemble maximal d’arête de R toutes deux à deux adjacentes, et il y a un nombre linéaire de tels ensembles). C’est alors un stable fort de poids maximum dans T. Pour les trigraphes doublés, nous ne pouvons pas utiliser de résultats classiques. Pour la reconnaissance, nous pourrions utiliser la liste des graphes non doublés minimaux décrite dans [2]. Cette liste de 44 graphes sur au plus 9 sommets donne un algorithme de reconnaissance en temps O(n 9 ). Plus récemment Maffray à donné un algorithme linéaire [31] pour la reconnaissance des graphes doublés. Malheureusement ce résultat ne semble pas se généraliser directement aux trigraphes. Nous proposons ici un algorithme en O(n 4 ) qui fonctionne également sur des trigraphes. Si une partition (X, Y ) des sommets d’un trigraphe est donné, on peut décider si c’est une bonne partition, par une génération exhaustive qui vérifie tous les points de la définition en temps O(n 2 ). Et si une arête ab de T|X est donnée, il est facile de reconstruire la bonne partition : tous les sommets fortement antiadjacents aux sommets a et b sont dans X et tous les sommets fortement adjacents à au moins a ou b sont dans Y . Donc en testant toutes les arêtes uv, on peut prédire laquelle est dans T|X, puis reconstruire (X, Y ) et donc décider en temps O(n 4 ) si le trigraphe T a une bonne partition (X, Y ), telle que X contient au moins une arête. De la même manière on peut tester si le trigraphe T a une bonne partition (X, Y ), telle que Y contient au moins une antiarête. Reste le cas de la reconnaissance des trigraphes doublés, tels que toute bonne partition est composée d’une clique forte et d’un stable fort. Dans ce cas le trigraphe est en fait un graphe, et ce type de graphe est connu comme un split graphe. 
Ils peuvent être reconnus en temps linéaire, voir [25] où il est prouvé qu’en regardant les degrés on peut facilement trouver une partition en clique et stable si une telle partition existe. Maintenant que nous savons reconnaitre si le trigraphe T est un trigraphe doublé, regardons comment calculer le poids maximum d’un stable fort de T. 61CHAPITRE 5. CALCUL DU STABLE MAXIMUM Calculons la réalisation complète G de T. D’après 2.2, G est un graphe doublé, et en fait (X, Y ) est une bonne partition pour G. Nous calculons alors un stable pondéré maximum dans G|X (qui est biparti), dans G|Y (dont le complément est biparti), et tous les stables formés d’un sommet de Y et de ses non-voisins dans X. Un de ces stables est de poids maximum dans G et donc est un stable fort de poids maximum dans T. 5.2 Stocker α dans les blocs Dans cette section, nous définissons plusieurs blocs de décomposition qui vont nous permettre de calculer le stable de poids maximum. Nous notons α(T) le poids maximum d’un stable fort de T. Dans la suite, T est un trigraphe de Berge apprivoisé, X est un fragment de T et Y = V (T) \ X (donc Y est aussi un fragment de T). Pour calculer α(T), il n’est pas suffisant de considérer les blocs TX et TY (comme défini dans le chapitre 2) séparément. Nous devons élargir les blocs afin d’encoder l’information de l’autre bloc. Dans cette section nous définissons quatre gadgets différents, TY,1, . . ., TY,4 et pour i = 1, . . . , 4, nous prouvons que α(T) peut être calculé à partir de α(TYi ). Nous définissons parfois plusieurs gadgets pour la même situation. En effet, dans la section 5.3 (notamment pour démontrer le lemme 5.9), nous avons besoin que nos gadgets préservent les classes de base, et suivant ces classes de base nous utilisons différents gadgets. Les gadgets que nous allons définir ne préservent pas la classe (certains introduisent des skew-partitions équilibrées). Ce n’est pas un problème dans cette section, mais il faudra y prendre garde dans la section suivante. Dans [37], un résultat de NP-complétude est prouvé, qui suggère que les 2- joints ne sont sans doute pas un outil utile pour calculer des stables maximum. En effet Trotignon et Vušković exhibent une classe de graphes C avec une théorème de décomposition du type : tout graphe de C est ou bien un line graphe, ou bien un graphe biparti ou bien admet un 2-joint. Cependant le calcul du stable maximum dans la classe C est NP=complet. Il semble donc que pour pouvoir utiliser les 2- joints, nous devons vraiment utiliser le fait que nos trigraphes sont de Berge. Ceci est fait en prouvant plusieurs inégalités. Si (X, Y ) est un 2-joint de T alors soit X1 = X, X2 = Y et soit (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2). Nous définissons αAC = α(T|(A1 ∪ C1)), αBC = α(T|(B1 ∪ C1)), αC = α(T|C1) et αX = α(T|X1). Soit w la fonction de poids sur V (T), w(H) est la somme des poids sur les sommets de H. 625.2. STOCKER α DANS LES BLOCS Lemme 5.2. Soit S un stable fort de poids maximum de T. Alors exactement un des points suivants est vérifié : 1. S∩A1 6= ∅, S∩B1 = ∅, S∩X1 est un stable fort maximum de T|(A1∪C1) et w(S ∩ X1) = αAC ; 2. S∩A1 = ∅, S∩B1 6= ∅, S∩X1 est un stable fort maximum de T|(B1∪C1) et w(S ∩ X1) = αBC ; 3. S ∩ A1 = ∅, S ∩ B1 = ∅, S ∩ X1 est un stable fort maximum de T|C1 et w(S ∩ X1) = αC ; 4. S ∩ A1 6= ∅, S ∩ B1 6= ∅, S ∩ X1 est un stable fort maximum de T|X1 et w(S ∩ X1) = αX. Démonstration. Directe depuis la définition d’un 2-joint. 
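Pour fixer ces définitions, voici un calcul par force brute des quatre quantités αAC, αBC, αC et αX sur une petite instance (un graphe, vu comme un trigraphe sans arête optionnelle). Cette esquisse exponentielle sert uniquement à illustrer les définitions ; elle n'a rien à voir avec l'algorithme polynomial visé dans ce chapitre, et les noms choisis sont des choix d'exemple.

```python
from itertools import combinations

def alpha(sommets, aretes, poids):
    """Poids maximum d'un stable (force brute, petites instances seulement)."""
    E = {frozenset(e) for e in aretes}
    meilleur = 0
    sommets = list(sommets)
    for k in range(len(sommets) + 1):
        for S in combinations(sommets, k):
            if all(frozenset((u, v)) not in E for u, v in combinations(S, 2)):
                meilleur = max(meilleur, sum(poids[v] for v in S))
    return meilleur

def quantites_alpha(aretes, poids, A1, B1, C1):
    """Les quatre quantites associees au cote X1 = A1 ∪ B1 ∪ C1 d'un 2-joint."""
    X1 = A1 | B1 | C1
    restreint = lambda S: [e for e in aretes if set(e) <= S]
    return {'alpha_AC': alpha(A1 | C1, restreint(A1 | C1), poids),
            'alpha_BC': alpha(B1 | C1, restreint(B1 | C1), poids),
            'alpha_C':  alpha(C1,      restreint(C1),      poids),
            'alpha_X':  alpha(X1,      restreint(X1),      poids)}

# Exemple jouet : A1 = {0}, B1 = {2}, C1 = {1}, chemin 0-1-2, poids unitaires.
print(quantites_alpha([(0, 1), (1, 2)], {0: 1, 1: 1, 2: 1}, {0}, {2}, {1}))
# {'alpha_AC': 1, 'alpha_BC': 1, 'alpha_C': 1, 'alpha_X': 2}
```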
Nous avons besoin de plusieurs inégalités à propos des intersections entre les stables forts et les 2-joints. Ces lemmes sont prouvés dans [37] pour les graphes. Les démonstrations sont similaires pour les trigraphes, mais comme ce sont ces inégalités qui nous permettent d'utiliser le fait que les trigraphes sont de Berge, nous les incluons ici.

Lemme 5.3. 0 ≤ αC ≤ αAC, αBC ≤ αX ≤ αAC + αBC.
Démonstration. Les inégalités 0 ≤ αC ≤ αAC, αBC ≤ αX sont trivialement vraies. Soit D un stable fort pondéré de poids maximum de T|X1. Nous avons : αX = w(D) = w(D ∩ A1) + w(D ∩ (C1 ∪ B1)) ≤ αAC + αBC.

Lemme 5.4. Si (X1, X2) est un 2-joint impair de T, alors αC + αX ≤ αAC + αBC.
Démonstration. Soit D un stable fort de T|X1 de poids αX et C un stable fort de T|C1 de poids αC. Dans le trigraphe biparti T|(C ∪ D), on note YA (resp. YB) l'ensemble des sommets de C ∪ D pour lesquels il existe un chemin dans T|(C ∪ D) les reliant à des sommets de D ∩ A1 (resp. D ∩ B1). Remarquons que par définition, D ∩ A1 ⊆ YA, D ∩ B1 ⊆ YB et il n'y a pas d'arêtes entre YA ∪ YB et (C ∪ D) \ (YA ∪ YB). Montrons que YA ∩ YB = ∅ et que YA est fortement anticomplet à YB. Supposons le contraire : il existe alors un chemin P dans T|(C ∪ D) d'un sommet de D ∩ A1 à un sommet de D ∩ B1. On peut supposer que P est minimal vis-à-vis de cette propriété, et donc que l'intérieur de P est dans C1 ; par conséquent, P est de longueur paire car T|(C ∪ D) est biparti. Ceci contredit l'hypothèse que (X1, X2) est impair. Nous définissons alors :
— ZA = (D ∩ YA) ∪ (C ∩ YB) ∪ (C \ (YA ∪ YB)) ;
— ZB = (D ∩ YB) ∪ (C ∩ YA) ∪ (D \ (YA ∪ YB)).
Des définitions et des propriétés ci-dessus, ZA et ZB sont des stables forts, ZA ⊆ A1 ∪ C1 et ZB ⊆ B1 ∪ C1. Donc, αC + αX = w(ZA) + w(ZB) ≤ αAC + αBC.

Lemme 5.5. Si (X1, X2) est un 2-joint pair de T, alors αAC + αBC ≤ αC + αX.
Démonstration. Soit A un stable fort de T|(A1 ∪ C1) de poids αAC et B un stable fort de T|(B1 ∪ C1) de poids αBC. Dans le trigraphe biparti T|(A ∪ B), on note YA (resp. YB) l'ensemble des sommets de A ∪ B pour lesquels il existe un chemin P dans T|(A ∪ B) les reliant à un sommet de A ∩ A1 (resp. B ∩ B1). Remarquons que d'après les définitions, A ∩ A1 ⊆ YA, B ∩ B1 ⊆ YB, et YA ∪ YB est fortement anticomplet à (A ∪ B) \ (YA ∪ YB). Nous allons montrer que YA ∩ YB = ∅ et que YA est fortement anticomplet à YB. Supposons que ce ne soit pas le cas : il existe alors un chemin P dans T|(A ∪ B) d'un sommet de A ∩ A1 à un sommet de B ∩ B1. On peut supposer que P est minimal vis-à-vis de cette propriété, et donc que son intérieur est dans C1 ; par conséquent, il est de longueur impaire car T|(A ∪ B) est biparti. Ceci contredit l'hypothèse que (X1, X2) est pair. Nous définissons alors :
— ZD = (A ∩ YA) ∪ (B ∩ YB) ∪ (A \ (YA ∪ YB)) ;
— ZC = (A ∩ YB) ∪ (B ∩ YA) ∪ (B \ (YA ∪ YB)).
À partir des définitions et des propriétés ci-dessus, ZD et ZC sont des stables forts, ZD ⊆ X1 et ZC ⊆ C1. Donc, αAC + αBC = w(ZC) + w(ZD) ≤ αC + αX.

Nous pouvons maintenant construire nos gadgets. Si (X, Y) est un complément de 2-joint propre de T, alors soit X1 = X, X2 = Y et (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2). Nous construisons le gadget TY,1 comme suit. Nous partons de T|Y et nous ajoutons deux nouveaux sommets marqués a, b, tels que a est fortement complet à B2 ∪ C2, b est fortement complet à A2 ∪ C2 et ab est une arête forte. Nous donnons les poids αA = α(T|A1) et αB = α(T|B1) respectivement aux sommets a et b. Nous notons αX = α(T|X).
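Voici, à titre d'illustration, une esquisse de la construction du gadget TY,1 qui vient d'être décrite ; le type Trigraphe et ses méthodes (copier_induit, ajouter_sommet, ajouter_arete_forte) sont des hypothèses de présentation.

```python
def construire_gadget_Y1(T, Y, A2, B2, C2, alpha_A, alpha_B):
    """Part de T|Y puis ajoute les deux sommets marqués a et b avec leurs poids."""
    G = T.copier_induit(Y)
    a = G.ajouter_sommet(poids=alpha_A)   # αA = α(T|A1)
    b = G.ajouter_sommet(poids=alpha_B)   # αB = α(T|B1)
    for v in B2 | C2:
        G.ajouter_arete_forte(a, v)       # a est fortement complet à B2 ∪ C2
    for v in A2 | C2:
        G.ajouter_arete_forte(b, v)       # b est fortement complet à A2 ∪ C2
    G.ajouter_arete_forte(a, b)           # ab est une arête forte
    return G
```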
STOCKER α DANS LES BLOCS Lemme 5.6. Si (X, Y ) est un complément de 2-joint propre de T, alors TY,1 est de Berge et α(T) = max(α(TY,1), αX). Démonstration. Puisque TY,1 est une semiréalisation d’un sous-trigraphe induit du bloc TY , comme défini dans le chapitre 2, il est clairement de Berge d’après 2.11. Soit Z un stable fort pondéré de poids maximum dans T. Si Z ∩ X1 = ∅, alors Z est aussi un stable fort dans TY,1, donc α(T) ≤ α(TY,1) ≤ max(α(TY,1), αX). Si Z ∩ A1 6= ∅ et Z ∩ (B1 ∪ C1) = ∅, alors {a1} ∪ (Z ∩ X2) est un stable fort dans TY,1 de poids α(T), donc α(T) ≤ α(TY,1) ≤ max(α(TY,1), αX). Si Z ∩ B1 6= ∅ et Z ∩(A1 ∪ C1) = ∅, alors {b1} ∪(Z ∩ X2) est un stable fort dans TY,1 de poids α(T), donc α(T) ≤ α(TY,1) ≤ max(α(TY,1), αX). Si Z∩(A1∪C1) 6= ∅ et Z∩(B1∪C1) 6= ∅, alors α(T) = αX, donc α(T) ≤ max(α(TY,1), αX). Dans tous les cas nous avons prouvé que α(T) ≤ max(α(TY,1), αX). Réciproquement, soit α = max(α(TY,1), αX). Si α = αX, alors en considérant n’importe quel stable fort de T|X1, nous voyons que α = αX ≤ α(T). Nous pouvons donc supposer que α = α(TY,1) et soit Z un stable fort pondéré de poids maximum dans TY,1. Si a /∈ Z et b /∈ Z, alors Z est aussi un stable fort dans T, donc α ≤ α(T). Si a ∈ Z et b /∈ Z, alors Z 0 ∪ Z \ {a}, où Z 0 est un stable fort pondéré de poids maximum dans T|A1, est aussi un stable fort dans T de même poids que Z, donc α ≤ α(T). Si a /∈ Z et b ∈ Z, alors Z 0 ∪Z \ {b} quand Z 0 est un stable de poids maximum dans T|B1 est aussi un stable fort dans T de même poids que Z, donc α ≤ α(T). Dans tous les cas, nous avons prouvé que α ≤ α(T). Si (X1, X2) est un 2-joint propre impair de T, alors nous construisons le gadget TY,2 comme suit. Nous commençons avec T|Y . Nous ajoutons ensuite quatre nouveaux sommets marqués a, a 0 , b, b 0 , tels que a et a 0 soient fortement complets à A2, b et b 0 soient fortement complets à B2, et ab est une arête forte. Nous donnons les poids αAC + αBC − αC − αX, αX − αBC, αAC + αBC − αC − αX et αX − αAC à respectivement a, a 0 , b et b 0 . Remarquons que d’après 5.3 et 5.4, tous les poids sont positifs. Nous définissons un autre gadget de décomposition TY,3 pour la même situation, de la manière suivante. Nous commençons avec T|Y . Nous ajoutons ensuite trois nouveaux sommets marqués a, a 0 , b, tels que a et a 0 sont fortement complets à A2, b est fortement complets à B2, et a 0a et ab sont des arêtes fortes. Nous donnons les poids αAC − αC, αX − αBC et αBC − αC à respectivement a, a 0 et b. Remarquons que d’après 5.3, tous les poids sont positifs. 65CHAPITRE 5. CALCUL DU STABLE MAXIMUM Lemme 5.7. Si (X, Y ) est un 2-joint propre de T, alors TY,2 et TY,3 sont de Berge, et α(T) = α(TY,2) + αC = α(TY,3) + αC. Démonstration. Supposons que TY,2 contienne un trou impair H. Puisqu’un trou impair n’a pas de sommet fortement dominant, il contient au plus un sommet parmi a, a0 et au plus un sommet parmi b, b0 . Donc H est un trou impair d’une semiréalisation du bloc TY (comme défini dans le chapitre 2). Ceci contredit 2.11. De la même manière, TY,2 ne contient pas d’antitrou impair, et est donc de Berge. La démonstration que TY,3 est de Berge est similaire. Soit Z un stable fort dans T de poids α(T). Nous construisons un stable fort dans TY,2 en ajoutant à Z ∩ X2 un des ensembles suivants (en fonction de la conclusion du lemme 5.2) : {a, a0}, {b, b0}, ∅, ou {a, a0 , b0}. Dans chaque cas, nous obtenons un stable fort de TY,2 de poids α(T) − αC. Ce qui prouve que α(T) ≤ α(TY,2) + αC. 
Réciproquement, soit Z un stable dans TY,2 de poids α(TY,2). On peut supposer que Z ∩ {a, a′, b, b′} est un des ensembles {a, a′}, {b, b′}, ∅, ou {a, a′, b′}, et suivant chacun de ces cas, on construit un stable fort de T en ajoutant à Z ∩ X2 un stable fort pondéré de poids maximum d'un des trigraphes suivants : T|(A1 ∪ C1), T|(B1 ∪ C1), T|C1, ou T|X1. Nous obtenons un stable fort dans T de poids α(TY,2) + αC, prouvant que α(TY,2) + αC ≤ α(T). Ceci complète la démonstration pour TY,2.

Prouvons maintenant l'égalité pour TY,3. Soit Z un stable fort dans T de poids α(T). Nous construisons un stable fort dans TY,3 en ajoutant à Z ∩ X2 un des ensembles suivants (en fonction de la conclusion du lemme 5.2) : {a}, {b}, ∅, ou {a′, b}. Dans chaque cas, nous obtenons un stable fort de TY,3 de poids α(T) − αC. Ceci prouve que α(T) ≤ α(TY,3) + αC. Réciproquement, soit Z un stable de TY,3 de poids α(TY,3). D'après 5.4, αAC − αC ≥ αX − αBC ; on peut donc supposer que Z ∩ {a, a′, b} est un des ensembles {a}, {b}, ∅, ou {a′, b}, et suivant chacun, nous construisons un stable fort de T en ajoutant à Z ∩ X2 un stable fort pondéré de poids maximum d'un des trigraphes suivants : T|(A1 ∪ C1), T|(B1 ∪ C1), T|C1, ou T|X1. Nous obtenons alors un stable fort dans T de poids α(TY,3) + αC, ce qui prouve que α(TY,3) + αC ≤ α(T). Ceci termine la démonstration pour TY,3.

Si (X1, X2) est un 2-joint propre pair de T et X = X1, Y = X2, alors soit (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2). Nous construisons le gadget TY,4 comme suit. Nous partons de T|Y. Nous ajoutons ensuite trois nouveaux sommets marqués a, b, c, tels que a est fortement complet à A2, b est fortement complet à B2, et c est fortement adjacent aux sommets a et b et n'a aucun autre voisin. Nous donnons les poids αX − αBC, αX − αAC, et αX + αC − αAC − αBC à respectivement a, b et c. Remarquons que d'après 5.3 et 5.5, ces poids sont positifs.

Lemme 5.8. Si (X, Y) est un 2-joint propre pair de T, alors TY,4 est de Berge et α(T) = α(TY,4) + αAC + αBC − αX.
Démonstration. Clairement, TY,4 est de Berge, car c'est la semiréalisation du bloc TY comme défini dans le chapitre 2, qui est de Berge d'après 2.11. Soit Z un stable fort dans T de poids α(T). Nous construisons un stable fort dans TY,4 en ajoutant à Z ∩ X2 un des ensembles suivants (suivant la conclusion de 5.2) : {a}, {b}, {c}, ou {a, b}. Dans chaque cas, nous obtenons un stable fort de TY,4 de poids α(T) − (αAC + αBC − αX), ce qui prouve que α(T) ≤ α(TY,4) + αAC + αBC − αX. Réciproquement, soit Z un stable fort dans TY,4 de poids α(TY,4). On peut supposer que Z ∩ {a, b, c} est un des ensembles {a}, {b}, {c}, ou {a, b}, et suivant ces cas, nous construisons un stable fort de T en ajoutant à Z ∩ X2 un stable fort pondéré de poids maximum d'un des trigraphes suivants : T|(A1 ∪ C1), T|(B1 ∪ C1), T|C1, ou T|X1. Nous obtenons un stable fort de T de poids α(TY,4) + αAC + αBC − αX, ce qui prouve que α(TY,4) + αAC + αBC − αX ≤ α(T).

5.3 Calculer α

Nous sommes maintenant prêts à décrire notre algorithme de calcul d'un stable fort pondéré de poids maximum. La difficulté principale est que les blocs de décomposition définis dans le chapitre 2 doivent être utilisés pour rester dans la classe, alors que les gadgets définis dans la section 5.2 doivent être utilisés pour calculer α. Notre idée est de commencer par utiliser les blocs de décomposition dans un premier temps, puis de les remplacer par les gadgets lors d'une deuxième étape.
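Avant de décrire l'algorithme, les formules de recombinaison des lemmes 5.6 à 5.8 peuvent se résumer ainsi (esquisse Python ; les valeurs alpha_* sont supposées déjà calculées, et la positivité des poids des gadgets découle des lemmes 5.3 à 5.5) :

```python
def alpha_depuis_gadget(type_decomposition, alpha_gadget,
                        alpha_AC, alpha_BC, alpha_C, alpha_X):
    """Reconstitue α(T) à partir de α du gadget, selon le type de décomposition."""
    if type_decomposition == "complément de 2-joint":   # lemme 5.6 (gadget TY,1)
        return max(alpha_gadget, alpha_X)
    if type_decomposition == "2-joint impair":          # lemme 5.7 (gadgets TY,2 et TY,3)
        return alpha_gadget + alpha_C
    if type_decomposition == "2-joint pair":            # lemme 5.8 (gadget TY,4)
        return alpha_gadget + alpha_AC + alpha_BC - alpha_X
    raise ValueError("type de décomposition inconnu")
```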
Pour transformer un bloc en un gadget (ce que nous appelons réaliser une expansion), nous devons effacer une composante optionnelle et la remplacer par un groupe de sommets de poids bien choisis. Pour cela nous avons besoin de deux informations : la première est le type de décomposition utilisé originellement pour créer cette composante optionnelle, ainsi que les poids associés ; ces informations sont encodées dans ce que nous appelons un pré-label. La seconde information nécessaire est le type de la classe basique dans laquelle cette composante optionnelle va finir (car tous les gadgets ne préservent pas toutes les classes basiques) ; cette information est encodée dans ce que nous appelons un label. Remarquons que le pré-label est connu juste après la décomposition du trigraphe, alors que le label n'est connu qu'après, lorsque le trigraphe est complètement décomposé. Formalisons cela.

Soit S une composante optionnelle d'un trigraphe bigame T. Un pré-label de S est défini par :
— (“Complément de 2-joint impair”, αA, αB, αX) avec αA, αB et αX des entiers, si S est une arête optionnelle.
— (“2-joint impair”, αAC, αBC, αC, αX) avec αAC, αBC, αC et αX des entiers, si S est une arête optionnelle et qu'aucun sommet de T n'est complet à S.
— (“Complément de 2-joint pair”, αA, αB, αX) avec αA, αB et αX des entiers, si S est une composante lourde.
— (“2-joint pair”, αAC, αBC, αC, αX) avec αAC, αBC, αC et αX des entiers, si S est une composante légère.
Remarquons que certains types de composantes optionnelles sont “éligibles” à la fois au premier et au deuxième pré-label. Un pré-label doit être vu comme “la décomposition à partir de laquelle la composante optionnelle est construite”. Quand T est un trigraphe et S est un ensemble de composantes optionnelles de T, une fonction de pré-label pour (T, S) est une fonction qui associe à chaque S ∈ S un pré-label. Il est important de remarquer que S est seulement un ensemble de composantes optionnelles, donc certaines composantes optionnelles peuvent ne pas avoir de pré-label.

Notons que la définition suivante des labels est un peu ambiguë. En effet, lorsqu'on parle de “la classe basique contenant le trigraphe”, il est possible que certains trigraphes soient contenus dans plusieurs classes basiques. C'est typiquement le cas des petits trigraphes et des trigraphes complets par exemple. Il est important de voir que cela n'est pas un problème : si un trigraphe appartient à plusieurs classes de base, notre algorithme choisit arbitrairement une de ces classes, et produit un résultat correct. Pour ne pas compliquer les notations, ce point n'est pas complètement formalisé dans notre algorithme. Pour les trigraphes doublés, il y a une autre ambiguïté. Si T est un trigraphe doublé et (X, Y) est une bonne partition de T, une arête optionnelle uv de T est une arête de couplage si u, v ∈ X et une antiarête de couplage si u, v ∈ Y. Dans certains cas dégénérés, une arête optionnelle d'un trigraphe doublé peut être à la fois une arête de couplage et une antiarête de couplage suivant la bonne partition choisie ; cependant une fois la bonne partition choisie il n'y a plus d'ambiguïté. Là encore ce n'est pas un problème : lorsqu'une arête optionnelle est ambiguë, notre algorithme choisit arbitrairement une bonne partition.

Soit S une arête optionnelle d'un trigraphe bigame.
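Avant de définir les labels, une représentation possible (et purement hypothétique) des pré-labels ci-dessus est la suivante :

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PreLabel:
    # "Complément de 2-joint impair", "2-joint impair",
    # "Complément de 2-joint pair" ou "2-joint pair"
    type_decomposition: str
    alpha_A: Optional[int] = None    # présents pour les compléments de 2-joint
    alpha_B: Optional[int] = None
    alpha_AC: Optional[int] = None   # présents pour les 2-joints
    alpha_BC: Optional[int] = None
    alpha_C: Optional[int] = None
    alpha_X: Optional[int] = None    # commun aux quatre cas

# Une fonction de pré-label pour (T, S) se représente alors par un simple
# dictionnaire : composante optionnelle de S -> PreLabel.
```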
Un label pour S est une paire L 0 = (L, N), tel que L est un pré-label et N est une des étiquettes suivantes : “biparti”, “complément de bipartite”, “line”, “complément de line”, “couplage doublé”, “anticouplage doublé”. On dit que L 0 étend L. l’étiquette ajoutée au pré-label d’une composante optionnelle S doit être pensée comme “la classe basique dans laquelle la composante S finit une fois que le trigraphe est complétement décomposé”. Quand T est un trigraphe et S est un ensemble de composantes optionnelles de T, une fonction de label pour (T, S) est une fonction qui associe à chaque S ∈ S un label. Dans ces conditions, on dit que T est étiqueté. Comme pour les pré-labels, les composantes optionnelles qui ne sont pas dans S ne reçoivent pas de label. Soit T un trigraphe étiqueté, S un ensemble de composantes optionnelles de T et L une fonction de label pour (T, S). L’expansion de (T, S,L), est le trigraphe obtenu à partir de T après avoir effectué pour chaque S ∈ S de label L l’opération suivante : 1. Si L = ((“Complément de 2-joint impair”, αA, αB, αX), N) pour une étiquette N (donc S est une arête optionnelle ab) : transformer ab en arête forte, donner le poids αA au sommet a et le poids αB au sommet b. 2. Si L = ((“2-joint impair”, αAC, αBC, αC, αX), N) pour une étiquette N (donc S est une arête optionnelle ab) : transformer ab en arête forte et : — Si N est une des étiquettes suivantes : “biparti”, “complément de line”, ou “couplage doublé”, alors ajouter un sommet a 0 , un sommet b 0 , rendre a 0 fortement complet à N(a) \ {b}, rendre b fortement complet à N(b) \ {a}, et donner les poids αAC +αBC −αC −αX, αX −αBC, αAC +αBC −αC −αX et αX − αAC à respectivement a, a 0 , b et b 0 — Si N est une des étiquettes suivantes : “complément de biparti”, “line” ou “anticouplage doublé”, alors ajouter un sommet a 0 , rendre a 0 fortement complet à {a} ∪ N(a) \ {b} et donner les poids αAC − αC, αX − αBC et αBC − αC à respectivement a, a 0 et b 3. Si L = ((“Complément de 2-joint pair”, αA, αB, αX), N) pour une étiquette N (donc S est composée de deux arêtes optionnelles ac et cb et c est lourd) : supprimer le sommet c, et donner les poids αA au sommet a et αB au sommet b. 4. Si L = ((“2-joint pair”, αAC, αBC, αC, αX), N) pour une étiquette N (donc S est composée de deux arêtes optionnelles ac et cb, et c est léger) : transformer ac et cb en arête forte, donner les poids αX − αBC, αX − αAC, et αX + αC − αAC − αBC à respectivement a, b et c. 69CHAPITRE 5. CALCUL DU STABLE MAXIMUM L’expansion doit être vue comme “ce qui est obtenu si on utilise les gadgets comme défini dans la section 5.2 au lieu des blocs de décomposition comme défini dans le chapitre 2”. Théorème 5.9. Supposons que T est un trigraphe qui est dans une classe basique d’étiquette N, S est l’ensemble des composantes optionnelles de T et L est une fonction de label pour T, tels que pour tout S ∈ S de label L, un des points suivants est vérifié : — L = (. . . , N) avec N une des étiquettes suivantes : “biparti”, “complé- ment de biparti”, “line” ou “complément de line” ; ou — N = “doublé”, S est une arête de couplage de T et L = (. . . , “couplage doublé”); ou — N = “doublé”, S est une antiarête de couplage de T et L = (. . . , “anticouplage doublé”). Alors l’expansion de (T, S,L) est un trigraphe basique. Démonstration. À partir de nos hypothèses, T est basique. Donc il suffit de dé- montrer que l’expansion d’une composante optionnelle S préserve le fait d’être basique, et le résultat suivra par induction sur S. Soit T 0 l’expansion. 
Dans plusieurs cas (c’est à dire dans les cas 1, 3 et 4) l’expansion consiste simplement à transformer des arêtes optionnelles en arêtes fortes et potentiellement supprimer des sommets. D’après 2.3, ces opérations préservent le fait d’être basique. Nous avons donc simplement à étudier le cas 2 de la définition des expansions. Nous pouvons donc supposer que S est une arête optionnelle ab. Il est facile de voir que l’expansion comme définie dans le cas 2 préserve les trigraphes bipartis et les compléments de trigraphes bipartis ; donc si N ∈ {“biparti”, “complément de biparti”} le résultat est prouvé. Supposons que N =“line”, et donc T est un line trigraphe. Soit G la réalisation complète de T, et R le graphe biparti, tel que G = L(R). Donc a est une arête xaya dans R, et b est un arête yaxb. Puisque T est un line trigraphe, toute clique de taille au moins 3 dans T est une clique forte, et donc a et b n’ont pas de voisin commun dans T. Donc tous les voisins de a, à l’exception de b, sont des arêtes incidentes à xa et pas à ya. Soit R0 le graphe obtenu à partir de R en ajoutant une arête pendante e à xa. Remarquons que L(R0 ) est isomorphe à la réalisation complète de T 0 (l’arête e correspond au nouveau sommet a 0 ), et donc T 0 est un line trigraphe. Supposons maintenant que N =“complément de line”, donc T est le complé- ment d’un line trigraphe. Puisque toute clique de taille au moins 3 dans T est une 705.3. CALCULER α clique forte, V (T) = N(a) ∪ N(b). Supposons qu’il existe u, v ∈ N(a) \ N(b), tels que u est adjacent à v. Puisque T est un line trigraphe, et que uv est une semiarête, alors {u, v, b} est une clique de taille 3 dans T. Soit R le graphe biparti, tel que la réalisation complète de T soit L(R). Alors dans R aucune arête de u, v, a ne partage d’extrémité, cependant b partage une extrémité avec ces trois arêtes, c’est une contradiction. Ceci prouve que N(a) \ N(b) (et par symétrie que N(b) \ N(a)) est un stable fort dans T. Comme N(a) ∩ N(b) = ∅, alors T est biparti, et donc comme précédemment, T 0 est basique. On peut donc supposer que T est un trigraphe doublé avec (X, Y ) une bonne partition. Si S = ab est une arête de couplage de T, alors ajouter les sommets a 0 , b0 à X produit une bonne partition de T 0 . Si S = ab est une antiarête de couplage de T, alors ajouter le sommet a 0 à Y produit une bonne partition de T 0 . Dans tous les cas T 0 est basique et le résultat est prouvé. Soit T un trigraphe, S un ensemble de composantes optionnelles de T, L une fonction de label de (T, S) et T 0 l’expansion de (T, S,L). Soit X ⊆ V (T). On définit l’expansion de X ⊆ V (T) en X0 comme suit. On part avec X0 = X et on effectue les opérations suivantes pour tout S ∈ S. 1. Si L = ((“Complément de 2-joint impair”, αA, αB, αX), N) pour une étiquette N (donc S est une arête optionnelle ab), ne pas modifier X0 . 2. Si L = ((“2-joint impair”, αAC, αBC, αC, αX), N) pour une étiquette N(donc S est une arête optionnelle ab) : — Si N est une des étiquettes suivantes : “biparti”, “complément de line”, ou “couplage doublé”, faire : si a ∈ X alors ajouter a 0 à X0 , et si b ∈ X alors ajouter b 0 à X0 . — Si N est une des étiquettes suivantes : “complément de biparti”, “line” ou “anticouplage doublé”, faire : si a ∈ X alors ajouter a 0 à X0 . 3. Si L = ((“Complément de 2-joint pair”, αA, αB, αX), N) pour une étiquette N (donc S est composée de deux arêtes optionnelles ac et cb et c est lourd), faire : si c ∈ X, alors supprimer c de X0 . 4. 
Si L = ((“2-joint pair”, αAC, αBC, αC, αX), N) pour une étiquette N (donc S est composée de deux arêtes optionnelles ac et cb et c est léger) ne pas modifier X0 . 71CHAPITRE 5. CALCUL DU STABLE MAXIMUM Théorème 5.10. Avec les notations ci-dessus, si (X1, X2) est un 2- joint propre (resp. le complément d’un 2-joint propre) de T avec (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2), alors (X0 1 , X0 2 ) est un 725.3. CALCULER α 2-joint propre (resp. le complément d’un 2-joint propre) de T 0 avec (A0 1 , B0 1 , C0 1 , A0 2 , B0 2 , C0 2 ) une affectation de (X0 1 , X0 2 ), de même parité que (X1, X2). (Remarquons que la notion de parité fait sens pour T 0 , puisque T 0 est de Berge d’après 5.6, 5.7 et 5.8). Démonstration. Directe à partir des définitions. Théorème 5.11. Il existe un algorithme avec les spécifications suivantes : Entrée : Un triplet (T, S,L), tels que T est un trigraphe de Berge apprivoisé, S est un ensemble de composantes optionnelles de T et L est une fonction de pré-label pour (T, S). Sortie : Une fonction de label L 0 pour (T, S) qui étend L, et un stable fort pondéré de poids maximum de l’expansion de (T, S,L 0 ). Complexité en temps : O(n 5 ) Démonstration. Nous décrivons un algorithme récursif. La première étape de l’algorithme utilise 5.1 pour vérifier que T est basique. Remarquons que si T est un trigraphe doublé, l’algorithme de 5.1 calcule également quelles arêtes optionnelles sont des arêtes de couplage, et quelles arêtes optionnelles sont des antiarêtes de couplage. Commençons par supposer que T est dans une classe basique de nom N (c’est en particulier le cas lorsque |V (T)| = 1). Nous étendons la fonction de pré-label L en une fonction de label L 0 comme suit : si N 6=“doublé”, alors on ajoute N à tous les pré-labels et sinon, pour tout S ∈ S de label L, on ajoute “couplage doublé” (resp. “anticouplage doublé”) à L si S est une arête de couplage (resp. antiarête de couplage). La fonction de label obtenue vérifie les hypothèses de 5.9, donc l’expansion T 0 de (T, S,L 0 ) est basique, et en exécutant l’algorithme de 5.1 sur T 0 , nous obtenons un stable fort pondéré de poids maximum de T 0 en temps O(n 4 ). Donc, nous pouvons bien calculer une fonction de label L 0 pour T, S qui étend L, et un stable fort pondéré de poids maximum de l’expansion de (T, S,L 0 ). Supposons maintenant que T n’est pas basique. Puisque T est un trigraphe de Berge apprivoisé, d’après 2.10, nous savons que T se décompose par un 2-joint ou le complément d’un 2-joint. Dans [7], un algorithme fonctionnant en temps O(n 4 ) est donné pour calculer un 2-joint dans un graphe quelconque. En fait les 2-joints comme décrit dans [7] ne sont pas exactement ceux que nous utilisons : Le dernier point de notre définition n’a pas besoin d’être vérifié (celui qui assure 73CHAPITRE 5. CALCUL DU STABLE MAXIMUM qu’aucun côté du 2-joint n’est réduit à un chemin de longueur exactement 2). Cependant la méthode du Théorème 4.1 de [7] montre comment ajouter ce type de contraintes sans perte de temps. Il est facile d’adapter cette méthode pour la dé- tection de 2-joint dans un trigraphe (nous présentons d’ailleurs dans le chapitre 6 un algorithme similaire). Nous pouvons donc calculer la décomposition nécessaire en temps O(n 4 ). Nous calculons alors les blocs TX et TY comme défini dans le chapitre 2. Remarquons que tout membre de S est une arête optionnelle de seulement un des blocs TX ou TY . Nous appelons SX (resp. SY ) l’ensemble des éléments de S qui sont dans TX (resp. TY ). 
Soit S la composante optionnelle marquée utilisée pour créer le bloc TY. Observons que pour tout u ∈ S, il existe un sommet v ∈ X tel que NT(v) ∩ Y = NTY(u) ∩ Y. De même pour TX. Donc, la fonction de pré-label L pour (T, S) induit naturellement une fonction de pré-label LX pour (TX, SX) et une fonction de pré-label LY pour (TY, SY) (chaque S ∈ SX reçoit le même pré-label que celui dans L, et de même pour SY). Dans la suite, « la décomposition » réfère à la décomposition qui a été utilisée pour construire TX et TY (c'est une des étiquettes “complément de 2-joint pair”, “complément de 2-joint impair”, “2-joint pair” ou “2-joint impair”) et nous utilisons nos notations habituelles pour l'affectation de cette décomposition. Sans perte de généralité, on peut supposer que |V(TX)| ≤ |V(TY)|. D'après 2.11, TX, TY sont des trigraphes de Berge bigames, et d'après 2.12, ils sont apprivoisés.

Soit S la composante optionnelle marquée qui est utilisée pour créer le bloc TY. On définit S′Y = SY ∪ {S}. On peut maintenant construire une fonction de pré-label LY pour S′Y comme suit. Toutes les composantes optionnelles dans SY conservent leur pré-label. La composante marquée S reçoit le pré-label suivant :
— Si la décomposition est un complément de 2-joint impair, alors calculer récursivement αA = α(TX|A1), αB = α(TX|B1) et αX = α(TX|X), et définir le pré-label de S comme (“Complément de 2-joint impair”, αA, αB, αX). Remarquons que dans ce cas |S| = 2.
— Si la décomposition est un 2-joint impair, alors calculer récursivement αAC = α(TX|(A1 ∪ C1)), αBC = α(TX|(B1 ∪ C1)), αC = α(TX|C1) et αX = α(TX|X), et définir le pré-label de S comme (“2-joint impair”, αAC, αBC, αC, αX). Remarquons que dans ce cas |S| = 2 et aucun sommet de TY \ S n'est fortement complet à S.
— Si la décomposition est un complément de 2-joint pair, alors calculer récursivement αA = α(TX|A1), αB = α(TX|B1) et αX = α(TX|X), et définir le pré-label de S comme (“Complément de 2-joint pair”, αA, αB, αX). Remarquons que dans ce cas |S| = 3 et S est une composante lourde.
— Si la décomposition est un 2-joint pair, alors calculer récursivement αAC = α(TX|(A1 ∪ C1)), αBC = α(TX|(B1 ∪ C1)), αC = α(TX|C1) et αX = α(TX|X), et définir le pré-label de S comme (“2-joint pair”, αAC, αBC, αC, αX). Remarquons que dans ce cas |S| = 3 et S est une composante légère.
Maintenant (TY, S′Y) a une fonction de pré-label LY. Nous exécutons récursivement notre algorithme pour (TY, S′Y, LY). Nous obtenons une extension L′Y de LY et un stable fort pondéré de poids maximum de l'expansion T′Y de (TY, S′Y, L′Y). Nous utilisons L′Y pour terminer la construction de L′, en utilisant pour tout S ∈ SY la même extension que celle utilisée dans L′Y pour étendre LY. Nous avons donc maintenant une extension L′ de L. Soit T′ l'expansion de (T, S, L′). Remarquons maintenant que d'après 5.10, T′Y est exactement le gadget pour T′, comme défini dans la section 5.2. Donc, α(T′) peut être calculé à partir de α(T′Y), comme expliqué dans 5.6, 5.7, ou 5.8. Donc, l'algorithme fonctionne correctement lorsqu'il renvoie L′ et le stable fort pondéré de poids maximum que nous venons de calculer.

Analyse de complexité : avec notre manière de construire nos blocs de décomposition, nous avons |V(TX)| − 3 + |V(TY)| − 3 ≤ n et, d'après 2.9(viii), nous avons 6 ≤ |V(TX)|, |V(TY)| ≤ n − 1. Rappelons que nous avons supposé que |V(TX)| ≤ |V(TY)|. Soit T(n) la complexité de notre algorithme.
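Le déroulement récursif qui vient d'être décrit peut se schématiser ainsi, avant de passer à l'analyse de complexité (esquisse Python très simplifiée ; toutes les fonctions auxiliaires, est_basique, etendre_labels, resoudre_basique, expansion, trouver_decomposition, construire_blocs, pre_label_depuis_petit_bloc, restreindre et combiner, sont des hypothèses de présentation résumant les étapes du texte) :

```python
def stable_max(T, S, L):
    """Renvoie (L', un stable fort pondéré maximum de l'expansion de (T, S, L'))."""
    if est_basique(T):
        L_prime = etendre_labels(T, S, L)        # ajoute l'étiquette de la classe basique
        return L_prime, resoudre_basique(expansion(T, S, L_prime))

    decomposition = trouver_decomposition(T)     # 2-joint ou complément de 2-joint, en O(n⁴)
    TX, TY, SX, SY, S_marque = construire_blocs(T, S, decomposition)

    # Au plus quatre appels récursifs sur le petit bloc TX : les valeurs α obtenues
    # forment le pré-label de la composante marquée de TY.
    pre_label = pre_label_depuis_petit_bloc(TX, decomposition, stable_max)

    L_Y = restreindre(L, SY)
    L_Y[S_marque] = pre_label
    L_Y_prime, stable_Y = stable_max(TY, SY | {S_marque}, L_Y)

    # L' étend L comme L'_Y étend L_Y ; α(T') se déduit de α(T'_Y) via 5.6, 5.7 ou 5.8.
    return combiner(L, L_Y_prime, stable_Y, decomposition)
```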
Pour chaque type de décomposition, nous effectuons au plus quatre appels récursifs sur le petit bloc, c'est-à-dire TX, et un appel récursif sur le grand bloc TY. Nous avons donc T(n) ≤ dn⁴ lorsque le trigraphe est basique et sinon T(n) ≤ 4T(|V(TX)|) + T(|V(TY)|) + dn⁴, avec d la constante venant de la complexité de trouver un 2-joint ou un complément de 2-joint et de trouver α dans les trigraphes basiques. Nous pouvons maintenant démontrer qu'il existe une constante c telle que T(n) ≤ c·n⁵. Notre démonstration est par induction sur n. Nous montrons qu'il existe une constante N telle que l'étape d'induction passe pour tout n ≥ N (cet argument, et en particulier N, ne dépend pas de c). Le cas de base de notre induction porte alors sur les trigraphes basiques ou sur les trigraphes qui ont au plus N sommets. Pour ces trigraphes, il est clair que la constante c existe.

Nous faisons la démonstration de l'induction uniquement dans le cas du 2-joint pair (potentiellement dans le complément). La démonstration pour le 2-joint impair est similaire. Nous définissons n1 = |V(TX)|. Nous avons T(n) ≤ 4T(n1) + T(n + 6 − n1) + dn⁴ pour tous n1 et n vérifiant 7 ≤ n1 ≤ ⌊n/2⌋ + 3. Définissons f(n1) = n⁵ − 4n1⁵ − (n + 6 − n1)⁵ − dn⁴. Nous montrons qu'il existe une constante N telle que pour tout n ≥ N et pour tout n1 tel que 7 ≤ n1 ≤ ⌊n/2⌋ + 3, f(n1) ≥ 0. Par hypothèse d'induction, ceci prouve notre résultat. Un simple calcul montre que :
f′(n1) = −20n1⁴ + 5(n + 6 − n1)⁴
f″(n1) = −80n1³ − 20(n + 6 − n1)³
Puisque n + 6 − n1 est positif, nous avons f″ ≤ 0. Donc f′ est décroissante, et il est facile de voir que si n est suffisamment grand, f′(n1) est positif pour n1 = 7 et négatif pour n1 = ⌊n/2⌋ + 3. Le minimum de f est donc atteint pour n1 = 7 ou n1 = ⌊n/2⌋ + 3. Puisque f(7) = n⁵ − (n − 1)⁵ − P(n) avec P un polynôme tel que deg(P) ≤ 4, si n est suffisamment grand, alors f(7) est positif. De même, f(⌊n/2⌋ + 3) ≥ n⁵ − 5(⌈n/2⌉ + 3)⁵ − dn⁴ ; là encore, si n est suffisamment grand, f(⌊n/2⌋ + 3) est positif. Donc, il existe une constante N telle que pour tout n ≥ N, f(n1) ≥ 0. Cela montre que notre algorithme s'exécute en temps O(n⁵).

Théorème 5.12. On peut calculer un stable fort pondéré de poids maximum d'un trigraphe T de Berge apprivoisé en temps O(n⁵).
Démonstration. Exécuter l'algorithme de 5.11 pour (T, ∅, ∅).

Théorème 5.13. On peut calculer un stable pondéré de poids maximum d'un graphe G de Berge apprivoisé en temps O(n⁵).
Démonstration. D'après 5.12 et le fait que les graphes de Berge peuvent être vus comme des trigraphes de Berge.

Contrairement aux deux chapitres précédents, les résultats de ce chapitre ne peuvent pas s'étendre aux classes closes par k-joint. La raison principale est que les lemmes 5.3 et 5.4, qui sont nécessaires pour assurer que les poids dans les blocs sont tous positifs ou nuls, ne sont vrais que dans le cas des graphes de Berge. Plus précisément, nous avons besoin que tous les chemins entre les deux ensembles frontières du 2-joint (avec nos notations : Ai et Bi) soient de même parité (ce qui est prouvé par le lemme 2.4 pour les graphes de Berge).

On peut voir notre preuve comme étant en deux temps. Nous commençons par construire l'arbre de décomposition. Lors de cette étape, nous devons n'utiliser que les blocs qui préservent la classe pour pouvoir continuer à décomposer.
Puis, une fois le trigraphe complètement décomposé, nous modifions les blocs afin de pouvoir faire remonter l'information des blocs basiques au trigraphe complet. Un point important lors de la reconstruction est de bien choisir sur quel bloc poser le plus de questions : en effet, si l'on ne fait pas attention, l'algorithme devient exponentiel.

Dans les algorithmes présentés, nous nous sommes focalisés sur le calcul de la valeur du stable maximum. Il est facile et classique de convertir un tel algorithme en un algorithme qui explicite un stable maximum pour un surcoût linéaire. Cependant, comme il est possible pour les classes de base d'expliciter à chaque fois un stable maximum, il est possible, en utilisant de bonnes structures de données, de suivre les sommets du stable maximum à chaque étape de l'induction et ainsi d'expliciter un stable maximum sans surcoût. Enfin, il faut noter qu'en utilisant le théorème 1.1 nous obtenons un algorithme de coloration en temps O(n⁷).

Chapitre 6 : Décompositions extrêmes

Les résultats de ce chapitre ont été obtenus avec Maria Chudnovsky, Nicolas Trotignon et Kristina Vušković ; ils font l'objet d'un article [15] soumis à Journal of Combinatorial Theory, Series B.

6.1 Décompositions extrêmes

Dans cette section nous allons démontrer que les trigraphes de Berge apprivoisés qui ne sont pas basiques admettent des décompositions extrêmes, c'est-à-dire des décompositions telles qu'un des blocs de décomposition est basique. Ce résultat n'est pas trivial, puisqu'il existe dans [37] un exemple montrant que les graphes de Berge généraux n'admettent pas toujours de 2-joint extrême. Les décompositions extrêmes sont parfois très utiles pour faire des preuves par induction. En fait nous ne sommes pas capables de démontrer que tous les trigraphes de Berge apprivoisés admettent un 2-joint ou un complément de 2-joint extrême. Pour démontrer l'existence de décompositions extrêmes, nous devons inclure un nouveau type de décomposition, la paire homogène, dans notre ensemble de décompositions. Il est intéressant de remarquer que cette nouvelle décomposition est déjà utilisée dans de nombreuses variantes du théorème 2.5. Notons que nous avons besoin d'étendre notre ensemble de décompositions afin d'obtenir un théorème de structure extrême. Il est donc possible que les star-cutsets, ou plus généralement les skew-partitions, ne soient pas des décompositions assez générales, mais qu'en les étendant il soit possible d'obtenir un théorème de décomposition extrême. Par exemple dans [1], nous avons pu, avec Aboulker, Radovanović, Trotignon et Vušković, en étendant la définition du star-cutset, obtenir une décomposition extrême d'une classe de graphes particulière. Grâce à cette décomposition extrême nous avons alors pu démontrer notre hypothèse d'induction.

Figure 6.1 – Un exemple de graphe n'admettant pas de 2-joint extrême

Une paire homogène propre d'un trigraphe T est une paire de sous-ensembles disjoints (A, B) de V(T), telle que si A1, A2 sont respectivement les ensembles de tous les sommets fortement complets à A et fortement anticomplets à A, et que B1, B2 sont définis de manière similaire, alors :
— |A| > 1 et |B| > 1 ;
— A1 ∪ A2 = B1 ∪ B2 = V(T) \ (A ∪ B) (et en particulier tout sommet de A a un voisin et un antivoisin dans B et vice versa) ; et
— les quatre ensembles A1 ∩ B1, A1 ∩ B2, A2 ∩ B1, A2 ∩ B2 sont tous non-vides.
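À titre d'illustration, la définition ci-dessus se vérifie directement ; dans l'esquisse (hypothétique) suivante, adj(u, v) vaut 1 pour une arête forte, 0 pour une arête optionnelle et −1 pour une antiarête forte :

```python
def est_paire_homogene_propre(sommets, A, B, adj):
    """Teste les trois conditions de la définition d'une paire homogène propre."""
    reste = sommets - A - B
    A1 = {v for v in reste if all(adj(v, a) == 1 for a in A)}    # fortement complets à A
    A2 = {v for v in reste if all(adj(v, a) == -1 for a in A)}   # fortement anticomplets à A
    B1 = {v for v in reste if all(adj(v, b) == 1 for b in B)}
    B2 = {v for v in reste if all(adj(v, b) == -1 for b in B)}
    return (len(A) > 1 and len(B) > 1
            and A1 | A2 == reste and B1 | B2 == reste
            and all([A1 & B1, A1 & B2, A2 & B1, A2 & B2]))
```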
Dans ces circonstances, on dit que (A, B, A1 ∩ B2, A2 ∩ B1, A1 ∩ B1, A2 ∩ B2) est une affectation de la paire homogène. Une manière de démontrer l’existence d’une décomposition extrême est de considérer un “côté” de la décomposition et de le minimiser, pour obtenir ce que nous appelons une fin. Cependant pour les paires homogènes, les deux côtés (qui sont A ∪ B et V (T) \ (A ∪ B) avec nos notations habituelles) ne sont pas symé- triques comme le sont les deux côtés d’un 2-joint. Nous devons donc décider quel côté minimiser. Nous choisissons de minimiser le côté A ∪ B. Formellement nous devons faire la distinction entre un fragment, qui est n’importe quel côté d’une décomposition et un fragment propre qui est le côté qui va être minimisé et qui ne peut donc pas être le côté V (T) \ (A ∪ B) d’une paire homogène. Toutes les définitions sont données formellement ci-dessous. 806.1. DÉCOMPOSITIONS EXTRÊMES A B Figure 6.2 – Paire Homogène Nous commençons par modifier la définition des fragments pour inclure les paires homogènes. À partir de maintenant, un ensemble X ⊆ V (T) est un fragment d’un trigraphe T si une des propriétés suivantes est vérifiée : 1. (X, V (T) \ X) est un 2-joint propre de T ; 2. (X, V (T) \ X) est un complément de 2-joint propre de T ; 3. il existe une paire homogène propre (A, B) de T telle que X = A ∪ B ou X = V (T) \ (A ∪ B). Un ensemble X ⊆ V (T) est un fragment propre d’un trigraphe T si une des propriétés suivantes est vérifiée : 1. (X, V (T) \ X) est un 2-joint propre de T ; 2. (X, V (T) \ X) est un complément de 2-joint propre de T ; 3. il existe une paire homogène propre (A, B) de T telle que X = A ∪ B. Une fin de T est un fragment propre X de T tel qu’aucun sous-trigraphe induit propre de X est un fragment propre de T. Remarquons qu’un fragment propre de T est un fragment propre de T, et une fin de T est une fin de T. De plus un fragment dans T est encore un fragment dans T. Nous avons déjà défini les blocs de décomposition d’un 2-joint et d’un complément de 2-joint. Nous définissons maintenant les blocs de décomposition d’une paire homogène. Si X = A ∪ B avec (A, B, C, D, E, F) une affectation d’une paire homogène propre (A, B) de T, alors nous construisons le bloc de décomposition TX en rapport avec X de la manière suivante. Nous partons de T|(A ∪ B). Nous ajoutons ensuite 81CHAPITRE 6. DÉCOMPOSITIONS EXTRÊMES deux nouveaux sommets marqués c et d tels que c est fortement complet à A, d est fortement complet à B, cd est une arête optionnelle et il n’y a aucune autre arête entre {c, d} et A ∪ B. Ici encore, {c, d} est appelé la composante marquée de TX. Si X = C ∪ D ∪ E ∪ F avec (A, B, C, D, E, F) une affectation d’une paire homogène propre (A, B) de T, alors nous construisons le bloc de décomposition TX en rapport avec X de la manière suivante. Nous partons de T|X. Nous ajoutons alors deux nouveaux sommets marqués a et b tels que a est fortement complet à C ∪ E, b est fortement complet à D ∪ E, ab est une arête optionnelle et il n’y a aucune autre arête entre {a, b} et C ∪ D ∪ E ∪ F. Ici encore, {a, b} est appelé la composante marquée de TX. Théorème 6.1. Si X est un fragment d’un trigraphe T de Berge bigame, alors TX est un trigraphe de Berge bigame. Démonstration. D’après la définition de TX, il est clair que tous ses sommets sont dans au plus une arête optionnelle ou sont lourds ou sont légers. Il ne nous reste donc plus qu’à démontrer que TX est de Berge. 
Si le fragment vient d’un 2-joint ou d’un complément de 2-joint, alors nous avons le résultat d’après le lemme 2.11. Si X = A ∪ B et (A, B) est une paire homogène propre de T, alors soit H un trou ou un antitrou de TX. En passant si besoin au complémentaire, on peut supposer que H est un trou. S’il contient les deux sommets marqués c, d, alors c’est un trou sur quatre sommets ou il doit contenir deux voisins forts de d dans B, donc H a longueur 6. Par conséquent, on peut supposer que H contient au plus un sommet parmi c, d et donc un trou de longueur strictement identique peut être obtenu dans T en remplaçant c ou d par un sommet de C ou de D. Par conséquent, H a longueur paire. S’il existe une paire homogène propre (A, B) de T telle que X = V (T)\(A∪B), alors puisque pour tout sommet de A a un voisin et un antivoisin dans B, nous pouvons remarquer que toute réalisation de TX est un sous-trigraphe induit d’une réalisation de T. Il suit que TX reste de Berge. Théorème 6.2. Si X est un fragment d’un trigraphe T de Berge apprivoisé, alors le bloc de décomposition TX n’a pas de skew-partition équilibrée. Démonstration. Pour démontrer ce résultat, supposons que TX ait une skewpartition équilibrée (A0 , B0 ) avec une affectation (A0 1 , A0 2 , B0 1 , B0 2 ). À partir de cette 826.1. DÉCOMPOSITIONS EXTRÊMES skew-partition équilibrée, nous allons trouver une skew-partition dans T. Nous allons alors utiliser le lemme 2.8 afin de démontrer l’existence d’une skew-partition équilibrée dans T. Ceci nous fournira une contradiction, et démontrera le résultat. Si le fragment vient d’un 2-joint ou d’un complément de 2-joint, nous avons le résultat d’après le lemme 2.12. Si X = A ∪ B et (A, B) est une paire homogène de T, alors soit (A, B, C, D, E, F) une affectation de (A, B). Puisque cd est une arête optionnelle, les sommets marqués c et d n’ont pas de voisin commun et cd domine TX. Sans perte de généralité il n’y a qu’un cas à traiter : c ∈ A0 1 et d ∈ B0 1 . Puisque B0 2 est complet à d et A0 2 est anticomplet à c, il suit que A0 2 , B0 2 ⊆ B. Maintenant (A0 1 \ {c}∪C ∪F, A0 2 , B0 1 \ {d}∪D∪E, B0 2 ) est une affectation d’une skew-partition dans T. La paire (A0 2 , B0 2 ) est équilibrée dans T car elle l’est dans TX. Par conséquent, d’après le lemme 2.8, T admet une skew-partition équilibrée, c’est une contradiction. Si X = V (T)\(A∪B) et (A, B) est une paire homogène propre de T, alors soit (A, B, C, D, E, F) une affectation de (A, B). Puisque ab est une arête optionnelle, on peut à symétrie et complémentation près supposer que a ∈ A0 1 et b ∈ A0 1 ∪ B0 1 . Si b ∈ A0 1 alors (A ∪ B ∪ A0 1 \ {a, b}, A0 2 , B0 1 , B0 2 ) est une affectation d’une skewpartition dans T. Dans tous les cas, la paire (A0 2 , B0 2 ) est équilibrée dans T car elle l’est dans TX. Par conséquent, d’après le lemme 2.8, T admet une skew-partition équilibrée, c’est une contradiction. Théorème 6.3. Si X est une fin d’un trigraphe T de Berge apprivoisé, alors le bloc de décomposition TX est basique. Démonstration. Soit T un trigraphe de Berge apprivoisé et X une fin de T. D’après le lemme 6.1, TX est un trigraphe bigame de Berge et d’après le lemme 6.2, TX n’a pas de skew-partition équilibrée. D’après le théorème 2.10, il suffit de montrer que TX n’a ni 2-joint propre, ni complément de 2-joint propre. 
En passant au complémentaire de T si nécessaire, on peut supposer qu’une des propriétés suivantes est vraie : — X = A ∪ B et (A, B) est une paire homogène propre de T ; — (X, V (T) \ X) est un 2-joint pair propre de T ; ou — (X, V (T) \ X) est un 2-joint impair propre de T. Cas 1 : X = A ∪ B avec (A, B) une paire homogène propre de T. Soit (A, B, C, D, E, F) une affectation de (A, B). 83CHAPITRE 6. DÉCOMPOSITIONS EXTRÊMES Supposons que TX admette un 2-joint propre (X1, X2). Soit (A1, B1, C1, A2, B2, C2) une affectation de (X1, X2). Puisque cd est une arête optionnelle, nous pouvons supposer que les deux sommets c, d sont tous les deux dans X2. Comme {c, d} domine fortement TX, on peut supposer que c ∈ A2 et d ∈ B2, et donc C1 = ∅. Puisque c est fortement complet à A, A1 ⊆ A et de manière similaire, B1 ⊆ B. D’après le lemme 6.2 et le lemme 2.9, |A1| ≥ 2 et |B1| ≥ 2 et comme C1 = ∅, tous les sommets de A1 ont un voisin et un antivoisin dans B1 et vice versa. Maintenant (A1, B1, C ∪ A2 \ {c}, D ∪ B2 \ {d}, E, F ∪ C2) est une affectation d’une paire homogène propre de T. Comme |X2| ≥ 3, A1 ∪ B1 est strictement inclus dans A ∪ B, c’est une contradiction. Comme A∪B est aussi une paire homogène de T, en utilisant le même argument que ci-dessus, TX n’admet pas de complément de 2-joint propre. Cas 2 : (X, V (T) \ X) est un 2-joint pair propre (X1, X2) de T, avec X = X1. Soit (A1, B1, C1, A2, B2, C2) une affectation de (X, V (T) \ X). Supposons que TX admette un 2-joint propre (X0 1 , X0 2 ). Soit (A0 1 , B0 1 , C0 1 , A0 2 , B0 2 , C0 2 ) une affectation de (X0 1 , X0 2 ). Puisque ac et bc sont des arêtes optionnelles, on peut supposer que a, b, c ∈ X0 2 . Nous affirmons maintenant que (X0 1 , V (T) \ X0 1 ) est un 2-joint propre de T et que X0 1 est strictement inclus dans X, ce qui est une contradiction. Remarquons que d’après la définition d’un 2-joint et le fait que c n’ait pas de voisin fort, X0 2 ne peut pas être réduit à {a, b, c} et par conséquent, X0 1 est strictement inclus dans X. Puisque c n’a pas de voisin fort, c ∈ C 0 2 . Puisque a et b n’ont pas de voisin fort commun dans TX1 , il n’y a à symétrie près que trois cas : ou bien a ∈ A0 2 , b ∈ B0 2 , ou bien a ∈ A0 2 , b ∈ C 0 2 , ou bien a, b ∈ C 0 2 . Si a ∈ A0 2 et b ∈ B0 2 , alors (A0 1 , B0 1 , C0 1 , A2 ∪A0 2 \ {a}, B2 ∪B0 2 \ {b}, C2 ∪C 0 2 \ {c}) est une affectation d’un 2-joint de T. Si a ∈ A0 2 et b ∈ C 0 2 , alors (A0 1 , B0 1 , C0 1 , A2 ∪ A0 2 \ {a}, B0 2 , B2 ∪ C2 ∪ C 0 2 \ {b, c}) est une affectation d’un 2-joint de T. Si a ∈ C 0 2 et b ∈ C 0 2 , alors (A0 1 , B0 1 , C0 1 , A0 2 , B0 2 , X2 ∪ C 0 2 \ {a, b, c}) est une affectation d’un 2-joint de T. D’après le lemme 6.2 et le lemme 2.9 tous ces 2-joints sont propres, et nous avons donc une contradiction. Supposons que TX admette un complément de 2-joint propre (X0 1 , X0 2 ). Puisque c n’a pas de voisin fort nous avons une contradiction. Case 3 : (X, V (T) \ X) est un 2-joint propre (X1, X2) de T, avec X = X1. Soit (A1, B1, C1, A2, B2, C2) une affectation de (X, V (T) \ X). Supposons que TX admette un 2-joint propre (X0 1 , X0 2 ). Soit 846.2. CALCULER UNE FIN (A0 1 , B0 1 , C0 1 , A0 2 , B0 2 , C0 2 ) une affectation de (X0 1 , X0 2 ). Puisque ab est une arête optionnelle, on peut supposer que a, b ∈ X0 2 . Maintenant nous affirmons que (X0 1 , V (T) \ X0 1 ) est un 2-joint propre de T, ce qui est une contradiction, car X0 2 ne peut pas être réduit à seulement {a, b} (d’après la définition d’un 2-joint), donc X0 1 est strictement inclus dans X. 
Comme a et b n’ont pas de voisin fort commun dans TX1 , il y a à symétrie près seulement trois cas : ou bien a ∈ A0 2 , b ∈ B0 2 , ou bien a ∈ A0 2 , b ∈ C 0 2 , ou bien a, b ∈ C 0 2 . Si a ∈ A0 2 et b ∈ B0 2 , alors (A0 1 , B0 1 , C0 1 , A2 ∪ A0 2 \ {a}, B2 ∪ B0 2 \ {b}, C2 ∪ C 0 2 ) est une affectation d’un 2-joint de T. Si a ∈ A0 2 et b ∈ C 0 2 , alors (A0 1 , B0 1 , C0 1 , A2 ∪ A0 2 \ {a}, B0 2 , B2 ∪ C2 ∪ C 0 2 \ {b}) est une affectation d’un 2-joint de T. Si a ∈ C 0 2 et b ∈ C 0 2 , alors (A0 1 , B0 1 , C0 1 , A0 2 , B0 2 , X2∪C 0 2 \{a, b}) est une affectation d’un 2-joint de T. D’après le lemme 6.2 et le lemme 2.9 tous ces 2-joints sont propres, et nous avons donc une contradiction. Supposons que TX admette un complément de 2-joint propre (X0 1 , X0 2 ). Soit (A0 1 , B0 1 , C0 1 , A0 2 , B0 2 , C0 2 ) une affectation de (X0 1 , X0 2 ). Puisque ab est une arête optionnelle, on peut supposer que a, b ∈ X0 2 . Puisque a et b n’ont pas de voisin fort commun, on peut supposer que a ∈ A0 2 , b ∈ B0 2 et C 0 1 = ∅. Si C2 et C 0 2 sont nonvides, alors (A0 1 , B0 1 , B2 ∪ B0 2 \ {b}, A2 ∪ A0 2 \ {a}, C0 2 , C2) est une affectation d’une paire homogène propre de T et A0 1 ∪ B0 1 est strictement inclus dans X, c’est une contradiction (remarquons que d’après le lemme 6.2 et le lemme 2.9, |A0 1 | ≥ 2, |B0 1 | ≥ 2, et tout sommet de A0 1 a un voisin et un antivoisin dans B0 1 et vice versa). Si C2 est non-vide et que C 0 2 est vide, alors (A0 1 , B0 1 , ∅, B2∪B0 2\{b}, A2∪A0 2\{a}, C2) est une affectation d’un 2-joint propre de T (il est propre d’après le lemme 6.2 et le lemme 2.9). Si C2 est vide, alors (A0 1 , B0 1 , ∅, A2 ∪ A0 2 \ {a}, B2 ∪ B0 2 \ {b}, C0 2 ) est une affectation d’un complément de 2-join de T (ici encore il est propre d’après le lemme 6.2 et le lemme 2.9). 6.2 Calculer une fin Le but de cette section est de donner un algorithme polynomial qui étant donné un trigraphe, calcule une fin s’il en existe une. Pour résoudre ce problème nous pourrions utiliser les algorithmes existants de détection de 2-joint et de paire homogène. Le plus rapide est dans [7] pour les 2-joints et dans [24] pour les paires homogènes. Cependant cette approche pose plusieurs problèmes. Premièrement ces algorithmes fonctionnent pour les graphes et non pour les trigraphes. Il est 85CHAPITRE 6. DÉCOMPOSITIONS EXTRÊMES cependant possible de les adapter pour les faire fonctionner sur les trigraphes. Cependant il est difficile de démontrer que cette conversion des graphes aux trigraphes est correcte sans rentrer dans les détails des algorithmes. Le problème plus grave est que ces algorithmes calculent un fragment et non une fin. En fait pour les 2-joints l’algorithme de [7] calcule un ensemble X minimal pour l’inclusion tel que (X, V (G)\X) est un 2-joint, mais il peut toujours y avoir une paire homogène dans X. Pour régler ces deux problèmes nous donnons ici notre propre algorithme même si la plupart des idées de cet algorithme existaient déjà dans des travaux précédents. Notre algorithme cherche un fragment propre X. À cause de tous les points techniques des définitions des 2-joints et des paires homogènes, nous introduisons une nouvelle notion. 
Un fragment faible d’un trigraphe T est un ensemble X ⊆ V (T) tel qu’il existe des ensembles disjoints A1, B1, C1, D1, A2, B2, C2, D2 vérifiant : — X = A1 ∪ B1 ∪ C1 ∪ D1 ; — V (T) \ X = A2 ∪ B2 ∪ C2 ∪ D2 ; — A1 est fortement complet à A2 ∪ D2 et fortement anticomplet à B2 ∪ C2 ; — B1 est fortement complet à B2 ∪ D2 et fortement anticomplet à A2 ∪ C2 ; — C1 est fortement anticomplet à A2 ∪ B2 ∪ C2 ; — D1 est fortement anticomplet à A2 ∪ B2 ∪ D2 ; — |X| ≥ 4 et |V (T) \ X| ≥ 4 ; — |Ai | ≥ 1 et |Bi | ≥ 1, i = 1, 2 ; — et au moins une des propriétés suivantes est vérifiée : — C1 = D1 = ∅, C2 6= ∅, et D2 6= ∅, ou — D1 = D2 = ∅, ou — C1 = C2 = ∅. Dans ces circonstances, on dit que (A1, B1, C1, D1, A2, B2, C2, D2) est une affectation de X. Étant donné un fragment faible, on dit qu’il est de type paire homogène si C1 = D1 = ∅, C2 6= ∅, et D2 6= ∅, de type 2-joint si D1 = D2 = ∅ et de type complément de 2-joint si C1 = C2 = ∅. Remarquons qu’un fragment peut être à la fois de type 2-joint et de type complément de 2-joint (lorsque C1 = D1 = C2 = D2 = ∅). Théorème 6.4. Si T est un trigraphe de Berge apprivoisé, alors X est un fragment faible de T si et seulement si X est un fragment propre de T. Démonstration. Si X est un fragment propre, alors c’est clairement un fragment faible (les conditions |X| ≥ 4 et |V (T) \ X| ≥ 4 sont satisfaites lorsque X est un côté d’un 2-joint d’après le lemme 2.9). Prouvons l’autre sens de l’équivalence. Soit 866.2. CALCULER UNE FIN X un fragment faible et soit (A1, B1, C1, D1, A2, B2, C2, D2) une affectation de X. Si X est de type 2-joint ou complément de 2-joint, alors il est propre d’après le lemme 2.9. Par conséquent nous n’avons à traiter que le cas du fragment faible de type paire homogène et donc C1 = D1 = ∅, C2 6= ∅, et D2 6= ∅. Puisque les quatre ensembles A1, A2, B1, B2 sont tous non-vides, il suffit de vérifier que les conditions suivantes sont vérifiées : (i) Tout sommet de A1(B1) a un voisin et un antivoisin dans B1(A1). (ii) |A1| > 1 et |B1| > 1. Supposons que la condition (i) n’est pas vérifiée. En prenant le complémentaire de T si nécessaire, on peut supposer qu’il existe un sommet v ∈ A1 fortement complet à B1. Puisque {v} ∪ B1 ∪ A2 ∪ D2 n’est pas un star cutset dans T d’après le lemme 2.6, on a que A1 = {v}. Maintenant tout sommet de B1 est fortement complet à A1, et donc par le même argument, |B1| = 1. Ceci contredit l’hypothèse que |X| ≥ 4. Par conséquent la propriété (i) est vérifiée. Pour démontrer (ii) supposons que |A1| = 1. Puisque |X| ≥ 4, on a que |B1| ≥ 3. D’après (i), tout sommet de B1 est semi-adjacent à un unique sommet de A1, ce qui est impossible puisque |B1| ≥ 3 et que T est un trigraphe bigame. Par conséquent la propriété (ii) est vérifiée. Un quadruplet (a1, b1, a2, b2) de sommets d’un trigraphe T est propre s’il vérifie les propriétés suivantes : — a1, b1, a2, b2 sont deux à deux distincts ; — a1a2, b1b2 ∈ E(T); — a1b2, b1a2 ∈ E(T). Un quadruplet propre (a1, b1, a2, b2) est compatible avec un fragment faible X s’il existe une affectation (A1, B1, C1, D1, A2, B2, C2, D2) de X telle que a1 ∈ A1, b1 ∈ B1, a2 ∈ A2 et b2 ∈ B2. Nous utilisons les notations suivantes. Si x est un sommet d’un trigraphe T, N(x) désigne l’ensemble des voisins de x, N(x) l’ensemble des antivoisins de x, E(x) l’ensemble des voisins forts de x, et E ∗ (x) l’ensemble des sommets v tels que xv ∈ E ∗ (T). 87CHAPITRE 6. DÉCOMPOSITIONS EXTRÊMES Théorème 6.5. Soit T un trigraphe et Z = (a1, b1, a2, b2) un quadruplet propre de T. 
Il y a un algorithme en temps O(n 2 ) qui étant donné un ensemble R0 ⊆ V (T) de taille au moins 4 tel que Z ∩ R0 = {a1, b1}, trouve un fragment faible X compatible avec Z et tel que R0 ⊆ X, ou renvoie la propriété vraie suivante : “Il n’existe pas de fragment faible X compatible avec Z et tel que R0 ⊆ X” De plus le fragment trouvé X est minimal vis à vis de ces propriétés. C’est 886.2. CALCULER UNE FIN à dire que X ⊂ X0 pour tout fragment faible X0 vérifiant ces propriétés. Démonstration. Nous utilisons la procédure décrite dans la table 6.1. Elle essaie de construire un fragment faible R, en commençant avec R = R0 et S = V (T) \ R0, Ensuite plusieurs règles de forçage sont implémentées, disant que certains ensembles de sommets doivent être déplacés de S vers R. La variable “State” contient le type du fragment faible en train d’être considéré. Au début de la procédure le type n’est pas connu et la variable “State” contient “Unknown”. Il est facile de vérifier que les propriétés suivantes sont des invariants d’exécution (c’est à dire qu’elles sont vérifiées après chaque appel à Explore) : — R et S forment une partition de V (T), R0 ⊆ R et a2, b2 ∈ S. — Pour tout sommet v ∈ R non-marqué et tout sommet u ∈ S, uv n’est pas une arête optionnelle. — Tous les sommets non-marqués appartenant à R ∩ (E(a2) \ E(b2)) ont le même voisinage dans S, c’est à dire A (et A est un voisinage fort). — Tous les sommets non-marqués appartenant à R ∩ (E(b2) \ E(a2)) ont le même voisinage dans S, c’est à dire B (et B est un voisinage fort). — Tous les sommets non-marqués appartenant à R ∩ (E(b2) ∩ E(a2)) ont le même voisinage dans S, c’est à dire A ∪ B. — Tous les sommets non-marqués appartenant à R et non-adjacent ni à a2 ni à b2 sont fortement anticomplet à S. — Pour tout fragment faible X tel que R0 ⊆ X et a2, b2 ∈ V (T) \ X, on a que R ⊆ X et V (T) \ X ⊆ S. D’après le dernier point, tous les déplacements de sommets de S vers R sont nécessaires. Donc si un sommet de R est fortement adjacent à a2 et à b2, n’importe quel fragment faible compatible avec Z qui contient R doit être un fragment faible de type complément de 2-joint. C’est pourquoi la variable “State” est alors assignée avec la valeur 2-joint et que tous les sommets de S \ (A ∪ B) sont déplacés vers R. De la même manière, si un sommet de R est fortement antiadjacent à a2 et à b2, n’importe quel fragment faible compatible avec Z qui contient R doit être un fragment faible de type 2-joint. C’est pourquoi la variable “State” est alors assignée à la valeur 2-joint et que tous les sommets de A ∩ B sont déplacés vers R. Lorsque State = “2-joint” et qu’un sommet de R est à la fois fortement adjacent à a2 et à b2, il y a une contradiction avec la définition du complément de 2-joint, donc l’algorithme doit s’arrêter. Lorsque State = 2-joint et qu’un sommet de R est à la fois fortement adjacent à a2 et à b2, il y a une contradiction avec la définition du 2-joint donc l’algorithme doit s’arrêter. Lorsque la fonction Move essaie de déplacer a2 ou b2 dans R (ceci peut arriver si un sommet dans R est semiadjacent 89CHAPITRE 6. DÉCOMPOSITIONS EXTRÊMES Entrée : R0 un ensemble de sommets d’un trigraphe T et un quadruplet propre Z = (a1, b1, a2, b2) tel que a1, b1 ∈ R0 and a2, b2 ∈/ R0. Initialisation : R ← R0 ; S ← V (T) \ R0 ; A ← E(a1) ∩ S ; B ← E(b1) ∩ S ; State ← Unknown ; Les sommets a1, b1, a2, b2 sont laissés non-marqués. 
Pour les autres sommets de T : Mark(x) ← αβ pour tout sommet x ∈ E(a2) ∩ E(b2); Mark(x) ← α pour tout sommet x ∈ E(a2) \ E(b2); Mark(x) ← β pour tout sommet x ∈ E(b2) \ E(a2); tous les autres sommets de T sont marqués par ε ; Move(E ∗ (a1) ∩ S); Move(E ∗ (b1) ∩ S); Boucle principale : Tant que il existe un sommet x ∈ R marqué Faire Explore(x); Unmark(x); Fonction Explore(x) : Si Mark(x) = αβ et State = Unknown alors State ← 2-joint, Move(S \ (A ∪ B)); Si Mark(x) = αβ et State = 2-joint alors Move(N(x) ∩ S); Si Mark(x) = αβ et State = 2-joint alors Sortie Pas de fragment faible trouvé, Stop ; Si Mark(x) = α alors Move(A∆(E(x) ∩ S)), Move(E ∗ (x) ∩ S); Si Mark(x) = β alors Move(B∆(E(x) ∩ S)), Move(E ∗ (x) ∩ S); Si Mark(x) = ε et State = Unknown alors State ← 2-joint, Move(A ∩ B); Si Mark(x) = ε et State = 2-joint alors Move(N(x) ∩ S); Si Mark(x) = ε et State = 2-joint alors Sortie Pas de fragment faible trouvé, Stop ; Fonction Move(Y) : Cette fonction déplace simplement un sous-ensemble Y ⊂ S de S vers R. Si Y ∩ {a2, b2} 6= ∅ alors Sortie Pas de fragment faible trouvé, Stop ; R ← R ∪ Y ; A ← A \ Y ; B ← B \ Y ; S ← S \ Y ; Table 6.1 – La procédure utilisée dans le théorème 6.5 906.2. CALCULER UNE FIN à a2 et à b2), alors R ne peut pas être contenu dans un fragment compatible avec Z. Si le processus ne s’arrête pas pour une des raisons ci-dessus, alors tous les sommets de R ont été explorés et donc sont non-marqués. Donc, si |S| ≥ 4 à la fin, R est un fragment faible compatible avec Z. Plus spécifiquement, (R∩(E(a2)\ E(b2)), R ∩(E(b2) \ E(a2)), R \ (E(a2)∪ E(b2)), R ∩(E(a2)∩ E(b2)), A \ B, B \ A, S \ (A ∪ B), A ∩ B) est une affectation du fragment faible R. Puisque tous les déplacements de S vers R sont nécessaires, le fragment est minimal comme affirmé. Cela implique également que si |S| ≤ 3, alors il n’existe pas de fragment faible vérifiant les propriétés, dans ce cas l’algorithme renvoie qu’il n’existe pas de fragment vérifiant les propriétés requises. Complexité : Le voisinage et l’antivoisinage d’un sommet de R n’est considéré qu’au plus une fois. Donc, globalement l’algorithme utilise un temps O(n 2 ). Théorème 6.6. Il existe un algorithme fonctionnant en temps O(n 5 ), qui prend en entré un trigraphe T de Berge apprivoisé, et qui renvoie une fin X de T (si une fin existe) et le bloc de décomposition TX. Démonstration. Rappelons que d’après le lemme 6.4, les fragments faibles de T sont des fragments propres. Nous commençons par décrire un algorithme de complexité O(n 8 ), nous expliquerons ensuite comment l’accélérer. Nous supposons que |V (T)| ≥ 8 car sinon il n’existe pas de fragment propre. Pour tout quadruplet propre Z = (a1, a2, b1, b2) et pour toute paire de sommets u, v de V (T) \ {a1, a2, b1, b2}, nous appliquons le lemme 6.5 à R0 = {a1, b1, u, v}. Cette méthode détecte pour tout Z et tout u, v un fragment propre (s’il en existe) compatible avec Z, contenant u, v et minimal vis à vis de ces propriétés. Parmi tous ces fragments, nous choisissons celui de cardinalité minimum, c’est une fin. Une fois que la fin est donnée, il est facile de connaitre le type de décomposition utilisé et de construire le bloc correspondant (en utilisant en particulier, le lemme 2.4, on peut simplement tester avec un chemin si le 2-joint est pair ou impair). Voyons maintenant comment accélérer cet algorithme. Nous traitons séparément le cas du 2-joint et de la paire homogène. 
Nous allons décrire un algorithme en temps O(n⁵) qui renvoie un fragment faible de type 2-joint, un algorithme en temps O(n⁵) qui renvoie un fragment faible de type complément de 2-joint et un algorithme en temps O(n⁵) qui renvoie un fragment faible de type paire homogène. Chacun de ces fragments sera de cardinalité minimum parmi tous les fragments faibles du même type. Par conséquent, le fragment de cardinalité minimum parmi ces trois fragments sera une fin. Commençons par traiter le cas des 2-joints. Un ensemble Z de quadruplets propres est universel si, pour tout 2-joint propre d'affectation (A1, B1, C1, A2, B2, C2), il existe (a1, a2, b1, b2) ∈ Z tel que a1 ∈ A1, a2 ∈ A2, b1 ∈ B1, b2 ∈ B2. Au lieu de tester tous les quadruplets comme dans la version en O(n⁸) de notre algorithme, il est clairement suffisant de restreindre notre recherche à un ensemble universel de quadruplets. Comme prouvé dans [7], il existe un algorithme qui génère en temps O(n²) un ensemble universel de quadruplets de taille au plus O(n²) pour n'importe quel graphe. Il est facile d'obtenir un algorithme similaire pour les trigraphes. L'idée suivante pour les 2-joints est d'appliquer la méthode de la table 6.1 à R0 = {a1, b1, u} pour tout sommet u au lieu de l'appliquer à R0 = {a1, b1, u, v} pour tout couple de sommets u, v. Comme nous allons le voir, ceci trouve un 2-joint compatible avec Z = (a1, a2, b1, b2) s'il en existe un. Supposons que (X1, X2) soit un tel 2-joint. Si X1 contient un sommet u dont le voisinage (dans T) est différent de N(a1) ∪ N(b1), alors d'après le lemme 2.9, u a au moins un voisin dans X1 \ {a1, b1}. Donc quand la boucle considère u, la méthode de la table 6.1 déplace de nouveaux sommets dans R. Donc, à la fin, |R| ≥ 4 et le 2-joint est détecté. Par conséquent, l'algorithme échoue à détecter un 2-joint uniquement dans le cas où u est de degré 2 et a1-u-b1 est un chemin, alors qu'il existe un 2-joint compatible avec Z avec u du même côté que a1 et b1. En fait, puisque tous les sommets sont essayés, ce n'est un problème que si cela se produit pour tout choix possible de u, c'est-à-dire si le 2-joint cherché a un côté fait de a1, b1 et d'un ensemble de sommets u1, . . . , uk de degré 2 tous adjacents à la fois à a1 et à b1. Mais dans ce cas, ou bien un des ui est fortement complet à {a1, b1} et c'est le centre d'un star cutset, ou bien tous les ui sont adjacents à un des sommets a1 ou b1 par une arête optionnelle. Dans ce cas, tous les ui sont déplacés vers R lorsque nous appliquons les règles décrites par la table 6.1, et donc le 2-joint est en fait bien détecté. Les compléments de 2-joints sont traités de la même manière dans le complément. Considérons maintenant les paires homogènes. Il est utile de définir une paire homogène faible exactement comme les paires homogènes propres, à l'exception que nous demandons "|A| ≥ 1, |B| ≥ 1 et |A ∪ B| ≥ 3" au lieu de "|A| > 1 et |B| > 1". Un lemme similaire au lemme 6.5 existe, pour lequel l'entrée de l'algorithme est un graphe G, un triplet (a1, b1, a2) ∈ V(G)³ et un ensemble R0 ⊆ V(G) qui contient a1, b1 mais pas a2, et dont la sortie est une paire homogène faible (A, B) telle que R0 ⊆ A ∪ B, a1 ∈ A, b1 ∈ B et a2 ∉ A ∪ B, et telle que a2 est complet à A et anticomplet à B, si une telle paire homogène faible existe. Comme dans le lemme 6.5, le temps d'exécution est O(n²) et la paire homogène faible est minimale parmi toutes les paires homogènes faibles possibles ; ce résultat est prouvé dans [17].
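Le même schéma d'énumération vaut pour les trois types de fragments ; l'esquisse C++ ci-dessous (hypothétique : les types et les signatures sont des choix d'illustration) le résume pour le cas du 2-joint : un ensemble universel de O(n²) quadruplets, un sommet supplémentaire u parcouru en O(n), et un appel en O(n²) à la procédure de la table 6.1, d'où le temps total en O(n⁵) ; le fragment de cardinalité minimum retenu est une fin.

```cpp
// Esquisse hypothétique du schéma d'énumération en O(n^5).
#include <functional>
#include <optional>
#include <set>
#include <vector>

struct Quadruplet { int a1, b1, a2, b2; };
using Fragment = std::set<int>;
// La procédure de la table 6.1 est passée en paramètre (temps O(n^2) par appel).
using Procedure61 = std::function<std::optional<Fragment>(const Quadruplet&, const Fragment&)>;

std::optional<Fragment> chercherFin(int n,
                                    const std::vector<Quadruplet>& ensembleUniversel,  // taille O(n^2)
                                    const Procedure61& procedureTable61) {
    std::optional<Fragment> meilleur;
    for (const Quadruplet& Z : ensembleUniversel) {        // O(n^2) quadruplets
        for (int u = 0; u < n; ++u) {                       // O(n) sommets supplémentaires
            if (u == Z.a1 || u == Z.b1 || u == Z.a2 || u == Z.b2) continue;
            auto fragment = procedureTable61(Z, Fragment{Z.a1, Z.b1, u});
            if (fragment && (!meilleur || fragment->size() < meilleur->size()))
                meilleur = fragment;                        // le fragment minimum est une fin
        }
    }
    return meilleur;  // coût total : O(n^2) x O(n) x O(n^2) = O(n^5)
}
```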
Comme pour les 2-joints, nous définissons la notion d'ensemble universel de triplets (a1, b1, a2). Comme prouvé dans [24], il existe un algorithme qui génère en temps O(n²) un ensemble universel de triplets de taille au plus O(n²) pour n'importe quel graphe. Ici encore, il est facile d'adapter cet algorithme pour les trigraphes. Comme dans le cas du 2-joint, nous appliquons un lemme analogue au lemme 6.5 à tous les sommets u au lieu de tous les couples de sommets u, v. Le seul problème est qu'après l'appel au lemme similaire au lemme 6.5, nous avons une paire homogène faible et pas propre (donc |A ∪ B| = 3). Mais dans ce cas, on peut vérifier qu'alors le trigraphe contient un star cutset ou un star cutset dans son complément.

Nous avons vu dans ce chapitre comment étendre notre ensemble de décompositions afin d'assurer l'existence d'une décomposition extrême. Nous avons également donné un algorithme polynomial permettant d'exhiber une telle décomposition. Nous n'avons pas eu besoin de ce type de décomposition pour prouver le reste des résultats de cette thèse, mais savoir qu'elle existe donne une preuve conceptuellement plus simple des résultats concernant le calcul du stable maximum. En effet, si l'on sait qu'à chaque étape de décomposition un des côtés est basique, on peut facilement calculer les valeurs qui nous intéressent dans ce côté sans craindre d'explosion combinatoire ; de fait, la récursion devient presque immédiatement récursive terminale. Malheureusement, la preuve est techniquement plus compliquée puisqu'il faut également traiter les cas des paires homogènes. Il n'est pas possible de transformer automatiquement un théorème de décomposition en son équivalent extrême (cf. la figure 6.1). Il serait intéressant de savoir s'il est toujours possible d'ajouter des décompositions (et comment les calculer) pour obtenir une décomposition extrême.

Chapitre 7 Conclusion

Dans cette thèse, nous avons démontré un certain nombre de résultats sur les trigraphes de Berge apprivoisés et par conséquent sur les graphes de Berge sans skew-partition. En fait, le point important dans tous ces résultats est de pouvoir décomposer le graphe par 2-joint ou complémentaire de 2-joint. L'interdiction des skew-partitions permet simplement d'assurer que le graphe ou son complémentaire admet bien un 2-joint et que les blocs vont eux aussi être inductivement décomposables. La classe la plus générale pour laquelle ces résultats s'appliquent est donc celle des graphes de Berge décomposables par 2-joint ou complémentaire de 2-joint. Malheureusement, cette classe est une sous-classe stricte des graphes de Berge : le graphe en figure 2.8 est un graphe de Berge qui (en utilisant les décompositions mentionnées dans cette thèse) n'est décomposable que par skew-partition. La reconnaissance des graphes de Berge apprivoisés est possible en temps polynomial, mais est compliquée. En toute généralité, la détection de skew-partition équilibrée est un problème NP-complet ; heureusement, ce problème devient polynomial sur les graphes de Berge [35]. On peut donc utiliser l'algorithme de Chudnovsky, Cornuéjols, Liu, Seymour et Vušković [12] pour s'assurer que le graphe est bien de Berge, puis on peut utiliser les résultats de Trotignon [35] pour s'assurer que le graphe ne possède pas de skew-partition équilibrée.
Avec ces résultats, il est alors possible, étant donné un graphe quelconque, de le colorier en temps polynomial ou de s'assurer que ce n'est pas un graphe de Berge sans skew-partition. La reconnaissance de la classe plus large des graphes de Berge totalement décomposables par 2-joint et complémentaire de 2-joint est plus difficile. On peut chercher à décomposer jusqu'à arriver sur des blocs basiques, mais en cas d'échec, a priori rien ne garantit qu'en ayant choisi un autre 2-joint nous n'aurions pas pu le décomposer totalement. Notre classe est auto-complémentaire mais pas héréditaire, ce qui peut paraître surprenant pour les personnes familières avec les graphes de Berge. Il est assez difficile de trouver une classe proche qui soit héréditaire. De manière artificielle, ce serait la classe des graphes de Berge sans sous-graphe induit uniquement décomposable par skew-partition (en utilisant comme ensemble de décompositions celles présentées dans cette thèse) ; en particulier, il faut interdire le graphe de la figure 2.8. Il faudrait alors vérifier que les blocs de décompositions restent dans la classe. Dans ce cas, cette classe serait alors totalement décomposable par 2-joint et par complémentaire de 2-joint et tous les résultats de cette thèse s'appliqueraient. Une manière d'étendre ces résultats serait de réussir, pour un graphe de Berge donné, à lui ajouter des sommets pour le rendre sans skew-partition. Dans ce cas, en affectant des poids nuls à ces sommets ajoutés, il est possible d'utiliser l'algorithme de stable maximum pondéré présenté ici pour calculer un stable maximum du graphe de départ. Pour que cet algorithme reste un algorithme en temps polynomial, il faut que le nombre de sommets à ajouter soit polynomial en la taille du graphe de départ. Le risque d'une telle méthode est bien sûr que les sommets ajoutés créent de nouvelles skew-partitions. Si ces résultats ne permettent pas de résoudre le problème pour les graphes de Berge en général, ils peuvent néanmoins fournir des outils utiles. En effet, pour ces problèmes, on peut désormais voir notre classe comme une classe basique. De futurs algorithmes pourraient donc traiter les graphes de Berge tant qu'ils admettent des skew-partitions, puis utiliser nos algorithmes. Enfin, le problème ouvert qui est sans doute le plus intéressant est celui de la reconnaissance des trigraphes de Berge. Ce problème est résolu pour les graphes [12] mais est toujours ouvert pour les trigraphes. Étant donné l'intérêt de l'utilisation des trigraphes pour l'étude des graphes de Berge, il serait très intéressant de savoir reconnaître les trigraphes de Berge. De manière plus générale, si l'on définit une classe de trigraphes C∗ à partir de la classe de graphes C de la manière suivante : un trigraphe T est dans la classe C∗ si et seulement si toutes ses réalisations sont dans la classe C, quel est le lien entre la reconnaissance de la classe C et celle de la classe C∗ ?

Bibliographie
[1] Pierre Aboulker, Marko Radovanović, Nicolas Trotignon, Théophile Trunck et Kristina Vušković : Linear balanceable and subcubic balanceable graphs. Journal of Graph Theory, 75(2):150–166, 2014.
[2] Boris Alexeev, Alexandra Fradkin et Ilhee Kim : Forbidden induced subgraphs of double-split graphs. SIAM Journal on Discrete Mathematics, 26(1):1–14, 2012.
[3] Noga Alon, János Pach, Rom Pinchasi, Radoš Radoičić et Micha Sharir : Crossing patterns of semi-algebraic sets. Journal of Combinatorial Theory, Series A, 111(2):310–326, 2005.
[4] Nicolas Bousquet, Aurélie Lagoutte et Stéphan Thomassé : Clique versus independent set. arXiv preprint arXiv :1301.2474, 2013. [5] Nicolas Bousquet, Aurélie Lagoutte et Stéphan Thomassé : The ErdősHajnal conjecture for paths and antipaths. arXiv preprint arXiv :1303.5205, 2013. [6] Binh-Minh Bui-Xuan, Jan Arne Telle et Martin Vatshelle : H-join decomposable graphs and algorithms with runtime single exponential in rankwidth. Discrete Applied Mathematics, 158(7):809–819, 2010. [7] Pierre Charbit, Michel Habib, Nicolas Trotignon et Kristina Vušković : Detecting 2-joins faster. Journal of discrete algorithms, 17:60–66, 2012. [8] Maria Chudnovsky : Berge trigraphs and their applications. Thèse de doctorat, Princeton University, 2003. [9] Maria Chudnovsky : Berge trigraphs. Journal of Graph Theory, 53(1):1–55, 2006. [10] Maria Chudnovsky : The structure of bull-free graphs II and III—A summary. Journal of Combinatorial Theory, Series B, 102(1):252–282, 2012. 97BIBLIOGRAPHIE [11] Maria Chudnovsky : The structure of bull-free graphs I—Three-edge-paths with centers and anticenters. Journal of Combinatorial Theory, Series B, 102(1):233–251, 2012. [12] Maria Chudnovsky, Gérard Cornuéjols, Xinming Liu, Paul Seymour et Kristina Vušković : Recognizing berge graphs. Combinatorica, 25(2):143– 186, 2005. [13] Maria Chudnovsky, Neil Robertson, Paul Seymour et Robin Thomas : The strong perfect graph theorem. Annals of Mathematics, pages 51–229, 2006. [14] Maria Chudnovsky et Paul D Seymour : The structure of claw-free graphs. Surveys in combinatorics, 327:153–171, 2005. [15] Maria Chudnovsky, Nicolas Trotignon, Théophile Trunck et Kristina Vuskovic : Coloring perfect graphs with no balanced skew-partitions. arXiv preprint arXiv :1308.6444, 2013. [16] Vasek Chvátal : Star-cutsets and perfect graphs. Journal of Combinatorial Theory, Series B, 39(3):189–199, 1985. [17] Hazel Everett, Sulamita Klein et Bruce Reed : An algorithm for finding homogeneous pairs. Discrete Applied Mathematics, 72(3):209–218, 1997. [18] Tomás Feder, Pavol Hell, David G Schell et Juraj Stacho : Dichotomy for tree-structured trigraph list homomorphism problems. Discrete Applied Mathematics, 159(12):1217–1224, 2011. [19] Tomás Feder, Pavol Hell et Kim Tucker-Nally : Digraph matrix partitions and trigraph homomorphisms. Discrete Applied Mathematics, 154(17):2458–2469, 2006. [20] Samuel Fiorini, Serge Massar, Sebastian Pokutta, Hans Raj Tiwary et Ronald de Wolf : Linear vs. semidefinite extended formulations : exponential separation and strong lower bounds. In Proceedings of the 44th symposium on Theory of Computing, pages 95–106. ACM, 2012. [21] Jacob Fox : A bipartite analogue of Dilworth’s theorem. Order, 23(2-3):197– 209, 2006. [22] Jacob Fox et János Pach : Erdős-Hajnal-type results on intersection patterns of geometric objects. In Horizons of combinatorics, pages 79–103. Springer, 2008. [23] Martin Grötschel, László Lovász et Alexander Schrijver : Geometric algorithms and combinatorial optimization. 1988. 98BIBLIOGRAPHIE [24] Michel Habib, Antoine Mamcarz et Fabien de Montgolfier : Algorithms for some H-join decompositions. In LATIN 2012 : Theoretical Informatics, pages 446–457. Springer, 2012. [25] Peter L Hammer et Bruno Simeone : The splittance of a graph. Combinatorica, 1(3):275–284, 1981. [26] Aurélie Lagoutte et Théophile Trunck : Clique-stable set separation in perfect graphs with no balanced skew-partitions. arXiv preprint arXiv :1312.2730, 2013. 
[27] Philippe GH Lehot : An optimal algorithm to detect a line graph and output its root graph. Journal of the ACM (JACM), 21(4):569–575, 1974. [28] Lásló Lovász : A characterization of perfect graphs. Journal of Combinatorial Theory, Series B, 13(2):95–98, 1972. [29] László Lovász : On the shannon capacity of a graph. Information Theory, IEEE Transactions on, 25(1):1–7, 1979. [30] László Lovász : Stable sets and polynomials. Discrete mathematics, 124(1): 137–153, 1994. [31] Frédéric Maffray : Fast recognition of doubled graphs. Theoretical Computer Science, 516:96–100, 2014. [32] Nicholas D Roussopoulos : A max(n, m) algorithm for determining the graph H from its line graph G. Information Processing Letters, 2(4):108–112, 1973. [33] Alexander Schrijver : Combinatorial optimization : polyhedra and effi- ciency, volume 24. Springer, 2003. [34] Paul Seymour : How the proof of the strong perfect graph conjecture was found. Gazette des Mathematiciens, 109:69–83, 2006. [35] Nicolas Trotignon : Decomposing berge graphs and detecting balanced skew partitions. Journal of Combinatorial Theory, Series B, 98(1):173–225, 2008. [36] Nicolas Trotignon : Perfect graphs : a survey. arXiv preprint arXiv :1301.5149, 2013. [37] Nicolas Trotignon et Kristina Vušković : Combinatorial optimization with 2-joins. Journal of Combinatorial Theory, Series B, 102(1):153–185, 2012. 99BIBLIOGRAPHIE [38] Mihalis Yannakakis : Expressing combinatorial optimization problems by linear programs. In Proceedings of the twentieth annual ACM symposium on Theory of computing, pages 223–228. ACM, 1988. 100 Reconnaissance de comportements complexes par traitement en ligne de flux d’ev`enements Ariane Piel To cite this version: Ariane Piel. Reconnaissance de comportements complexes par traitement en ligne de flux d’ev`enements. Computer Science. Universit´e Paris Nord - Paris 13, 2014. French. HAL Id: tel-01093015 https://hal.archives-ouvertes.fr/tel-01093015 Submitted on 9 Dec 2014 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. 
L’archive ouverte pluridisciplinaire HAL, est destin´ee au d´epˆot et `a la diffusion de documents scientifiques de niveau recherche, publi´es ou non, ´emanant des ´etablissements d’enseignement et de recherche fran¸cais ou ´etrangers, des laboratoires publics ou priv´es.Université Paris 13 Thèse pour obtenir le grade de Docteur de l’Université Paris 13 Discipline : Informatique présentée et soutenue publiquement par Ariane PIEL le 27 octobre 2014 à l’Office National d’Études et de Recherches Aérospatiales Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements (Online Event Flow Processing for Complex Behaviour Recognition) Jury Présidente : Laure Petrucci Professeur, Université Paris 13 Rapporteurs : Serge Haddad Professeur, École Normale Supérieure de Cachan Audine Subias Maître de conférences, HDR, Institut National des Sciences Appliquées de Toulouse Examinateurs : Philippe Bidaud Professeur, Université Paris 6 – Pierre et Marie Curie Christine Choppy Professeur, Université Paris 13 Romain Kervarc Docteur ingénieur, Onera Invités : Jean Bourrely Docteur ingénieur, Onera Patrice Carle Docteur ingénieur, Onera Directrice de thèse : Christine Choppy Encadrants Onera : Romain Kervarc, avec Jean Bourrely et Patrice Carle Laboratoires : Office National d’Études et de Recherches Aérospatiales Laboratoire d’Informatique de Paris Nord N◦ d’ordre :Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements 2Remerciements Nombreux sont ceux qui ont contribué à l’aboutissement de cette thèse, et je souhaite à présent les remercier très sincèrement. Cette thèse s’est déroulée à l’Onera au sein du Département de Conception et évaluation des Performances des Systèmes (DCPS). Je tiens à remercier Thérèse Donath, directrice du département, de m’avoir accueillie et de m’avoir permis de réaliser ces travaux dans un environnement stimulant au sein de l’Onera. Je souhaite vivement remercier l’Unité de Techniques pour la Conception et la Simulation de systèmes (TCS) dans laquelle j’ai été chaleureusement intégrée. Je remercie Christine Choppy, Professeur à l’Université Paris 13, d’avoir dirigé ma thèse durant ces trois années avec tant d’implication et de disponibilité. Son sens de l’organisation et sa ténacité nous a permis d’entreprendre de nombreux projets. Je garde un excellent souvenir d’un workshop organisé à Singapour où nous avons pu allier les plaisirs de la recherche et ceux des voyages. Je suis très reconnaissante envers Romain Kervarc, ingénieur-chercheur à l’Onera, d’avoir coencadré cette thèse et de m’avoir toujours soutenue tant sur les idées que sur la forme où nous nous retrouvions entre logiciens. Je remercie aussi Romain de m’avoir offert de nombreuses opportunités pour rencontrer des chercheurs à travers le monde, et de m’avoir accordé sa confiance, laissant ainsi libre cours à la poursuite de mes idées. J’exprime toute ma gratitude à Patrice Carle, chef de l’unité TCS, pour m’avoir aidée sur tous les aspects de ce travail, avec des remarques toujours pertinentes. L’expérience et le recul sur le sujet qu’a apportés Patrice allaient de pair avec un enthousiasme débordant, ce qui a su avoir raison de mes doutes et de mon regard parfois trop critique. Sa certitude dans la qualité et l’intérêt de notre projet a été d’un grand soutien. 
Je tiens à remercier tout particulièrement Jean Bourrely, ancien directeur adjoint du DCPS, pour sa clairvoyance et son sens pratique dans tous les moments de questionnement, et pour son aide incommensurable dans l’implémentation de CRL. Les nombreuses heures de travail en commun m’ont donné l’énergie et la stimulation nécessaires à l’aboutissement de cette thèse et comptent dans les instants qui resteront parmi les meilleurs de ces trois ans. J’exprime toute ma gratitude à Serge Haddad, Professeur à l’École Normale Supérieure de Cachan, et à Audine Subias, Maître de Conférences habilité à l’Institut National des Sciences Appliquées de Toulouse, pour m’avoir honorée d’être les rapporteurs de cette thèse. Leurs remarques ont été précieuses pour l’amélioration de ce travail et ont ouvert quantité de nouvelles perspectives. 3Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements Je souhaite étendre cette gratitude à Laure Petrucci, Professeur à l’Université Paris 13, et à Philippe Bidaud, Professeur à l’Université Paris 6, pour m’avoir suivie durant ces trois années puis avoir accepté de participer au jury de cette thèse. Mes remerciements vont également à Thibault Lang qui m’a généreusement aidée dans le développement des cas d’application traités dans cette thèse, et à Nicolas Huynh pour sa contribution à leur mise en place. J’adresse aussi mes profonds remerciements à Stéphanie Prudhomme pour l’atmosphère chaleureuse qu’elle sait entretenir au sein de l’unité TCS, et pour ses conseils avisés. Je dois beaucoup à mes collègues et à la bonne ambiance qu’ils font régner au sein du département. J’ai eu la chance d’être particulièrement bien entourée ce qui est inestimable dans le travail de longue haleine que représente un doctorat, travail qui peut être emprunt de frustrations et de solitude. Je remercie tous ceux qui passaient de temps en temps dans mon bureau et j’adresse une pensée particulière à Mathieu et son Apple-attitude, Pierre avec son tweed et son bronzage uniques, Rata et sa bonne humeur, Damien et sa décontraction sudiste, Arthur lorsqu’il n’est pas en vacances, Evrard lorsqu’il est réveillé, Sarah avec sa force surhumaine et ses conseils esthé- tiques, Pawit et ses bons plats thaïs, Yohan avec son gâteau au chocolat et son fromage blanc à Saulx-les-Chartreux, Joseph avec le petit compartiment de sa machine à pain et son thé, Dalal et son rire, et tous les autres (même ceux du DTIM). Je souhaite remercier plus particulièrement Antoine notamment pour nos tours de ring et nos échanges autour de bons thés (exempts ou non de grand-mères) qui m’ont permis de surmonter l’étape douloureuse que peut être la rédaction ; Christelle, pour son amitié et son soutien, avec nos nombreuses et longues discussions qui m’ont portée au travers des moments les plus difficiles, aussi bien professionnels que personnels ; et Loïc, qui dès son arrivée en thèse, a su m’offrir une oreille attentive et des conseils éclairés tout en entretenant ma motivation dès qu’elle était trop fragile. Je ne pourrais oublier de remercier également mes amis Lucille, Ambre, Mathieu, Lise, Estelle, Laetitia, Julie, Anaïs, Charlotte, Yelena, Alice et les Glénanais qui m’ont encouragée jusqu’au jour de ma soutenance. Enfin, mais non des moindres, je remercie mes frères, David et Thomas, ainsi que ma mère, Sandra et Coline pour avoir toujours cru en moi et avoir fait preuve d’un encouragement sans faille. 
4Table des matières Introduction 9 1 Systèmes de traitement formel d’évènements complexes 13 1.1 Principaux enjeux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 1.2 Event Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 1.3 Le langage ETALIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 1.4 Le langage des chroniques de Dousson et al. . . . . . . . . . . . . . . . . . . . . . . 23 1.5 Le langage des chroniques Onera . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 1.5.1 Une première implémentation : CRS/Onera . . . . . . . . . . . . . . . . . . . 29 1.5.2 Définition d’une sémantique du langage des chroniques . . . . . . . . . . . . 30 1.5.3 Détail de la sémantique ensembliste du langage des chroniques de CRS/Onera 31 1.6 D’autres modes de représentation et de reconnaissance de comportements . . . . . 34 2 Construction d’un cadre théorique pour la reconnaissance de comportements : le langage des chroniques 41 2.1 Définitions générales préalables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 2.1.1 Évènements et leurs attributs . . . . . . . . . . . . . . . . . . . . . . . . . . 44 2.1.2 Opérations sur les attributs . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 2.2 Définition d’une syntaxe étendue du langage des chroniques : ajout de contraintes sur des attributs d’évènement et de constructions temporelles . . . . . . . . . . . . 46 2.3 Définition de la sémantique du langage à travers la notion de reconnaissance de chronique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 2.3.1 Passage à une représentation arborescente des reconnaissances . . . . . . . 53 2.3.2 Formalisation de la notion de reconnaissance de chronique . . . . . . . . . . 55 2.4 Propriétés du langage des chroniques . . . . . . . . . . . . . . . . . . . . . . . . . . 60 2.4.1 Définition d’une relation d’équivalence sur les chroniques . . . . . . . . . . 60 2.4.2 Relations entre ensembles de reconnaissances . . . . . . . . . . . . . . . . . 60 2.4.3 Associativité, commutativité, distributivité . . . . . . . . . . . . . . . . . . 62 2.5 Gestion du temps continu à l’aide d’une fonction Look-ahead . . . . . . . . . . . . 64 2.6 Tableau récapitulatif informel des propriétés du langage des chroniques . . . . . . . 66 2.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 5Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements 3 Un modèle de reconnaissance en réseaux de Petri colorés dit « à un seul jeton » 69 3.1 Définition du formalisme des réseaux de Petri colorés . . . . . . . . . . . . . . . . . 70 3.1.1 Types et expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 3.1.2 Réseaux de Petri colorés . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 3.1.3 La fusion de places . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 3.1.4 Arcs inhibiteurs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 3.2 Construction formelle des réseaux dits « à un seul jeton » . . . . . . . . . . . . . . 77 3.2.1 Types et expressions utilisés dans le modèle . . . . . . . . . . . . . . . . . . 77 3.2.2 Structure générale des réseaux « à un seul jeton » . . . . . . . . . . . . . . 79 3.2.3 Briques de base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 3.2.4 Construction par induction . . . . . . . . . . . . . . . . . . . 
. . . . . . . . 82 3.3 Formalisation et description de l’exécution des réseaux . . . . . . . . . . . . . . . . 89 3.3.1 Reconnaissance d’un évènement simple . . . . . . . . . . . . . . . . . . . . . 89 3.3.2 Reconnaissance d’une séquence . . . . . . . . . . . . . . . . . . . . . . . . . 91 3.3.3 Reconnaissance d’une disjonction . . . . . . . . . . . . . . . . . . . . . . . . 94 3.3.4 Reconnaissance d’une conjonction . . . . . . . . . . . . . . . . . . . . . . . 95 3.3.5 Reconnaissance d’une absence . . . . . . . . . . . . . . . . . . . . . . . . . . 99 3.3.6 Définition formelle de la stratégie de tirage . . . . . . . . . . . . . . . . . . 106 3.4 Démonstration de la correction du modèle « à un seul jeton » . . . . . . . . . . . . 107 3.5 Étude de la taille des réseaux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 4 Un modèle de reconnaissance contrôlé en réseaux de Petri colorés 117 4.1 Construction et fonctionnement des réseaux dits « multi-jetons » . . . . . . . . . . 118 4.1.1 Types et expressions utilisés dans les réseaux multi-jetons . . . . . . . . . . 119 4.1.2 Structure globale des réseaux multi-jetons . . . . . . . . . . . . . . . . . . . 119 4.1.3 Briques de base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 4.1.4 Construction par induction . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 4.1.5 Bilan sur le degré de contrôle acquis et stratégie de tirage . . . . . . . . . . 133 4.2 Construction et fonctionnement des réseaux « contrôlés » . . . . . . . . . . . . . . 134 4.2.1 Types et expressions utilisés . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 4.2.2 Structure globale des réseaux . . . . . . . . . . . . . . . . . . . . . . . . . . 137 4.2.3 Briques de base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 4.2.4 Un séparateur de jetons générique . . . . . . . . . . . . . . . . . . . . . . . 144 4.2.5 Construction par induction des réseaux contrôlés . . . . . . . . . . . . . . . 146 4.2.6 Graphes d’espace d’états des réseaux contrôlés . . . . . . . . . . . . . . . . 159 4.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162 5 Bibliothèque C++ de reconnaissance de comportements et applications à la surveillance de la sécurité d’avions sans pilote 165 5.1 Développement d’une bibliothèque C++ implémentant la reconnaissance de chroniques : Chronicle Recognition Library (CRL) . . . . . . . . . . . . . . . . . . . . . 166 6TABLE DES MATIÈRES 5.2 Surveillance de cohérence au sein d’un UAS en cas de pannes . . . . . . . . . . . . 171 5.2.1 Description de l’architecture du système d’avion sans pilote étudié . . . . . 172 5.2.2 Modélisation du problème . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 5.2.3 Objectifs de la reconnaissance de comportements dans ce cas d’étude . . . . 180 5.2.4 Écriture et formalisation des situations incohérentes à détecter . . . . . . . 181 5.2.5 Utilisation de CRL pour reconnaître les situations incohérentes . . . . . . . 182 5.3 Surveillance du bon respect de procédures de sécurité à suivre par un drone . . . . 185 5.3.1 Cadre du problème . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185 5.3.2 Mise en place du système de surveillance : écriture des chroniques critiques à reconnaître . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
187 5.3.3 Application à des scénarios de simulation avec CRL . . . . . . . . . . . . . . 189 5.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191 Conclusion et perspectives 193 Bibliographie 197 A Démonstrations de propriétés du langage des chroniques 207 A.0.1 Associativité . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 A.0.2 Commutativité . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 A.0.3 Distributivité . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 Table des figures et des tableaux 211 Table des symboles 213 Table des acronymes 215 7Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements 8Introduction Dans de nombreux domaines, notamment l’aérospatial, le médical, la finance ou le nucléaire, des quantités très importantes de données sont produites par des systèmes pouvant être réels ou simulés. Pour la manipulation de ces masses énormes de données, des outils d’aide à l’analyse sont nécessaires. Par exemple, l’industrie aérospatiale a recours de façon systématique à la simulation afin de pouvoir étudier l’ensemble des caractéristiques de systèmes de grande envergure ; et il est nécessaire de pouvoir exploiter les données produites. Par ailleurs, au vu de l’automatisation croissante des tâches, les systèmes mis en cause sont de plus en plus critiques et de plus en plus complexes. Ils mettent en interaction hommes, machines et environnements, rendant ainsi les risques humains et matériels de très grande envergure. Ceci rend nécessaire l’emploi de méthodes formelles pour s’assurer de la correction des outils d’aide à l’analyse utilisés. C’est dans ce contexte que s’inscrit la problématique de la reconnaissance formelle de comportements dans le cadre de problèmes complexes, domaine du Complex Event Processing (CEP). Il s’agit de développer des outils fiables d’aide à l’analyse de flux d’évènements permettant de reconnaître des activités pouvant être aussi bien normales qu’anormales dans des flux complexes d’évènements. Parmi les formalisations de description et de reconnaissance de comportements, on peut citer les suivantes : — L’Event Calculus (EC), dont les travaux récents sont menés principalement par A. Artikis, est fondé sur la programmation logique. Des séries de prédicats, qui peuvent être dérivés par apprentissage [ASPP12], définissent les comportements à reconnaître. Initialement, leur reconnaissance ne pouvait se faire en ligne, mais une solution est proposée dans [ASP12]. De plus, un raisonnement probabiliste peut être introduit dans l’EC [SPVA11]. — Les chroniques de Dousson et al. [DLM07] sont des ensembles de formules décrivant des associations d’évènements observables et sont représentées par des graphes de contraintes. — Les chroniques de [Ber09, CCK11] développées à l’Onera sont un langage temporel bien distinct des chroniques de Dousson et permettant la description formelle de comportements puis leur reconnaissance en ligne, et ce avec historisation des évènements (c’est-à-dire avec la possibilité de remonter précisément à la source des reconnaissances). Elles sont définies par induction à partir d’opérateurs exprimant la séquence, la conjonction, la disjonction et l’absence. 
Un modèle de reconnaissance est proposé en réseaux de Petri colorés, où un 9Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements réseau calculant les ensembles de reconnaissances peut être construit pour chaque chronique que l’on souhaite reconnaître. Les applications de ces travaux sont très variées et touchent notamment à la médecine (monitoring cardiaque par exemple), la gestion d’alarmes, la sécurité informatique, l’évaluation de la satisfaction de passagers dans des transports publics, la surveillance vidéo, etc. Cette thèse consiste à formaliser et étendre un langage de description de comportements, le langage des chroniques de [CCK11], tout en développant des modèles d’implémentation du processus de reconnaissance pour ensuite traiter des applications variées. La démarche consiste dans un premier temps à formaliser et étendre le langage des chroniques défini dans [CCK11] afin de répondre à un besoin d’expressivité plus important. On commence par définir inductivement la syntaxe du langage, puis, afin de lui donner un sens, on explicite la notion de reconnaissance de chronique par une fonction. Cette dernière est définie par induction pour chaque chronique : à un flux d’évènements, elle associe l’ensemble des reconnaissances de la chronique dans ce flux. Dans [CCK11], le formalisme de représentation des reconnaissances de chroniques est ensembliste (chaque reconnaissance est représentée par un ensemble d’évènements), mais cela ne permet pas de distinguer certaines reconnaissances car il n’est pas possible de savoir à partir de ces ensembles à quelle partie de la reconnaissance de la chronique correspond chaque évènement de l’ensemble de reconnaissance. Pour cela, on modifie ce formalisme en remplaçant la représentation ensembliste par une représentation arborescente des reconnaissances qui permet de conserver plus d’informations et de distinguer toutes les reconnaissances possibles. Après cette formalisation, on peut étendre l’expressivité du langage avec de nouvelles constructions temporelles et introduire la manipulation d’attributs d’évènements. Davantage de comportements peuvent ainsi être exprimés et une plus grande variété d’applications peut être traitée. Les constructions temporelles choisies sont de deux types : d’une part les constructions contraignant temporellement une chronique par rapport à une autre, et d’autre part celles contraignant une chronique par rapport à un délai. Ces constructions découlent classiquement des relations d’Allen [All81, All83] qui décrivent tous les agencements possibles d’intervalles et sont compatibles avec un modèle de temps continu adapté à des applications réelles. Parallèlement, la notion d’attribut d’évènement est formalisée de manière à pouvoir manipuler des données liées aux évènements du flux, puis, plus généralement, des données liées aux chroniques elles-mêmes. Ceci permet d’introduire plusieurs niveaux de complexité et de considérer des chroniques comme des évènements intermédiaires. La nécessité de pouvoir manipuler de telles données apparaît dès lors que l’on essaie de traiter des exemples d’application d’une légère complexité. En effet, il est primordial de pouvoir alors exprimer des corrélations entre des évènements différents, par exemple, de pouvoir spécifier qu’ils proviennent d’une même source. Pour cela, des chroniques exprimant des contraintes sur les attributs sont définies. 
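À titre d'illustration, l'esquisse C++ ci-dessous (hypothétique : les types Evenement et Chronique sont de simples choix de présentation, sans rapport avec l'implémentation réelle décrite plus loin) montre une représentation arborescente possible de la syntaxe inductive des chroniques (évènement simple, séquence, conjonction, disjonction, absence) et d'évènements munis d'attributs.

```cpp
// Esquisse hypothétique : arbre de syntaxe d'une chronique et évènements attribués.
#include <map>
#include <memory>
#include <string>
#include <vector>

struct Evenement {
    std::string nom;                               // par exemple "clearance"
    double date;                                   // date d'occurrence
    std::map<std::string, std::string> attributs;  // par exemple {"drone", "D1"}
};

struct Chronique;
using ChroniquePtr = std::shared_ptr<Chronique>;

struct Chronique {
    enum class Op { Simple, Sequence, Conjonction, Disjonction, Absence } op;
    std::string nomEvenement;        // utilisé si op == Simple
    std::vector<ChroniquePtr> fils;  // sous-chroniques (deux pour les opérateurs binaires)
};

ChroniquePtr simple(std::string nom) {
    return std::make_shared<Chronique>(Chronique{Chronique::Op::Simple, std::move(nom), {}});
}
ChroniquePtr noeud(Chronique::Op op, ChroniquePtr g, ChroniquePtr d) {
    return std::make_shared<Chronique>(Chronique{op, "", {std::move(g), std::move(d)}});
}

int main() {
    // (a puis b) en l'absence de c : l'évènement c ne doit pas survenir
    // pendant la reconnaissance de la séquence a puis b.
    ChroniquePtr c = noeud(Chronique::Op::Absence,
                           noeud(Chronique::Op::Sequence, simple("a"), simple("b")),
                           simple("c"));
    (void)c;
    return 0;
}
```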
De plus, afin de pouvoir considérer plusieurs niveaux d’évènements-chroniques, une fonction permettant de construire des attributs de chroniques est définie. L’ensemble de ces travaux est présenté dans le Chapitre 2. On obtient alors un langage des chroniques expressif formalisé. Afin de pouvoir utiliser ce formalisme pour effectuer la reconnaissance de comportements, il faut définir des modèles de ce processus qui permettent l’utilisation du langage dans des applications quelconques. On doit pouvoir montrer que l’exécution de ces modèles respecte la sémantique des chroniques. Pour ce faire, on choisit de 10Introduction développer deux modèles d’implémentations différentes. Un premier modèle, dans la continuité de celui présenté dans [CCK11], permet de reconnaître les chroniques initiales de [CCK11] à l’aide de réseaux de Petri colorés, un langage de spécification formelle adapté à la reconnaissance de comportements. Pour compléter la construction formelle du modèle de [CCK11], on définit par induction cinq places centrales formant une structure clé de chaque réseau, permettant ensuite de composer ensemble les réseaux et donc de définir une construction formelle par induction des réseaux de reconnaissance. Le marquage des réseaux de Petri colorés construits pour chaque chronique évolue au fur et à mesure du flux d’évènements. Pour répondre au problème de la vérification de la correction du système vis-à-vis de la sémantique, on démontre que ce marquage correspond exactement à la définition formelle de [PBC+]. Cette formalisation et cette preuve sont présentées dans [CCKP12a] et développés dans le Chapitre 3. Cependant, l’intégralité des transitions des réseaux de Petri de ce modèle sont toujours tirables. Le modèle de reconnaissance est ainsi non déterministe dans le sens où il est nécessaire de tirer à la main les transitions dans un certain ordre, en suivant une stratégie de tirage bien définie, pour obtenir le résultat souhaité, c’est-à-dire l’ensemble de reconnaissance défini par la sémantique. On souhaite donc modifier ce modèle pour obtenir un marquage final unique, tout en conservant la double contrainte d’avoir d’une part une construction modulaire des réseaux calquée sur le langage, et d’autre part de conserver de la concurrence dans l’exécution des réseaux. On procède en deux temps. Tout d’abord, contrairement au premier modèle proposé dont les jetons contiennent des listes d’instances de reconnaissance, on passe à un modèle de réseau multi-jetons : un jeton pour chaque instance de reconnaissance afin d’initier l’implémentation d’un contrôle du réseau via des gardes sur les transitions. Dans un second temps, une structure de contrôle du flux d’évènements est implémentée pour achever la déterminisation du modèle tout en préservant la structure modulaire. La concurrence présente dans l’exécution des réseaux et l’unicité du marquage final après le traitement de chaque évènement du flux sont ensuite mis en avant en développant le graphe des marquages accessibles à l’aide du logiciel CPN Tools. Ces travaux sont exposés dans [CCKP13a] et développés dans le Chapitre 4. Ce premier modèle de reconnaissance de chronique est cependant limité aux chroniques initiales de [CCK11] et ne permet donc pas de reconnaître des constructions temporelles ni de manipuler des attributs. Un second modèle de reconnaissance sur l’ensemble du langage étendu des chroniques est donc développé en C++ permettant ainsi une application directe. 
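Pour donner une idée concrète de ce que signifie une reconnaissance en ligne, l'esquisse C++ ci-dessous (hypothétique, volontairement réduite à la seule séquence de deux évènements nommés, et sans rapport avec l'API réelle de la bibliothèque présentée au chapitre 5) traite le flux évènement par évènement et conserve, pour chaque reconnaissance, les dates des évènements qui y participent (historisation).

```cpp
// Esquisse hypothétique : reconnaissance en ligne de la chronique "a puis b".
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Evenement { std::string nom; double date; };

class ReconnaissanceSequence {
public:
    ReconnaissanceSequence(std::string premier, std::string second)
        : premier_(std::move(premier)), second_(std::move(second)) {}

    // Traite un évènement dès son arrivée et renvoie les reconnaissances qu'il complète.
    std::vector<std::pair<double, double>> traiter(const Evenement& e) {
        std::vector<std::pair<double, double>> nouvelles;
        if (e.nom == premier_) {
            datesPremier_.push_back(e.date);        // reconnaissance partielle mémorisée
        } else if (e.nom == second_) {
            for (double d : datesPremier_)          // toutes les occurrences comptent
                nouvelles.emplace_back(d, e.date);
        }
        return nouvelles;
    }

private:
    std::string premier_, second_;
    std::vector<double> datesPremier_;
};

int main() {
    ReconnaissanceSequence seq("a", "b");
    for (const Evenement& e : std::vector<Evenement>{{"a", 1.0}, {"c", 2.0}, {"a", 3.0}, {"b", 4.0}})
        for (auto [d1, d2] : seq.traiter(e))
            std::cout << "Reconnaissance de (a puis b) : a en t=" << d1 << ", b en t=" << d2 << "\n";
    return 0;
}
```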
Ses algorithmes sont fidèlement calqués sur la sémantique du langage, justifiant ainsi le fonctionnement du modèle, CRL, qui est déposé sous la licence GNU LGPL. La bibliothèque CRL est utilisée dans des applications du domaine des drones. On considère d’abord un système de drone (pilote, drone, Air Traffic Control (ATC)). On souhaite évaluer les procédures d’urgence liées à des pannes de liaisons entre les différents acteurs du systèmes. Ces procédures sont en cours de développement et l’on souhaite mettre en avant les éventuelles failles corrigibles, et proposer un système d’alarmes pour les failles humaines dont il est impossible de s’affranchir. On modélise le système et ses procédures par un diagramme UML implémenté en C++ puis on le soumet à des pannes de liaisons éventuellement multiples pour reconnaître les situations incohérentes pouvant donner lieu à un danger. crl et cette application sont présentées dans [CCKP12b] et [CCKP13b], et développés dans le Chapitre 5. 11Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements On réalise également une seconde application dans le domaine des drones. On considère un drone partant de l’aéroport d’Ajaccio pour une mission d’observation d’un feu en dehors de l’espace aérien contrôlé. Il doit passer successivement dans des zones de l’espace aérien contrôlé puis en sortir, et à chacune de ces actions sont associées des procédures de sécurité (points de passage, clearances, . . . ). L’objectif est de proposer un moyen de surveillance du drone assurant le respect des procédures. Pour ce faire, on écrit plusieurs niveaux de chroniques, où interviennent des constructions temporelles et des contraintes sur des attributs d’évènements complexes. On utilise crl pour reconnaître ces chroniques dans des flux d’évènements tests comprenant un fi- chier de positions du drone produit par le battlelab blade de l’Onera [CBP10] ainsi qu’un fichier d’évènements élémentaires (clearances, changement de fréquence radio, . . . ). 12Chapitre 1 Systèmes de traitement formel d’évènements complexes Sommaire 2.1 Définitions générales préalables . . . . . . . . . . . . . . . . . . . . . . 43 2.1.1 Évènements et leurs attributs . . . . . . . . . . . . . . . . . . . . . . . . 44 2.1.2 Opérations sur les attributs . . . . . . . . . . . . . . . . . . . . . . . . . 45 2.2 Définition d’une syntaxe étendue du langage des chroniques : ajout de contraintes sur des attributs d’évènement et de constructions temporelles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 2.3 Définition de la sémantique du langage à travers la notion de reconnaissance de chronique . . . . . . . . . . . . . . . . . . . . . . . . . 53 2.3.1 Passage à une représentation arborescente des reconnaissances . . . . . 53 2.3.2 Formalisation de la notion de reconnaissance de chronique . . . . . . . . 55 2.4 Propriétés du langage des chroniques . . . . . . . . . . . . . . . . . . 60 2.4.1 Définition d’une relation d’équivalence sur les chroniques . . . . . . . . 60 2.4.2 Relations entre ensembles de reconnaissances . . . . . . . . . . . . . . . 60 2.4.3 Associativité, commutativité, distributivité . . . . . . . . . . . . . . . . 62 2.5 Gestion du temps continu à l’aide d’une fonction Look-ahead . . . . 64 2.6 Tableau récapitulatif informel des propriétés du langage des chroniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 2.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
68 On va s’intéresser dans ce chapitre à exposer les principales méthodes de traitement formel d’évènements complexes – Complex Event Processing (CEP) et plus généralement de l’Information Flow Processing (IFP) 1 . Ces systèmes se présentent en deux parties principales : il y a 1. La distinction entre CEP et IFP est réalisée dans [CM12], le CEP est présenté comme étant une partie de l’IFP qui contient également d’autres domaines comme celui des bases de données actives. 13Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements d’une part le moyen employé pour la description formelle des comportements à reconnaître, et d’autre part le processus de reconnaissance de ces comportements dans un flux d’évènements. Pour une introduction globale au domaine du traitement d’évènements complexes, nous renvoyons aux livres [Luc02, EN10]. D’autre part, [LSA+11] présente un glossaire des termes employés dans le domaine. Nous allons commencer par dégager dans la Section 1.1 les principales problématiques de l’IFP. Dans les sections suivantes, nous présentons ensuite successivement quatre méthodes de reconnaissance d’évènements complexes qui sont représentatives de notre problématique : l’Event Calculus (EC) dans la Section 1.2, le langage ETALIS dans la Section 1.3, le langage des chroniques de Dousson et al. dans la Section 1.4, et le langage des chroniques Onera dans la Section 1.5. Dans une dernière section (1.6), nous présentons succinctement une sélection d’autres méthodes autour de l’IFP et donnons un aperçu des domaines d’application. 1.1 Principaux enjeux On s’attache dans cette section à mettre en avant les principaux enjeux permettant de distinguer les différentes approches formelles de l’IFP. Expressivité Il est nécessaire que le langage utilisé pour la description des comportements à reconnaître soit suffisamment expressif pour exprimer toutes les nuances désirées selon l’application envisagée. Cependant, il est clair qu’une grande expressivité ira inévitablement de pair avec une plus grande complexité du processus de reconnaissance. Il s’agit donc de trouver l’équilibre adéquat. La section 3.8 de [CM12] présente les principaux opérateurs et constructions que l’on rencontre dans la littérature. On retrouve notamment la conjonction, la disjonction, la séquence, la négation ou l’absence, l’itération, l’expression de contraintes temporelles, la manipulation de paramètres. . . La notion de négation ou d’absence est particulièrement épineuse : il est nécessaire de délimiter l’intervalle de temps précis sur lequel porte une négation ou une absence pour pouvoir déterminer si une telle construction est reconnue. Par ailleurs, la question d’avoir une syntaxe ouverte ou fermée se pose. Il s’agit d’autoriser, ou non, la possibilité d’écrire des formules n’ayant pas de sens. Praticité d’écriture & lisibilité Dans le contexte du dialogue avec des experts pour la réalisation d’une application, la praticité d’écriture du langage est importante. On s’intéresse à la concision et à la lisibilité du langage qui faciliteront la discussion avec les spécialistes du domaine et leur utilisation du système de reconnaissance. Il s’agit là encore de trouver un équilibre entre lisibilité et expressivité : une lisibilité excessive peut conduire à une simplifi- cation extrême du langage et donc à une expressivité réduite. Efficacité L’efficacité du processus de reconnaissance est cruciale. En effet, l’analyse des données se veut souvent être réalisée en ligne. 
Ceci implique la nécessité d’un temps de calcul réduit permettant de traiter les évènements du flux suffisamment rapidement par rapport à leur fréquence d’arrivée. Naturellement, la rapidité du processus est directement liée à la promptitude de la réponse au problème étudié qui peut être capitale par exemple s’il s’agit de produire une alarme lorsqu’une situation critique se produit. 14CHAPITRE 1. SYSTÈMES DE TRAITEMENT FORMEL D’ÉVÈNEMENTS COMPLEXES Multiplicité On peut s’intéresser à établir toutes les reconnaissances d’un comportement donné, et non uniquement l’information que le comportement a eu lieu au moins une fois. La multiplicité des reconnaissances est nécessaire dans la perspective de recherche de comportements dangereux où il peut être essentiel de savoir qu’un comportement s’est produit plusieurs fois. Par exemple, dans certains domaines d’application comme la supervision de réseaux de télécommunications [Dou02], certaines pannes ne sont identifiables qu’à travers la multiplicité d’un comportement donné qui n’est en revanche pas significatif individuellement. En revanche, la multiplicité peut aller à l’encontre de l’efficacité du processus qui doit reconnaître toutes les occurrences et n’a donc pas le droit à l’oubli de reconnaissances intermédiaires. Pour pouvoir s’adapter à toutes les situations, on peut définir différents degrés d’exhaustivité des reconnaissances. Par exemple, [CM12] pose les contextes suivants qui établissent différentes gestions de la multiplicité des reconnaissances : — dans le contexte « récent », seule l’occurrence la plus récente d’un événement qui débute une reconnaissance est considéré (ainsi, pour il y a toujours un évènement initiateur unique) ; — dans le contexte « chronique », les plus anciennes occurrences sont utilisées en premier, et sont supprimées dès leur utilisation (ainsi, les évènements initiateur et terminal sont uniques) ; — dans le contexte « continu », chaque évènement initiateur est considéré ; — dans le contexte « cumulatif », les occurrences d’évènements sont accumulées, puis supprimées à chaque reconnaissance (ainsi, un évènement ne peut pas participer à plusieurs reconnaissances). Historisation Il s’agit de pouvoir spécifier, pour chaque reconnaissance d’un comportement, quels sont les évènements du flux qui ont mené à cette reconnaissance. L’historisation permet alors de distinguer les différentes reconnaissances et donc d’y réagir plus finement. Il s’agit d’une forme de traçabilité qui apporte également la possibilité d’un retour d’expérience sur les causes du comportement détecté. Traitement des évènements Plusieurs caractéristiques peuvent être requises par le système sur le flux d’évènements à traiter. Un flux unique totalement et strictement ordonné et dont les évènements arrivent dans l’ordre sans retard constitue un flux idéal, et l’algorithme de reconnaissance associé en sera a priori simplifié. Cependant les domaines d’applications peuvent exiger d’avoir la possibilité de considérer notamment : — un ordre uniquement partiel entre les évènements à analyser ; — des sources distribuées d’évènements – il faut alors définir correctement l’ordre entre les évènements provenant de deux flux différents ; — un ordre non strict entre les évènements pour prendre en compte l’arrivée simultanée d’évènements ; — des évènements arrivant en retard ou étant corrigés a posteriori – il s’agit alors de pouvoir malgré tout les traiter correctement. 
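À titre d'illustration du dernier point (évènements arrivant en retard ou hors ordre), voici une esquisse C++ (hypothétique, indépendante des systèmes cités) d'un tampon de réordonnancement : un évènement n'est transmis au moteur de reconnaissance, trié par date, qu'une fois écoulé un délai de garde.

```cpp
// Esquisse hypothétique : tampon de réordonnancement avant le moteur de reconnaissance.
#include <queue>
#include <string>
#include <vector>

struct Evenement { std::string nom; double date; };

struct CompareParDate {
    bool operator()(const Evenement& a, const Evenement& b) const { return a.date > b.date; }
};

class TamponReordonnancement {
public:
    explicit TamponReordonnancement(double retardMax) : retardMax_(retardMax) {}

    // Insère un évènement éventuellement arrivé hors ordre.
    void inserer(const Evenement& e) { file_.push(e); }

    // Libère, triés par date, les évènements datés avant (horloge - retardMax).
    std::vector<Evenement> liberer(double horloge) {
        std::vector<Evenement> prets;
        while (!file_.empty() && file_.top().date <= horloge - retardMax_) {
            prets.push_back(file_.top());
            file_.pop();
        }
        return prets;
    }

private:
    double retardMax_;
    std::priority_queue<Evenement, std::vector<Evenement>, CompareParDate> file_;
};
```

Ce choix échange un surcroît de latence contre la garantie d'un flux trié ; un évènement plus en retard que le délai de garde reste en revanche perdu pour l'analyse.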
La gestion d’événements révisés est utile par exemple dans les cas d’application où un événement a été envoyé puis reçu, mais ensuite révisé suite à l’échec d’une transaction. 15Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements Centralisation ou distribution Le flux d’évènements à analyser peut provenir d’une source centralisée ou distribuée. Dans le cas d’un système distribué se pose la question de l’ordonnancement des évènements provenant des différentes parties du système. Par exemple, l’hypothèse d’une horloge synchronisée peut être prise. Modèle de temps Le modèle de temps adopté est en général linéaire. Il peut être discret, avec une granularité adaptable, ou bien continu. Dans certains systèmes comme les Systèmes de Gestion de Flux de Données – Data Stream Management Systems (DSMSs), aucun modèle de temps particulier n’est utilisé et seul l’ordre entre les évènements est considéré mais cela a alors une incidence sur l’expressivité du langage de description de comportements utilisé. Actions Il peut être possible de déclencher des actions à différents instants dans le processus de reconnaissance. Parmi les actions, on peut considérer entre autres la production d’un évènement associé à la reconnaissance d’un comportement et son intégration au flux. Se pose alors la question de l’ordre de traitement de ces nouveaux évènements par rapport aux autres évènements du flux. On peut également imaginer des actions d’une complexité plus importante : par exemple, ajouter dynamiquement un nouveau comportement à reconnaître. Méthodes d’écriture L’écriture, dans le langage formel utilisé, des comportements à reconnaître présente une double difficulté qui dépend de l’expressivité et de la lisibilité du langage. D’une part, il faut exactement identifier toutes les situations à reconnaître pour répondre au problème étudié. D’autre part, il faut correctement transcrire ces comportements dans le langage utilisé. Des méthodes plus ou moins automatisées et principalement fondées sur des statistiques peuvent être proposées aux experts chargés d’écrire les comportements à reconnaître. Gestion d’incertitudes Lors de l’analyse d’activités réelles, de nombreuses indéterminations apparaissent naturellement : incertitudes sur les comportements à reconnaître, incertitudes sur la date d’occurrence ou même sur l’occurrence elle-même d’évènements. . . Des mécanismes de gestion d’incertitudes peuvent donc être établis au niveau du langage de description et/ou au niveau de l’algorithme de reconnaissance, selon les indéterminations considérées. 1.2 Event Calculus Le calcul d’évènements, ou EC, est une formalisation permettant la représentation et le raisonnement sur des évènements et des actions et pouvant s’exécuter sous la forme d’un programme logique. Il s’intéresse à établir la valeur de propositions logiques dans le temps. L’EC est introduit par Kowalski et Sergot dans [KS86] et tire son nom du calcul de situation (Situation Calculus) dont il se distingue par le fait qu’il traite d’évènements locaux plutôt que d’états globaux. Le but est de s’affranchir du problème du cadre dans un souci d’efficacité. L’EC se veut constituer une analyse formelle des concepts mis en jeu, à savoir les évènements et les actions. Il peut être exprimé à partir de clauses de Horn 2 auxquelles est ajoutée la négation par l’échec 3 , 2. On rappelle qu’une clause de Horn est une formule du calcul propositionnel de la forme (p1 ∧ . . . ∧ pn) ⇒ q où n ∈ N, p1, . . . 
, pn sont des littéraux positifs et q est un littéral quelconque. 3. La négation par l’échec se fonde sur le principe que la base de connaissances est complète et que donc, si tous 16CHAPITRE 1. SYSTÈMES DE TRAITEMENT FORMEL D’ÉVÈNEMENTS COMPLEXES se plaçant ainsi dans le cadre de la programmation logique. Les principes initiaux majeurs de l’EC sont les suivants : — les évènements peuvent être traités dans un ordre quelconque non nécessairement lié à leur ordre d’occurrence ce qui fait que le passé et le futur sont considérés symétriquement ; — en particulier, des évènements peuvent être concurrents et ne sont pas forcément considérés comme ponctuels ; — les mises à jour sont toujours additives dans le sens où elles ne peuvent pas retirer d’informations passées ; — ce n’est pas tant la date des évènements qui est importante que leur ordre relatif. Il existe de nombreux dialectes de l’EC fondés sur des axiomatiques proches et permettant de traiter diverses spécificités telles que la gestion d’actions différées ou de changements continus entre états. De telles axiomatiques sont recensées dans [MS99] et on peut également citer [PKB07, PB08]. Nous nous intéressons plus particulièrement à l’EC élaboré par Artikis et al. pour la reconnaissance de comportements. Ce dialecte, implémenté en programmation logique, est dédié à la reconnaissance de comportements complexes de haut niveau à partir d’évènements de bas niveau. Les comportements composites à reconnaître sont définis à l’aide de prédicats présentés Tableau 1.1 et permettant l’expression de contraintes temporelles sur un modèle de temps linéaire. Ces prédicats sont définis par des axiomes dont certains pouvant être indépendants du domaine d’application. Ce formalisme est expressif. Il permet d’exprimer des contraintes complexes aussi bien temporelles qu’atemporelles et de décrire des situations dans lesquelles un certain comportement ne doit pas avoir lieu dans un certain laps de temps [AP09]. Cette dernière caractéristique est liée au mécanisme de l’absence dans le formalisme des chroniques. Selon les demandes de l’utilisateur, un comportement de haut niveau à reconnaître peut être défini comme un évènement ponctuel simple ou complexe à l’aide du prédicat happensAt ou comme un fluent – i.e. une propriété non ponctuelle pouvant prendre différentes valeurs dans le temps – à l’aide de initially, initiatedAt, holdsFor. . . Pour les activités non ponctuelles, il s’agit de calculer les intervalles maximaux durant lesquels un fluent possède une certaine valeur. Pour ce faire, tous les instants de début d’intervalle puis chaque instant de fin correspondant sont évalués. En effet, intuitivement, un principe d’inertie (exprimant qu’une valeur n’a pas changé tant qu’elle n’a pas été modifiée) est respecté dans le sens où l’on considère qu’un fluent vérifie une certaine valeur si celle-ci a été fixée par un évènement et que, depuis, aucun autre évènement ne l’a modifiée. La question de l’écriture des comportements à reconnaître est étudiée dans [ASP10b, AEFF12]. En effet, l’écriture par des experts des activités à reconnaître est fastidieuse et fortement susceptible de donner lieu à des erreurs, donc il est intéressant de développer un procédé automatique pour engendrer des définitions à partir de données temporelles. Une telle méthode d’apprentissage fondée sur une combinaison d’abductions et d’inductions est utilisée dans [ASP10b, AEFF12] pour inférer les comportements à reconnaître. 
Une problématique majeure de la reconnaissance de comportements est la possibilité ou non d’exécuter le processus de reconnaissance en temps réel. L’algorithme de base de ce dialecte d’EC fonctionne avec une méthode d’interrogation du système : le raisonnement ne se fait pas au fur les moyens pour montrer une propriété échouent, c’est que sa négation est vérifiée (hypothèse du monde clos). 17Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements Tableau 1.1 – Principaux prédicats de l’EC Prédicat Correspondance intuitive happensAt(E, T) Occurrence de l’évènement E à l’instant T. initially(F = V ) Le fluent F vaut V à l’instant 0. holdsAt(F = V, T) Le fluent F vaut V à l’instant T. holdsFor(F = V, I) I est la liste des intervalles maximaux où F vaut V . initiatedAt(F = V, T) À l’instant T, un intervalle de temps où F vaut V débute. terminatedAt(F = V, T) À l’instant T, un intervalle de temps où F vaut V s’achève. union_all(L, I) I est la liste des intervalles maximaux correspondant à l’union des ensembles d’intervalles de la liste L. intersect_all(L, I) I est la liste des intervalles maximaux correspondant à l’intersection mutuelle des ensembles d’intervalles de la liste L. relative_complement _all(I 0 , L, I) I est la liste des intervalles maximaux correspondant à la différence ensembliste entre la liste d’intervalles I 0 et chaque ensemble d’intervalles de la liste L. et à mesure mais à la demande de reconnaissance d’une activité de haut niveau [APPS10]. Pour réaliser une analyse en ligne, il est nécessaire de constamment interroger le programme ; et sans système de mémoire-cache, il faut à chaque fois recommencer les calculs à zéro. De plus, l’un des principes de base de l’EC, à savoir le fait que la reconnaissance de comportements ne doit pas être affectée par l’ordre dans lequel arrivent les évènements à analyser, ne contribue pas à diminuer la complexité de calcul. Dans [CM96], Chittaro et al. présentent une version de l’EC dénommée Cached Event Calculus (CEC) et dont l’implémentation inclut la gestion de mémoire-cache, réduisant ainsi significativement la complexité du processus. Cependant, comme le CEC n’a pas de système d’oubli ni de péremption et qu’il accepte des évènements datés plus tôt que d’autres évènements ayant déjà été traités, les temps de reconnaissance augmentent au fur et à mesure de l’arrivée des évènements de bas niveau à traiter et, après un certain temps, peuvent finir par ne plus respecter les temps de 18CHAPITRE 1. SYSTÈMES DE TRAITEMENT FORMEL D’ÉVÈNEMENTS COMPLEXES calcul minimaux nécessaires pour une reconnaissance en ligne correcte. Pour résoudre ces problèmes, Artikis et al. proposent dans [ASP12] une implémentation efficace de leur processus de reconnaissance, nommée Event Calculus for Run-Time reasoning (RTEC) 4 et réalisée sous YAProlog 5 . Le programme fonctionne toujours par interrogations successives du système, et comme pour le CEC, un dispositif de mémoire-cache est mis en place pour stocker les intervalles maximaux calculés par les prédicats holdsFor pour chaque fluent F. Ceci permet de n’avoir pas à recalculer ces intervalles à chaque itération. La problématique du calcul vient ensuite du fait que des évènements peuvent ou bien arriver en retard, ou bien être corrigés a posteriori. L’algorithme est élaboré de façon à pouvoir traiter correctement ce genre de situation. 
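Avant de détailler les paramètres de RTEC, l'esquisse suivante (en Python, noms hypothétiques, intervalles réduits à des couples d'instants, fonction de recalcul laissée abstraite) schématise, librement inspirée de la description de [ASP12], une itération de raisonnement fenêtré : troncature des intervalles en cache sur la fenêtre courante, recalcul à partir des évènements de la fenêtre (y compris ceux arrivés en retard ou révisés), puis recollage. Les deux paramètres q_i et wm utilisés ici sont précisés juste après.

def etape_fenetree(q_i, wm, evenements, intervalles_caches, recalculer):
    """q_i : instant d'interrogation courant ; wm : taille de la fenêtre de
    travail ; evenements : couples (instant, évènement) reçus ;
    intervalles_caches : intervalles maximaux des itérations précédentes ;
    recalculer : fonction recalculant les intervalles maximaux sur une fenêtre."""
    debut_fenetre = q_i - wm
    # 1. Troncature : on ne garde des intervalles en cache que leur partie
    #    antérieure à la fenêtre ]debut_fenetre, q_i]
    conserves = []
    for (d, f) in intervalles_caches:
        if f <= debut_fenetre:
            conserves.append((d, f))
        elif d <= debut_fenetre < f:
            conserves.append((d, debut_fenetre))   # partie tronquée
    # 2. Recalcul sur la fenêtre, avec les évènements qui y sont datés
    fenetre = [(t, e) for (t, e) in evenements if debut_fenetre < t <= q_i]
    nouveaux = recalculer(fenetre)
    # 3. Recollage : les morceaux contigus ou chevauchants sont fusionnés
    resultat = sorted(conserves + nouveaux)
    fusionnes = []
    for (d, f) in resultat:
        if fusionnes and fusionnes[-1][1] >= d:
            fusionnes[-1] = (fusionnes[-1][0], max(fusionnes[-1][1], f))
        else:
            fusionnes.append((d, f))
    return fusionnes

# Démonstration minimale : un intervalle en cache (2, 8) est tronqué à (2, 6),
# puis recollé avec les intervalles recalculés sur la fenêtre ]6, 10]
print(etape_fenetree(10, 4, [(7, "e")], [(2, 8)], lambda fen: [(7, 9)]))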
Deux paramètres du programme sont à fixer : — d’une part, le pas Qi+1 − Qi entre deux interrogations Qi et Qi+1 du système ; — d’autre part, la taille WM (Working Memory) de la fenêtre des évènements à considérer. À chaque itération Qi les évènements datés dans l’intervalle ]Qi − WM, Qi ] sont analysés. Ainsi, dans le choix des paramètres, le signe de la différence WM − (Qi+1 − Qi) est significatif : — si WM < Qi+1 − Qi , alors les évènements de la fenêtre ]Qi , Qi+1 − WM] ne seront jamais analysés, et si un évènement arrive en retard ou est corrigé, il ne sera pas considéré ; — si WM = Qi+1 − Qi , alors exactement tous les évènements arrivés à temps seront analysés ; — si WM > Qi+1 − Qi , alors le système peut traiter des évènements arrivant ou bien avec un certain délai ou bien corrigés dans un certain délai. L’algorithme fonctionne alors comme suit. Les parties des intervalles maximaux calculés auparavant et s’intersectant avec la fenêtre ]Qi−WM, Qi ] sont tronqués. Ensuite, les évènements de la fenêtre ]Qi−WM, Qi ] sont analysés pour recalculer des intervalles maximaux sur cette fenêtre, éventuellement certains identiques à ceux ayant été tronqués. Enfin, les morceaux sont « recollés » afin de recréer des intervalles maximaux globaux. L’algorithme plus détaillé de RTEC est présenté dans [ASP12], avec une analyse des performances du système sur des données réelles et sur des données synthétisées. Dans [AMPP12, PAMP12], une architecture de système pour la reconnaissance de comportements, Event Processing for Intelligent Ressource Management (EP-IRM), est proposée. Elle peut être dotée de nombreux composants pouvant facilement être ajoutés ou supprimés. Certaines applications sont indépendantes du domaine et peuvent être utilisées quelle que soit l’étude, comme par exemple l’affichage de cartes MAP. Le système est également composé d’un moteur détectant les évènements de bas niveau qui sont ensuite envoyés au moteur de reconnaissance de comportements complexes. Par ailleurs, plusieurs approches probabilistes de l’EC sont développées. Tout d’abord, Artikis et al. étendent dans [SPVA11] le formalisme de l’EC au raisonnement probabiliste à l’aide de Réseaux Logiques de Markov [DL09] qui combinent l’expressivité de la logique du premier ordre avec la sémantique formelle probabiliste des réseaux de Markov. D’autre part, dans [SAFP14], Artikis et al. proposent une adaptation de l’EC au cadre de la programmation logique probabiliste en utilisant ProbLog [KDDR+11]. Ceci permet de répondre au problème de détections incorrectes des 4. RTEC est disponible en open-source sur le web : http://users.iit.demokritos.gr/~a.artikis/ RTEC-3examples.zip. 5. http://www.dcc.fc.up.pt/~vsc/Yap/ 19Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements évènements de bas niveau en travaillant sur un flux d’évènements où chaque évènement est doté d’un indice de confiance. En revanche, la reconnaissance en ligne n’est pas encore réalisable avec ce formalisme, et il ne permet pas de traiter le cas des définitions imprécises des comportements à reconnaitre. Une autre approche pour la gestion d’incertitude est présentée dans [AWG+13]. Elle est orthogonale à celle de [SAFP14] dans le sens où elles peuvent être combinées. Le principe consiste à utiliser la variété des sources d’évènements pour déterminer leur véracité. Un système d’« auto-adaptation » est implémenté en utilisant le processus de reconnaissance de comportements lui-même. 
En effet, des définitions de comportements complexes sont écrites pour identifier les domaines d’incertitude et y réagir : lorsque l’incertitude est significative, le système peut ignorer les évènements d’un certain laps de temps ou bien, momentanément, ne pas considérer certaines sources d’évènements. [AWS+14] répond à la même problématique en ayant en plus recours à du crowdsourcing pour trancher lorsque les désaccords entre les sources sont significatifs. 1.3 Le langage ETALIS Le langage de description de comportements Event-driven Transaction Logic Inference System (ETALIS) 6 [ARFS12, Ani11] est un langage de CEP muni d’une syntaxe et d’une sémantique formelles permettant simultanément de raisonner sur des connaissances temporelles (concernant des évènements) et sur des connaissances stables ou en évolution (règles, faits, ontologies, données encyclopédiques. . . ), et ce à l’aide d’un système réalisant l’analyse de comportements en ligne. ETALIS est un langage de programmation logique. Sa syntaxe est définie par des règles dont les principales constructions sont présentées dans le Tableau 1.2. Le modèle de temps adopté est linéaire et dense mais dénombrable (l’ensemble des rationnels Q) ; les évènements de base du flux à analyser peuvent être aussi bien des évènements instantanés que des évènements ayant une durée, ils sont datés par des intervalles de temps [T1, T2] où éventuellement T1 = T2. Le langage présente une expressivité forte : — l’ensemble des 13 relations d’Allen peut être décrit, — des contraintes peuvent être exprimées sur des propriétés d’évènements, — une notion d’absence est développée, mais limitée au cadre de la séquence, — une distinction précise est faite entre la conjonction en série et celle en parallèle, — il est possible de réaliser des définitions récursives de comportements ce qui permet, par exemple, d’accumuler à l’aide d’une fonction une valeur sur une suite d’évènements. Une première sémantique déclarative formelle est fournie [AFR+10, ARFS12] ; les interprétations de motifs (patterns) d’évènements (i.e. des comportements à reconnaître) est définie par induction à la manière de la théorie des modèles. Il s’agit d’ensembles de reconnaissances où une reconnaissance est représentée par un couple hq1, q2i, avec q1, q2 ∈ Q, délimitant l’intervalle de temps nécessaire et suffisant à la reconnaissance. Les informations relatives aux évènements ayant donné lieu à la reconnaissance ne sont pas conservées, il n’y a donc pas de possibilité d’historisation. Le système de reconnaissance ETALIS est implémenté en Prolog. Pour ce faire, une seconde sémantique, opérationnelle, est définie à l’aide de règles de programmation logique. Les compor- 6. ETALIS est disponible en open-source sur le web : http://code.google.com/p/etalis/. 20CHAPITRE 1. SYSTÈMES DE TRAITEMENT FORMEL D’ÉVÈNEMENTS COMPLEXES Tableau 1.2 – Principales constructions du langage ETALIS [AFR+10] Constructions Correspondance intuitive p where t Le comportement p est reconnu et le terme t prend la valeur vraie. q Pour tout q ∈ Q, correspond à l’instant absolu q. (p).q Le comportement p est reconnu et dure exactement q, avec q ∈ Q. p1 seq p2 Le comportement p1 est suivi, strictement (dans le temps), du comportement p2. p1 and p2 Les comportement p1 et p2 sont reconnus, sans aucune contrainte temporelle. p1 par p2 Les comportement p1 et p2 sont reconnus en parallèle, i.e. ils se chevauchent dans le temps. p1 or p2 L’un des deux comportements est reconnu. 
p1 equals p2 Les deux comportements sont reconnus sur exactement le même intervalle de temps. p1 meets p2 Le dernier instant de reconnaissance de p1 est exactement le premier instant de reconnaissance de p2. p1 during p2 Le comportement p1 est reconnu pendant le comportement p2. p1 starts p2 L’intervalle de reconnaissance de p1 est un segment initial de l’intervalle de reconnaissance de p2. p1 finishes p2 L’intervalle de reconnaissance de p1 est un segment final de l’intervalle de reconnaissance de p2. not(p1).[p2, p3] Les comportements p2 et p3 sont reconnus dans cet ordre, sans occurrence de p1 strictement entre les deux dans le temps. 21Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements tements complexes recherchés sont décomposés en des évènements intermédiaires appelés buts (goals). ETALIS compile les comportements complexes pour fournir un ensemble de règles permettant l’Event-Driven Backward Chaining (EDBC) – chaînage arrière piloté par les données. C’est le chaînage arrière piloté par les données qui rend possible un processus de reconnaissance en ligne. Deux types de règles résultent de la compilation [AFSS09] : — D’une part, des règles créant les buts à reconnaître pour avancer dans la reconnaissance du comportement complexe. Les buts créés représentent l’occurrence d’un évènement et l’attente d’un autre évènement pour reconnaître un évènement (complexe). Elles sont de la forme goal(b [−,−] , a[T1,T2] , ie[−,−] 1 ) qui exprime qu’à la reconnaissance d’un évènement (éventuellement complexe) a sur l’intervalle [T1, T2], le système est en attente d’un évènement b pour reconnaître l’évènement ie1. — D’autre part, des règles créant des évènements intermédiaires ou des patterns d’évènements. Celles-ci vérifient dans la base de données si un certain but existe déjà, et, s’il existe effectivement, déclenchent l’évènement qui a été reconnu par le but. Par exemple, si le but goal(b [T3,T4] , a[T1,T2] , ie[−,−] 1 ) figure dans la base de données, alors l’évènement ie[T1,T4] 1 est déclenché et ensuite propagé s’il s’agit d’un évènement intermédiaire ou utilisé pour déclencher une action s’il s’agit de l’un des comportements complexes recherchés. Les règles de ce type permettent également de supprimer de la base de données les buts qui ne sont plus nécessaires car obsolètes. En d’autres termes, chaque but correspond à un sous-comportement restreint à deux évènements (dans les exemples précédents il s’agit de a et b), et les buts sont chaînés pour aboutir à la reconnaissance du comportement complexe recherché. La structure de la décomposition d’un comportement complexe en buts correspond donc essentiellement à celle d’un arbre binaire. Il n’y a pas de démonstration d’équivalence entre les deux sémantiques. Les auteurs s’assurent en revanche que la sémantique opérationnelle est belle et bien déclarative [ARFS12], et que donc le comportement du système est à la fois prédictible et reproductible. En ce qui concerne la multiplicité des reconnaissances, ETALIS permet l’implémentation des politiques de consommation d’évènements suivantes (introduites dans la Section 1.1) : contexte récent, contexte chronique et contexte libre (i.e. sans restriction, multiplicité totale). Mais il faut noter que l’on perd l’aspect déclaratif si l’on utilise une autre politique que celle dite libre, et l’ordre d’évaluation des règles devient alors significatif. 
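Pour fixer les idées sur le chaînage arrière piloté par les données décrit ci-dessus, voici une esquisse en Python (structures et noms purement hypothétiques, l'implémentation réelle étant en Prolog) des deux types de règles engendrés pour un motif binaire ie1 = (a seq b) :

buts = []            # base de données des buts en attente
reconnaissances = [] # évènements complexes ie1 déclenchés

def recevoir(nom, t1, t2):
    if nom == "a":
        # Règle de type 1 : occurrence de a -> création du but
        # « on attend un b pour reconnaître ie1 », avec l'intervalle de a
        buts.append(("b", (t1, t2)))
    elif nom == "b":
        # Règle de type 2 : si un but attend b, l'évènement ie1 est déclenché
        # sur l'intervalle allant du début de a à la fin de b, puis le but
        # devenu inutile est supprimé de la base
        restants = []
        for (attendu, (ta1, ta2)) in buts:
            if attendu == "b" and ta2 < t1:      # a précède strictement b
                reconnaissances.append(("ie1", (ta1, t2)))
            else:
                restants.append((attendu, (ta1, ta2)))
        buts[:] = restants

for ev in [("a", 1, 1), ("c", 2, 2), ("b", 4, 5)]:
    recevoir(*ev)
print(reconnaissances)   # [('ie1', (1, 5))]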
Une analyse des performances d’ETALIS est présentée dans [AFR+10] sur un cas d’étude analysant l’efficacité d’un système de livraison de fleurs (Fast Flower Delivery Use Case [EN10]). Par ailleurs, la nature logique de l’approche rend possible un raisonnement déductif en temps réel sur une base de connaissances fixe de données structurées de l’environnement. Celle-ci permet de définir une sémantique de l’information utilisable par un système. ETALIS permet la gestion d’évènements arrivant en retard dans un flux ordonné par rapport au temps [FAR11] grâce à l’ajout de deux types de règles : — Des règles de type goal_out(a [−,−] , b[T3,T4] , ie[−,−] 1 ) exprimant que l’évènement b a été reçu et que a est en attente, mais avant b, pour réaliser la reconnaissance de ie1. — Des règles de la forme if goal_out(. . . ) and T2 < T3, alors si un évènement a arrive avec effectivement T2 < T3, l’événement ie[T1,T4] 1 est déclenché. 22CHAPITRE 1. SYSTÈMES DE TRAITEMENT FORMEL D’ÉVÈNEMENTS COMPLEXES Un tel algorithme a la particularité de ne pas gérer les évènements en retard au détriment de l’ef- ficacité du processus de reconnaissance pour les évènements arrivant à temps : la reconnaissance est toujours quasi-immédiate si tous les évènements composant un comportement complexe sont arrivés à temps. En revanche, le système nécessite la mise en place d’une procédure de libération de mémoire assurant la suppression des règles de type goal_out après un certain laps de temps. [FAR11] précise que, pour « des raisons pratiques » (sûrement une question d’efficacité de reconnaissance par surcharge de règles), cette fonctionnalité n’a pas été implémentée pour la politique de consommation d’évènements dite libre ; on perd donc la multiplicité des reconnaissances. Pour mener les cas de retard d’un évènements interdit dans une absence, la gestion d’arrivée d’évènements en retard doit être couplée avec la gestion d’évènements révisés, présentée brièvement ci-dessous. En effet, si l’on considère une négation not(c).[a, b] et qu’une reconnaissance est invalidée par l’arrivée d’un c en retard, il faut alors réviser la reconnaissance erronée. ETALIS permet également la gestion d’évènements révisés [ARFS11]. Comme évoqué en 1.1, ceci peut s’avérer utile dans le cas d’un échec d’une procédure amenant ainsi la correction d’un évènement déjà envoyé. Des sortes de marqueurs sont ajoutés, permettant d’indiquer quels évè- nements (éventuellement intermédiaires) ont donné lieu à quels buts. Ceci permet d’identifier les instances d’évènements à réviser et de propager correctement la révision dans tout le système. Des règles rev sont ajoutées pour supprimer les buts (goal(. . . )) insérés mais révisés. 1.4 Le langage des chroniques de Dousson et al. Le langage des chroniques a été développé pour décrire formellement une signature évènementielle et offre un cadre pour le traitement d’évènements complexes – CEP. Le terme de « chronique » est un terme générique englobant plusieurs systèmes formels voués à la description et à la reconnaissance de comportements. On décrit, dans cette sous-section, un premier formalisme de la notion de chronique. Un langage des chroniques a été introduit notamment dans [Gha96] et développé principalement par Dousson et al. [DGG93, Dou02, DLM07]. Il permet de décrire formellement des agencements d’évènements. Une chronique est en quelque sorte un ordre partiel d’évènements observables dans un certain contexte. 
Le langage est doté d’un processus de reconnaissance en ligne efficace permettant d’analyser un flux d’évènements datés mais dont les évènements n’arrivent pas nécessairement dans leur ordre d’occurrence pour y reconnaître des situations et éventuellement déclencher des actions ou produire un évènement à une date définie relativement aux dates des évènements ayant donné lieu à la reconnaissance. On manipule des prédicats exprimant le fait qu’un attribut fixé ait une certaine valeur sur un intervalle de temps donné. Un motif d’évènement correspond à un changement de valeur d’un attribut, et un évènement est alors une instance datée ponctuelle de motif d’évènement [DGG93]. Le modèle de temps adopté est linéaire et discret, la résolution adoptée étant supposée suffisante pour le contexte étudié. Une distinction est opérée entre la date d’occurrence et la date de réception des évènements observés. Une borne sur le délai de réception d’un évènement est fournie par l’utilisateur pour chaque motif d’évènement. Ceci permet, comme dans l’Event Calculus, de considérer des 23Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements évènements arrivant avec un retard borné. Par ailleurs, de même que pour l’Event Calculus, on suppose la complétude du flux d’évènement, c’est-à-dire qu’entre deux activations d’une propriété il y a nécessairement un évènement de désactivation de cette propriété (par rapport aux dates d’occurrences). Un modèle de chronique à reconnaître [DGG93] est composé : — d’un ensemble de motifs d’évènements exprimés par les prédicats du Tableau 1.3, — de contraintes temporelles sur ces motifs d’évènements reliées par une conjonction implicite (pour des raisons de complexité, il n’est pas possible d’exprimer de disjonction), — d’un contexte général permettant d’exprimer des contraintes contextuelles, — d’éventuelles actions à réaliser à la reconnaissance (notamment la production d’évènements). Dans [Dou02], le prédicat occurs présenté dans le Tableau 1.3 est ajouté pour répondre à la question du comptage d’évènements qui est un problème typique du domaine de la gestion d’alarmes où certaines pannes ne sont identifiables que par comptage. occurs permet de faciliter l’écriture de ce genre de chroniques et d’optimiser le processus de reconnaissance associé. En effet, sinon, il faut écrire beaucoup de chroniques pour exprimer la même situation, or la complexité du processus de reconnaissance dépend du nombre de chroniques à étudier. Tableau 1.3 – Principaux prédicats et notations des chroniques [Dou96, Dou02] Prédicat Correspondance intuitive hold(P : v,(t1, t2)) Le nom d’attribut P a la valeur v sur l’intervalle [t1, t2], sans implication sur l’instant où il a pris cette valeur. event(P : (v1, v2), t) L’attribut P passe de la valeur v1 à la valeur v2 à l’instant t. noevent(P,(t, t0 )) P ne change pas de valeur sur l’intervalle [t, t0 [. occurs(n1, n2, a,(t1, t2)) L’évènement a a lieu exactement N fois, avec n1 ≤ N ≤ n2, dans l’intervalle de temps [t1, t2[. (permet l’unification du langage) Notation ? « ? » permet d’indiquer une valeur quelconque. Le processus de reconnaissance de chroniques est illustré par la Figure 1.4. Dans un premier temps, il s’agit de compiler hors ligne les modèles de chroniques fournis par l’utilisateur afin de transcrire les contraintes temporelles de chaque modèle en un graphe de contraintes minimal (la relation d’ordre pour laquelle ce graphe est minimal est définie dans [DD99]). 
Ce pré-traitement permet notamment de mettre en avant les incohérences éventuelles des modèles de chroniques (l’algorithme employé pour la compilation étant incrémental, il désigne même un sous-ensemble de contraintes incohérentes). L’algorithme de reconnaissance est fondé sur ces graphes. Le système doit ensuite être initialisé avec les états initiaux du monde considéré, datés à −∞, puis la reconnaissance de chroniques peut être lancée. Des instances partielles, i.e. des reconnaissances encore non complètes, sont manipulées. Elles sont chacune en attente d’évènements manquants pour compléter la reconnaissance. Ces évènements manquants doivent arriver dans un certain intervalle de temps, la fenêtre d’admissibilité, calculée grâce au graphe de contraintes et 24CHAPITRE 1. SYSTÈMES DE TRAITEMENT FORMEL D’ÉVÈNEMENTS COMPLEXES Figure 1.4 – Le système de reconnaissance de chroniques [Dou94] ce afin de vérifier les contraintes temporelles spécifiées dans la chronique correspondante. Si un évènement attendu n’arrive pas avant la fin de sa fenêtre d’admissibilité ou si l’une des conditions générales de la chronique n’est plus vérifiée, l’instance partielle est alors abandonnée et supprimée. Au contraire, lorsqu’une instance partielle s’apprête à être complétée par un évènement, elle est d’abord dupliquée afin de garantir la reconnaissance de toutes les situations : l’instance partielle complétée peut ainsi être de nouveau complétée par une autre occurrence d’un évènement similaire. La duplication d’instances est la principale source de complexité du processus de reconnaissance. Afin d’optimiser la manipulation des instances (partielles) de reconnaissance, celles-ci sont stockées dans un arbre. Au fur et à mesure de l’arrivée des évènements pertinents les contraintes temporelles exprimées par les fenêtres d’admissibilité du graphe de contraintes sont mises à jour et les modifications sont propagées dans le reste du graphe. Ceci permet de traiter directement l’arrivée de tout évènement. Une propagation des modifications est également effectuée lorsque le temps avance : le système calcule le prochain instant critique et une mise à jour est effectuée lorsque le temps courant atteint cet instant [DGG93]. Le processus de reconnaissance est donc exhaustif, et le plus efficace pour diminuer sa complexité est de limiter la durée de validité d’une instance de chronique [AD01]. Le système peut également prendre en entrée, en plus des évènements datés, l’assertion AssertNoMore(e, I) où e est un évènement et I est un intervalle étendu (i.e. une union disjointe d’intervalles) [DLM07]. Cette assertion indique qu’aucun évènement e n’aura lieu dans I. À la réception d’un tel message, les fenêtres d’admissibilité sont mises à jour en leur appliquant la différence ensembliste avec I et les modifications sont propagées dans le graphe. Cette assertion est introduite dans le but d’être utilisée à des fins d’optimisation, ce qui sera détaillé par la suite. 25Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements Pour le prédicat occurs(n1, n2, a,(t1, t2)), un compteur est associé au prédicat et tant que l’instant t2 n’est pas atteint, il n’est pas possible de déterminer si le prédicat est vérifié. À l’arrivée d’un évènement a à la date d, les instants t1 et t2 n’ont pas encore nécessairement de valeur mais sont contraints à des intervalles [I − t1 , I+ t1 ] et [I − t2 , I+ t2 ]. On étudie alors tous les cas d’ordonnancement de d, I − t1 , I + t1 , I − t2 , et I + t2 . 
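Le mécanisme de duplication d'instances partielles évoqué ci-dessus peut se schématiser comme suit (esquisse en Python, structures hypothétiques ; le calcul réel des fenêtres d'admissibilité est ici laissé abstrait) :

def integrer(instances, evenement, date, dans_fenetre):
    """À l'arrivée d'un évènement attendu, chaque instance concernée est
    dupliquée avant d'être complétée : l'original reste en attente et pourra
    encore être complété par une autre occurrence, ce qui garantit
    l'exhaustivité des reconnaissances."""
    completees, nouvelles = [], []
    for inst in instances:
        attendus = inst["attendus"]
        if evenement in attendus and dans_fenetre(inst, evenement, date):
            copie = {"recus": dict(inst["recus"]), "attendus": set(attendus)}
            copie["recus"][evenement] = date
            copie["attendus"].discard(evenement)
            (completees if not copie["attendus"] else nouvelles).append(copie)
    return instances + nouvelles, completees   # l'instance d'origine est conservée

# Chronique attendant A et B : deux occurrences de B donnent deux reconnaissances
instances = [{"recus": {}, "attendus": {"A", "B"}}]
toujours = lambda inst, e, d: True   # fenêtre d'admissibilité laissée triviale
for (e, d) in [("A", 1), ("B", 3), ("B", 5)]:
    instances, faites = integrer(instances, e, d, toujours)
    for f in faites:
        print("reconnaissance :", f["recus"])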
Ce système de reconnaissance, appelé Chronicle Recognition System (CRS), est construit autour du gestionnaire de graphes temporels IxTeT [GL94]. Il permet ainsi de reconnaître en ligne et efficacement toutes les occurrences des chroniques spécifiées. De plus, ce système est prédictif. En effet, l’utilisateur peut savoir à tout moment quand une reconnaissance est attendue et quels évènements sont requis pour compléter chaque instance partielle. Une définition plus formelle de la notion de chronique et du système de reconnaissance associé est présentée dans [DG94]. Figure 1.5 – Architecture du système de reconnaissance avec focalisation temporelle [DLM07] Dans [DLM07], Dousson et al. proposent une technique d’optimisation de CRS. Il s’agit tout d’abord d’appliquer une méthode de focalisation temporelle qui permet d’optimiser les situations où certains évènements sont beaucoup plus fréquents que d’autres. Afin de limiter le nombre d’instances partielles créées suite à un évènement fréquent en attente d’un évènement rare, on définit un niveau pour chaque type d’évènement. Ceci permet d’établir un critère d’intégration au système de reconnaissance : un évènement de niveau n + 1 n’est pas intégré à une instance donnée 26CHAPITRE 1. SYSTÈMES DE TRAITEMENT FORMEL D’ÉVÈNEMENTS COMPLEXES tant que tous les évènements des niveaux 1 à n n’ont pas été intégrés. Lorsqu’un évènement est intégré à une instance, il est envoyé à toutes les autres instances même si la règle n’est pas vérifiée. Ceci permet de s’assurer que tout évènement est traité exactement une fois. La structure du système est présentée Figure 1.5. Un composant de gestion des évènements est introduit. Les évènements qui ne sont pas immédiatement intégrés sont envoyés au collectionneur qui les stocke dans des flux tampons (il y a un buffer par type d’évènement). Chaque flux tampon manipule trois fenêtres temporelles : la fenêtre assert no more où plus aucun évènement ne sera reçu, la fenêtre de filtrage qui contient les occurrences d’évènements ne convenant à aucune instance, et la fenêtre de focalisation qui contient les dates d’occurrence auxquelles un évènement est attendu et devrait donc être immédiatement envoyé à CRS. Ce mécanisme fournit exactement les mêmes reconnaissances qu’avec la version initiale de CRS et a uniquement un effet sur les performances. Celles-ci ne sont pas améliorées systématiquement ; cela dépend de la fréquence des évènements et de leur position dans le graphe de contraintes. La seconde partie de la méthode est d’introduire un principe de chroniques hiérarchiques qui permet de définir séparément des sous-chroniques et de les intégrer dans une chronique plus complexe. La méthode de focalisation temporelle peut ensuite être appliquée aux sous-chroniques elles-mêmes. Dans le formalisme des chroniques, le processus d’écriture des situations à reconnaître reste une difficulté centrale. Plusieurs réponses y sont apportées. Un système, Frequency Analyser for Chronicle Extraction (FACE) [DD99], permettant à un expert d’analyser des fichiers de logs d’alarmes pour identifier les phénomènes récurrents et ainsi définir des chroniques est développé. A partir d’une grandeur fqmin fournie par l’utilisateur, on définit une notion de chronique fréquente. On construit ensuite par induction sur la taille des chroniques les chroniques fréquentes dans le flux d’évènements étudié, puis les contraintes temporelles associées à l’aide d’un algorithme favorisant certaines contraintes. 
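À titre purement indicatif, l'esprit de cette construction par tailles croissantes peut se schématiser ainsi (esquisse en Python très simplifiée : la fréquence est approchée par une borne grossière et l'apprentissage des contraintes temporelles n'est pas traité) :

from itertools import combinations

def chroniques_frequentes(flux, fqmin, taille_max):
    """Retient, pour chaque taille, les ensembles de motifs d'évènements
    distincts dont le nombre d'instances disjointes dans le flux atteint fqmin."""
    motifs = sorted(set(flux))
    resultats = []
    for taille in range(1, taille_max + 1):
        for candidat in combinations(motifs, taille):
            # borne sur le nombre d'instances disjointes : minimum des occurrences
            if min(flux.count(m) for m in candidat) >= fqmin:
                resultats.append(candidat)
    return resultats

flux = ["alarme", "panne", "alarme", "reprise", "alarme", "panne"]
print(chroniques_frequentes(flux, fqmin=2, taille_max=2))
# [('alarme',), ('panne',), ('alarme', 'panne')]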
Partir d’une base de chroniques déjà posée par des experts permet de diminuer considérablement le temps de calcul. Il s’agit ensuite de filtrer l’ensemble des chroniques fréquentes obtenues : pour déterminer s’il est intéressant de chercher à reconnaître à la fois une chronique et l’une de ses sous-chroniques, une notion de dépendance est définie. [FCD04] propose une méthode de pré-traitement des fichiers pour en extraire des sous-fichiers appropriés et ainsi alléger la saturation de mémoire provoquée par FACE. Une seconde méthode d’aide à l’écriture de chroniques est fondée sur une simulation du système qui permet de faire apparaître des configurations caractéristiques d’évènements. Ceci permet de ré- colter les séquences d’évènements datés associés et ainsi de former, pour chaque configuration, une liste de séquences positives (i.e. liées à la configuration) et une liste de séquences négatives. Une mé- thode de programmation logique inductive (ILP) [MDR94] peut ensuite être appliquée sur ces deux listes pour en dériver des chroniques [CD00]. Les techniques d’Inductive Logic Programming (ILP) peuvent être également utilisées directement sur des flux d’évènements, combinées avec l’Inductive Constraint Logic (ICL) [DRVL95] qui permet l’expression de contraintes sur le type de chroniques à apprendre, assurant ainsi une caractérisation précise des situations à reconnaître [QCCW01]. Dans [DG94], Dousson et al. commencent à introduire une notion d’incertitude autour de la datation des évènements du flux en utilisant des ensemble flous pour exprimer les ensembles de dates possibles pour un évènement. 27Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements Subias et al. partent du formalisme des chroniques de Dousson et al. et l’adaptent aux systèmes distribués. En effet, il est difficile de développer des mécanismes capables de dater des évènements dans un système distribué, et, de plus, des délais de transmission sont à prendre en compte. Il est donc intéressant de subdiviser l’architecture de contrôle en sous-sites de contrôle pouvant se chevaucher par endroits pour faciliter le diagnostic. Pour ce faire, Subias et al. définissent dans [BSC02] une notion de sous-chronique comme extension de la notion de chronique. Une sous-chronique est composée d’un ensemble d’évènements E et d’un ensemble de contraintes sur les évènements de E mais aussi sur des évènements extérieurs. Une chronique peut alors se décomposer en souschroniques, avec une sous-chronique correspondant à chaque sous-site de contrôle et telles que l’ensemble des évènements E de chaque sous-chronique doit être inclus dans l’ensemble des évènements observables par le sous-site de contrôle concerné. Alors, une chronique est reconnue lorsque toutes les sous-chroniques sont reconnues. Une sous-chronique possède deux types de contraintes : les contraintes locales portant uniquement sur les évènements de E, et les contraintes globales faisant intervenir des évènements extérieurs. 
Le cadre des réseaux de Petri p- et t-temporels (réseaux de Petri classiques auxquels deux types de mécanismes, détaillés ci-dessous, ont été ajoutés [Kha97, BD91]) est choisi pour modéliser le processus de reconnaissance distribué car, d’après [BSC05] : — il offre une visualisation claire de l’état courant de la reconnaissance de la chronique/souschronique ; — il est approprié pour simuler l’évolution d’une chronique à l’aide d’outils, ou pour revenir en arrière ; — les occurrences multiples sont facilement représentables ; — ils pourraient permettre de démontrer la correction du modèle de contraintes temporelles. Les mécanismes t-temporels (contraintes temporelles de type intervalle sur les transitions) sont utilisés pour exprimer les contraintes de fenêtres d’admissibilité de la chronique (contraintes de type 1 ≤ minp(d(e1) − d(ep)) ≤ 3). Les mécanismes p-temporels (contraintes temporelles sur la durée de séjour admissible d’un jeton dans une place) permettent quant à eux l’expression des contraintes de type intervalle (i.e. du type 1 ≤ d(e1) − d(e2) ≤ 3). Chaque type de contrainte est transposé en une brique de réseau de Petri temporel. Un réseau correspondant à une chronique est réalisé en fusionnant les briques de réseau correspondant à chaque contrainte de la chronique. Dans le réseau obtenu, il n’y a pas de situation de conflit structurel (i.e. il n’y a pas de place précédant plusieurs transitions) car les vérifications de contraintes doivent être réalisées de manière indépendantes. Chaque jeton du réseau obtenu correspond à une instance (partielle ou non) de la chronique. Les occurrences d’évènements sont représentées par des transitions, et les jetons (instances partielles) sont dupliqués et complétés au tirage de ces transitions pour obtenir in fine toutes les reconnaissances complètes [BSC02]. Dans le cas des sous-chroniques, la problématique principale est la vérification des contraintes globales. Pour vérifier celles-ci, la sous-chronique doit recevoir les informations adaptées de l’extérieur. Les transitions et places correspondantes sont donc également ajoutées au réseau. La problématique du délai de transmission entre les sous-sites de contrôle est étudiée dans [BSC04]. Le centre de la question est que ce délai induit une incertitude sur la vérification des contraintes. La méthode employée est la suivante. Le délai ∆ de transmission est supposé borné. Les contraintes glo- 28CHAPITRE 1. SYSTÈMES DE TRAITEMENT FORMEL D’ÉVÈNEMENTS COMPLEXES bales du système sont réécrites sous forme d’expressions dépendant de mesures locales (comme les contraintes locales) et des bornes du délai ∆. La possibilité, entre 0 et 1, de vérifier la contrainte est ensuite quantifiée : on obtient des ensembles flous de valeurs permettant de vérifier les contraintes. Dans les cas où la valeur n’est ni 0 ni 1 mais dans ]0,1[, [BSC04] propose un système de coopération qui peut être lancé pour que la contrainte soit vérifiée par un autre sous-site de contrôle. Pour mieux manipuler ces délais temporels, [BSC05] passe aux réseaux de Petri p- et t-temporels flous. 1.5 Le langage des chroniques Onera 1.5.1 Une première implémentation : CRS/Onera Ornato et Carle présentent une première notion de chronique dans [OC94a, OC94b]. Il s’agit de répondre à la problématique de l’automatisation de la reconnaissance des intentions d’un agent considéré comme une boîte noire. 
Contrairement au domaine de la reconnaissance de plans, les auteurs ne font pas d’hypothèse forte sur le sujet observé : il n’est pas nécessaire d’avoir une base de connaissance exhaustive décrivant les différents plans pouvant être suivis par le sujet, et, en particulier, le sujet n’est pas supposé effectuer les plans sans erreur ni n’en suivre qu’un seul à la fois. De plus, les auteurs souhaitent pouvoir exprimer des notions telles que la suspension ou l’abandon d’un but. Dans cette optique, un système de reconnaissance de comportements, Chronicle Recognition System/Onera (CRS/Onera) est implémenté. Il est fondé sur un langage temporel, le langage des chroniques, qui permet la description de comportements complexes à l’aide d’évènements typés pouvant être dotés de propriétés et des opérateurs suivants : — la séquence, le non-ordre (conjonction) et la disjonction, — la non-occurrence d’une chronique sur un intervalle de temps délimité par une seconde chronique (notons qu’il n’y a pas de garantie d’ordre de traitement entre les deux chroniques à l’arrivée d’un évènement et qu’il y a donc des formes indéterminées), — une notion de délai, — un opérateur de coupure, le cut, permettant de réduire la combinatoire due à l’exhaustivité recherchée dans le processus de reconnaissance : le cut désigne uniquement la première reconnaissance dans le flux, — opérateur d’indexation d’évènements permettant d’identifier à une unique occurrence plusieurs évènements de même nom dans une chronique (les opérateurs de non occurrence et de disjonction sont cependant opaques pour l’indexation). Le système de reconnaissance CRS/Onera se voit imposer trois contraintes principales : 1. les reconnaissances doivent être exhaustives (i.e. toutes les reconnaissances possibles doivent être détectées) ; 2. il doit y avoir une historisation des évènements (i.e. il faut être capable de dire quels évènements sont à l’origine de chaque reconnaissance) ; 3. le processus de reconnaissance doit être suffisamment efficace pour être réalisé en ligne. Pour répondre à ces contraintes, l’algorithme de CRS/Onera a été conçu sur la base d’automates dupliqués représentant les instances éventuellement partielles des chroniques à reconnaître. Chaque 29Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements reconnaissance partielle de chronique est dupliquée et mise en attente des évènements attendus pour la complétion de la reconnaissance. Cette duplication permet à la fois d’assurer l’exhaustivité du processus et également de gérer les constructions de non-occurrence. CRS/Onera propose également la gestion d’actions et de tests à la reconnaissance. Il est possible d’exprimer des conditions à vérifier sur la reconnaissance : si l’expression précisée est fausse, alors l’instance de chronique est éliminée. D’autre part, un évènement peut être envoyé dans le flux à la suite d’une reconnaissance. Celui-ci peut ensuite être utilisé dans une autre chronique et ainsi définir des chroniques de niveau supérieur. Il y a également la possibilité d’exécuter du code C++ avant que le système n’engendre d’évènement, ou après. CRS/Onera se compose d’un compilateur engendrant du code C++. L’utilisateur décrit des fichiers de systèmes de chroniques et le compilateur CRS/Onera engendre le code C++ correspondant avec un facteur d’expansion d’environ 50. Celui-ci est ensuite compilé. Chaque chronique est alors une classe dont les méthodes gèrent les évolutions et les duplications d’instances. 
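L'effet de l'opérateur de coupure peut s'illustrer ainsi, sur des reconnaissances représentées par des ensembles d'indices (esquisse en Python ; le code réellement engendré par CRS/Onera est du C++ et n'est pas reproduit ici) :

def cut(reconnaissances):
    """Ne conserve que la première reconnaissance dans le flux, c'est-à-dire
    celle qui se termine le plus tôt, ce qui limite la combinatoire due à
    l'exhaustivité du processus de reconnaissance."""
    if not reconnaissances:
        return None
    return min(reconnaissances, key=lambda r: (max(r), sorted(r)))

print(cut([{1, 3}, {1, 6}, {3, 5}]))   # {1, 3}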
1.5.2 Définition d’une sémantique du langage des chroniques Dans la lignée de CRS/Onera s’inscrivent les travaux de Bertrand et al. [Ber09] dont l’objectif est d’établir une sémantique du langage des chroniques de CRS/Onera. Une sémantique ensembliste est donnée dans [Ber09] puis aboutie dans [CCK11] pour quatre opérateurs de base (la séquence, la conjonction, la disjonction et l’absence). Un ensemble de reconnaissances est défini pour chaque chronique, explicitant formellement ce que cela signifie de reconnaître dans un flux d’évènements donné le comportement décrit par une chronique. Cette sémantique est détaillée dans la sous-section suivante (1.5.3). Une seconde sémantique opérationnelle est également développée. Une comparaison de différentes modélisations possibles du processus de reconnaissance de chroniques est réalisée dans [BCC07]. Les auteurs se concentrent sur deux principales difficultés : la multiplicité des reconnaissances et l’historisation des évènements donnant lieu aux reconnaissances. Les automates standards à états finis permettent la reconnaissance d’expressions régulières, mais une chronique, de par la multiplicité de la notion de reconnaissance, n’est pas une expression régulière. Un automate standard ne peut donc reconnaître qu’une seule fois une chronique ce qui ne répond pas à la problématique initiale. Si l’on introduit des automates à compteurs, les occurrences multiples d’une chronique peuvent alors être comptabilisées mais il n’est alors pas possible de distinguer les différentes reconnaissances comme il n’y a pas d’historisation des évènements. En revanche, les automates dupliqués, en créant une instance d’automate pour chaque reconnaissance partielle d’une chronique, permettent de reconnaître toutes les occurrences d’une chronique tout en préservant l’information de quels évènements ont donné lieu à chaque reconnaissance. Cependant, cette méthode ne permet pas d’avoir une approche modulaire dans le processus d’écriture de chroniques. Dans cette optique, les réseaux de Petri colorés qui permettent multiplicité et historisation, sont choisis car non seulement ils sont dotés de moyens de construction modulaire, mais encore des outils d’édition, de simulation et d’analyse sont disponibles pour mettre en avant les propriétés des réseaux. Une sémantique en réseaux de Petri colorés est donc établie [BCC08, BCC09, Ber09, CCK11]. 30CHAPITRE 1. SYSTÈMES DE TRAITEMENT FORMEL D’ÉVÈNEMENTS COMPLEXES Un réseau est construit pour chaque chronique. Les transition du réseau sont tirées en fonction du flot d’évènement et les reconnaissances (éventuellement partielles) sont produites et/ou complétées en conséquence. La construction de ces réseaux se fait par induction sur la structure du langage à partir de briques de base, mais celle-ci n’est pas formalisée entièrement. 1.5.3 Détail de la sémantique ensembliste du langage des chroniques de CRS/Onera Dans cette sous-section, nous détaillons précisément la sémantique du langage des chroniques telle que présentée dans [CCK11]. Nous présentons le langage des chroniques puis formalisons le concept d’évènement pour pouvoir ensuite définir la notion de reconnaissance d’une chronique. Une chronique décrit un certain agencement d’évènements. Le langage est construit à partir d’évènements simples et des opérateurs suivants, où C1 et C2 sont des chroniques : — la disjonction C1 | | C2 qui correspond à l’occurrence d’au moins l’une des deux chroniques C1 et C2. 
— la conjonction C1&C2 qui correspond à l’occurrence des deux chroniques C1 et C2, dans un ordre quelconque, éventuellement entrelacées. C1 C2 — la séquence C1 C2 qui correspond à l’occurrence de la chronique C1 suivie de l’occurrence de la chronique C2. C1 C2 — l’absence (C1) − [C2] qui correspond à l’occurrence de la chronique C1 sans occurrence de la chronique C2 pendant l’occurrence de C1. Formellement, on définit le langage des chroniques à partir d’un ensemble donné de noms d’évènement comme suit : Définition 1 (langage des chroniques). Soit N un ensemble dénombrable dont les éléments sont des noms d’évènement simple. On définit l’ensemble des chroniques sur N, noté X(N), par le schéma inductif suivant : A ∈ N A ∈ X(N) (nom) C1 ∈ X(N) C2 ∈ X(N) C1 | | C2 ∈ X(N) (disjonction) C1 ∈ X(N) C2 ∈ X(N) C1&C2 ∈ X(N) (conjonction) C1 ∈ X(N) C2 ∈ X(N) C1 C2 ∈ X(N) (séquence) C1 ∈ X(N) C2 ∈ X(N) (C1) − [C2] ∈ X(N) (absence) Considérons deux exemples illustrant informellement l’expressivité du langage. Exemple 1. Soit A, B et D des noms d’évènement simple de N. La chronique (A&B) | | D correspond à deux évènements de noms respectifs A et B dans un ordre quelconque ou à un évènement de nom D. 31Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements Exemple 2. Soit A, B, D et E des noms d’évènement simple de N. La chronique (A B)−[D | | E] correspond à un évènement de nom A suivi d’un évènement de nom B, sans occurrence ni d’un évènement de nom D ni d’un évènement de nom E entre les occurrences de A et B. Évènements Nous souhaitons définir la notion de reconnaissance de chronique afin de munir le langage d’une sémantique. Pour ce faire, il est nécessaire de formaliser au préalable le concept d’évènement et les notions qui lui sont associées. Définition 2 (évènements). Soit N un ensemble dénombrable de noms d’évènement. Soit E un ensemble dénombrable dont les éléments sont des évènements. Une fonction de nommage est une fonction totale ν : E 7→ N. Elle associe un nom à chaque évènement. Le triplet (E, N, ν) est alors appelé espace nommé d’évènements. Remarque 1. On distingue ainsi les noms des évènements (qui servent à construire les chroniques) que l’on notera par convention en majuscules (A, B, C, D, . . .) et les évènements (qui constituent les données observées à analyser) que l’on notera en minuscule (a, b, c, d, . . .). Pour faciliter la compréhension, on posera en général ν(a) = A, ν(b) = B, . . . Définition 3 (flux d’évènements). Soit (E, N, ν) un espace nommé d’évènements et soit I ⊆ N. Un flux d’évènements est une suite ϕ = (ui)i∈I d’éléments de E. On notera son domaine ◦ ϕ. On a ainsi I = ◦ ϕ. Il s’agit de l’ensemble des indices d’occurrence des évènements. Par convention, si rien n’est spécifié, on commencera la numérotation à 1 (car cela correspondra à celle engendrée par le modèle de reconnaissance en réseaux de Petri colorés qui sera présenté Chap. 3 et 4). Si ϕ = (ui)i∈I est un flux d’évènements et si J ⊆ I, on définit la restriction de ϕ à J, notée ϕ|J , par ϕ|J = (ui)i∈J . Pour un flux ϕ = (ui)i∈I , on définit les fonctions Eϕ(i) = ui et Nϕ(i) = ν(Eϕ(i)) = ν(ui). Eϕ(i) correspond au i e évènement du flux ϕ. Nϕ(i) correspond au nom du i e évènement du flux ϕ. Reconnaissance d’une chronique Il s’agit maintenant de s’appuyer sur les définitions précédentes pour doter le langage d’une sémantique ensembliste définissant la notion de reconnaissance de comportements. 
Une reconnaissance d’une chronique est représentée par l’ensemble exact des indices des évènements ayant donné lieu à la reconnaissance. Il est donc nécessaire de commencer par définir les notions suivantes liées aux indices. Définition 4 (instances). Soit (E, N, ν) un espace nommé d’évènements sur lequel est défini un flux d’évènements ϕ = (ui)i∈I . 32CHAPITRE 1. SYSTÈMES DE TRAITEMENT FORMEL D’ÉVÈNEMENTS COMPLEXES Une instance de ϕ est un sous-ensemble fini de I. Un support de ϕ est un sous-intervalle fini de I. (Ainsi, tout support de ϕ est une instance de ϕ.) Le support d’une instance r de ϕ est l’ensemble [r] = {i ∈ I : min r ≤ i ≤ max r}. On notera ]r] = [r] \ {min r} et [r[= [r] \ {max r}. On dit que deux instances r et r 0 de ϕ sont compatibles, noté r ./ r0 , si max r < min r 0 (c’est- à-dire si r « précède » r 0 ). On définit la relation ternaire « r est la réunion compatible de r1 et r2 » par r = r1 ./ ∪ r2 si, et seulement si, r1 ./ r2 ∧ r = r1 ∪ r2. Remarque 2. ./ est une relation d’ordre strict sur les instances de I. Pour chaque chronique, en fonction du flux ϕ étudié, on définit par induction l’ensemble des reconnaissances associées. Définition 5 (ensembles de reconnaissances). Soit (E, N, ν) un espace nommé d’évènements sur lequel est défini un flux d’évènements ϕ = (ui)i∈I . Soit C ∈ X(N). On définit par induction l’ensemble des reconnaissances de C sur le flux ϕ, noté RC (ϕ) : — si C = A ∈ N, alors RA(ϕ) = {{i} : i ∈ ◦ ϕ ∧ Nϕ(i) = A}. La chronique A est reconnue lorsqu’un évènement de nom A a lieu. — si C = C1 | | C2, alors RC (ϕ) = RC1 (ϕ) ∪ RC2 (ϕ). La chronique C1 | | C2 est reconnue si la chronique C1 est reconnue ou si la chronique C2 est reconnue. — si C = C1&C2, alors RC (ϕ) = {r1 ∪ r2 : r1 ∈ RC1 (ϕ) ∧ r2 ∈ RC2 (ϕ)}. La chronique C1&C2 est reconnue si la chronique C1 est reconnue et si la chronique C2 est également reconnue, sans autre contrainte. C1 C2 — si C = C1 C2, alors RC (ϕ) = {r1 ∪r2 : r1 ∈ RC1 (ϕ)∧r2 ∈ RC2 (ϕ)∧r1 ./ r2}. La chronique C1 C2 est reconnue si la chronique C1 est reconnue, si la chronique C2 est reconnue et si la reconnaissance de C1 précède le début de la reconnaissance de C2. C1 C2 — si C = (C1)−[C2], alors RC (ϕ) = {r1 : r1 ∈ RC1 (ϕ)∧(Pf([r1[)∩RC2 (ϕ) = ∅)} où, pour tout ensemble s, Pf(s) est l’ensemble des parties finies de s. La chronique C = (C1) − [C2] est reconnue si la chronique C1 est reconnue et s’il n’y a pas eu de reconnaissance de la chronique C2 pendant la reconnaissance de la chronique C1. Ainsi, pour une chronique C et un flux ϕ, chaque reconnaissance de C dans ϕ correspond à un ensemble d’indices (les indices des évènements donnant lieu à la reconnaissance), et RC (ϕ) est l’ensemble de tous ces ensembles. Exemple 3. Soit a, b, d et e des évènements de E tels que ν(a) = A, ν(b) = B, ν(d) = D et ν(e) = E. 33Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements Considérons la chronique C = (A&B) | | D et le flux ϕ = (a, e, b, d, a, b) avec ◦ ϕ = J1, 6K. On a alors RC (ϕ) = {{4}, {1, 3}, {1, 6}, {3, 5}, {5, 6}}. aebdab 1 23 456 Exemple 4. Soit a, b, d, e, f et g des évènements de E tels que ν(a) = A, ν(b) = B, ν(d) = D, ν(e) = E, ν(f) = F et ν(g) = G. Considérons la chronique C = (A B) − [F] et le flux ϕ = (d, a, e, a, b, g, a, f, b) avec ◦ ϕ = J1, 9K. On a alors RC (ϕ) = {{2, 5}, {4, 5}}. Notons que {2, 9}, {4, 9} et {7, 9}, bien qu’étant des reconnaissances de A B, ne sont pas des reconnaissances de C car Nϕ(8) = F. Exemple 5. 
1.6 D'autres modes de représentation et de reconnaissance de comportements

Dans les quatre sections précédentes, nous avons détaillé différentes approches du traitement d'évènements complexes. Dans cette section, nous présentons plus succinctement une dernière sélection de systèmes de reconnaissance de comportements moins proches de notre problématique. Nous renvoyons à [CM12] pour une collection plus complète de systèmes d'IFP. Dans [CM12], les problématiques et différentes options envisageables pour l'IFP sont détaillées, puis un grand nombre de systèmes sont présentés, répartis en quatre groupes :
— le domaine des bases de données actives ;
— les systèmes de gestion de flux de données ;
— les systèmes de CEP ;
— les systèmes disponibles dans le commerce.
Nous renvoyons également à d'autres surveys pour une vision plus complète [dCRN13, FTR+10, ASPP12]. [FTR+10] présente le CEP conjointement avec le domaine de l'analyse prédictive à travers une sélection d'articles, d'outils commerciaux puis d'outils académiques ou en libre accès. [ASPP12] compare les chroniques de Dousson (Section 1.4), l'EC (Section 1.2) et la logique de Markov qui permet la prise en compte d'incertitudes dans le domaine du CEP [BTF07, dSBAR08, HN03, TD08, KDRR06]. Les trois approches sont comparées sur trois axes : la description des comportements à reconnaître, le raisonnement à réaliser pour effectuer la reconnaissance, et les méthodes d'apprentissage existantes pour mettre en œuvre l'écriture des comportements.

MUSE [KM87] Kumar et Mukerjee établissent un système de reconnaissance incrémental appelé MUSE. Le modèle de temps adopté est linéaire et discret, avec une résolution suffisamment fine. Un évènement est un couple (ϕ, τ) où ϕ correspond à un nom d'évènement et τ est un ensemble d'instants consécutifs, décrivant ainsi un certain intervalle de temps. Des assertions temporelles peuvent être utilisées : les 13 assertions d'Allen ainsi que quatre autres relations permettant d'exprimer des relations entre des évènements encore « incomplets » (i.e. dont l'ensemble d'instants consécutifs où l'évènement est vérifié n'est pas encore complet). Ceci permet d'assurer que, pour chaque couple d'évènements donné, une unique de ces dix-sept assertions est toujours vérifiée. Le processus de reconnaissance peut ainsi être effectué au fur et à mesure, à l'aide d'automates à états finis qui résument les différentes règles. Des conjonctions et disjonctions d'assertions peuvent ensuite être spécifiées.
Pour finir, une sémantique temporelle sur les évènements permet de définir l’ensemble d’instants α d’une reconnaissance et ainsi de s’adapter à différents cas : on peut avoir tout simplement α = τ , mais on peut aussi définir α = min(τ ) (sémantique instantanée) ou d’autres sémantiques plus complexes. SAMOS [GD94b, GD94a] Swiss Active Mechanism based Object-oriented database Systems (SAMOS) est un système de gestion de base de données actives qui offre notamment un langage de description de comportements qui permet de spécifier des évènements complexes à intégrer aux règles de gestion. Les évènements considérés sont ponctuels (pour les évè- nements complexes, une algèbre d’évènements permet de définir leur instant d’occurrence) et dotés de paramètres. Des opérateurs – disjonction, conjonction, séquence, n occurrences dans un intervalle donné, absence d’occurrence dans un intervalle donné, première occurrence uniquement sur un intervalle donné – permettent de composer les évènements. Le mot clé same peut être apposé à un évènement complexe pour préciser des contraintes d’égalité sur les paramètres des évènements mis en jeu. Le système est également doté d’un intervalle de suivi des évènements indiquant une fenêtre dans laquelle reconnaître un comportement donné (reconnaître E dans I). Cet intervalle peut aussi bien être délimité explicitement avec des instants absolus qu’implicitement, et il peut être défini pour réapparaître périodiquement. Pour certaines constructions comme l’absence d’un comportement, il est obligatoire de préciser un tel intervalle. [GD94a] propose un modèle de reconnaissance des évènements complexes ainsi définis à l’aide d’un formalisme proche des réseaux de Petri colorés, car celui-ci permet de faire circuler dans le réseau les informations relatives aux paramètres des évènements complexes ou non. La notion de SAMOS Petri nets (S-PN) est introduite. Il s’agit de réseaux de Petri colorés possédant trois types de places, les places en entrée correspon- 35Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements dant aux évènements composant un évènement complexe, les places en sortie correspondant aux évènements complexes, et des places auxiliaires. Les opérateurs sont représentés par des transitions, et le mot clé same par une garde éventuelle sur les transitions. Une contrainte « d’absence de conflit » est imposée, c’est-à-dire qu’un jeton ne peut pas activer deux places simultanément. Seules les constructions relatives à la conjonction et à la reconnaissance de n occurrences sur un intervalle sont présentées. Lorsqu’un évènement complexe fait partie de la composition d’un autre évènement complexe, les réseaux correspondants sont combinés. Inversement, lorsqu’un évènement simple participe à plus d’un événement complexe, le jeton correspondant à l’évènement simple est dupliqué ; ainsi, à l’occurrence d’un évènement, seule une place doit être marquée. Ce modèle de reconnaissance est implémenté au sein de SAMOS. GEM [MSS97, MSS96] Le langage Generalised Event Monitoring (GEM) est un langage déclaratif fondé sur un système de règles et s’attachant à la reconnaissance de comportements dans le cadre de systèmes distribués. La seule hypothèse réalisée est l’existence d’une horloge globale bien synchronisée. Les évènements considérés sont dotés d’attributs, dont par défaut, la classe d’évènement auquel l’évènement appartient, le composant du système dont il est issu et son instant d’occurrence. 
Des évènements complexes peuvent être construits à l’aide d’opérateurs de conjonction, de délai suivant une reconnaissance, d’absence d’un évènement complexe entre deux autres évènements, de disjonction, et de séquence. Une garde optionnelle peut exprimer des contraintes sur les attributs et sur les instants de début et de fin des reconnaissances, ce qui permet notamment de décrire l’ensemble des relations d’Allen, et un opérateur d’identification permet de se référer à une instance précise d’un évènement dans une règle. Les règles sont construites en quatre parties : — un identifiant unique de la règle ; — une fenêtre de détection déterminant la durée pendant laquelle doit être conservé l’historique des évènements liés à la règle ; — la description de l’évènement complexe à reconnaître ; — les actions à effectuer si l’évènement complexe est détecté (actions de notification explicite de l’évènement – interne ou externe à la règle –, transfert de certains évènements ayant donné lieu à la reconnaissance – dans une visée de filtrage par exemple –, activation ou désactivation de règles. Des commandes de contrôle globales similaires aux actions ci-dessus sont également disponibles. Le processus de détection des comportements est fondé sur une structure arborescente associée à chaque comportement à reconnaître et suivant la structure de l’expression du comportement. Chaque nœud possède un type identifiant l’opérateur dont il s’agit, la garde associée, et l’historique des évènements correspondants. À l’arrivée d’un évènement, celui-ci est inséré à sa place dans l’arbre, sans considération temporelle autre, ce qui permet d’autoriser un retard dans l’arrivée des évènements, dans la limite de la fenêtre temporelle de détection définie. La gestion du retard des évènements se fait donc au sein même de l’étape de détection. Au niveau de chaque nœud, l’historique des évènements concernés est conservé, dans le cadre de la fenêtre de détection, ce qui permet de diminuer les évènements à ordonner. Des pointeurs sont également utilisés pour éviter la duplication d’historiques. [MSS96] décrit l’intégration et l’implémentation de GEM en C++. 36CHAPITRE 1. SYSTÈMES DE TRAITEMENT FORMEL D’ÉVÈNEMENTS COMPLEXES Rete [For82, Ber02, WBG08, WGB08] Rete [For82] est un algorithme permettant de comparer une grande quantité de motifs (patterns) à une grande quantité d’objets. Il s’agit de déterminer l’intégralité des correspondances correctes et l’accent est porté en particulier sur l’efficacité du système. La version originale de Rete ne permet pas de considérations temporelles, et il existe de nombreuses extensions de ce système. [Ber02] introduit dans Rete une sémantique temporelle où tout évènement simple ou complexe est considéré comme ponctuel. Les règles définissant les comportements à reconnaître sont compilées pour obtenir des graphes composés de trois types de nœuds : les nœuds de classe qui filtrent les faits selon les classes, les nœuds de jonction qui combinent les faits, et les nœuds de discrimination qui filtrent les faits selon leurs attributs. Les faits sont alors stockés au niveau de chaque nœud et sont propagés en fonction des règles transcrites. Pour la gestion des contraintes temporelles, une horloge interne discrète est introduite conjointement avec une notion d’évènement qui s’oppose à la notion de fait. 
Les évènements sont datés et des contraintes temporelles peuvent être spécifiées sur les dates d’occurrence (on rappelle que chaque évènement même complexe est considéré comme ponctuel) avec les prédicats before et after bornant à un intervalle la différence entre les deux dates d’occurrence des évènements. Les contraintes peuvent être combinées avec des opérateurs de disjonction, de conjonction et de négation. L’accent est mis sur l’importance de la notion de changement d’état qui peut être exprimée à l’aide des évènements introduits. L’introduction de contraintes temporelles permet une gestion de la mémoire à l’intérieur même du système : les évènements rendus obsolètes sont oubliés. Une autre extension de Rete est développée dans [WBG08]. Elle s’oppose à [Ber02] dans la considération des évènements qui ne sont plus ponctuels mais dotés d’un instant de début et d’un instant de fin, ce qui est fondamental pour éviter des erreurs de reconnaissance dans le cas de séquences (problématique mise en avant dans [GA02]). Rete est donc étendu avec une sémantique temporelle d’intervalle qui permet l’expression des treize relations d’Allen étendues de deux manières : — possibilité de définir la valeur exacte ou une plage de valeurs sur la durée séparant l’instant de début et l’instant de fin de deux intervalles de temps dans le cas des opérateurs non restrictifs (during, finishes, starts, overlaps, before) ; — définition possible de limites de tolérance au niveau des opérateurs restrictifs (par exemple pour equals). Il faut noter que ces extensions suppriment le caractère exclusif des opérateurs d’Allen mais permettent de s’adapter aux situations réelles que l’on peut souhaiter exprimer. Un système de calcul de la durée de vie des évènements est implémenté, découlant des contraintes temporelles spécifiées. [WGB08] présente une extension de Rete complémentaire à [WBG08] pour la gestion de fenêtres temporelles de validité autour des évènements. Ces fenêtres temporelles rendent obsolètes les évènements dont l’instant de fin sort de la fenêtre – notons qu’un évènement peut donc commencer avant la fenêtre dans laquelle il est considéré. Snoop [CM94, CKAK94], SnoopIB [AC06] Le domaine des bases de données actives se consacre notamment à surveiller la séquence d’évènements affectant la base de donnée depuis l’extérieur. Le système reconnaît des comportements et peut y réagir suivant des règles EventCondition-Action (ECA) spécifiant les comportements à reconnaître et les actions à effectuer 37Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements lors de la détection. Snoop [CM94] est un tel système. Il est construit sur un modèle de temps linéaire discret et permet l’analyse d’évènements primitifs ne pouvant avoir d’occurrences simultanées et composés à la fois : — d’évènements explicites externes fournis au système avec leurs paramètres contenant au moins le type de l’évènement ainsi que sa date ; — d’évènements temporels pouvant être absolus (c’est-à-dire datés) ou relatifs (i.e. placés temporellement par rapport à un autre évènement de Snoop) ; — d’évènements de la base de donnée correspondant aux opérations de manipulation des données. Les évènements composites à reconnaître sont définis à partir de ces évènements primitifs et des opérateurs de disjonction, de séquence, de conjonction, n occurrences parmi . . . , l’opérateur apériodique A(E1, E2, E3) (E2 a lieu entre E1 et E3) et l’opérateur périodique P(E1, [t], E3) (E1 puis E3 dure exactement [t]. 
Notons qu’il n’y a pas alors d’expression de négation ou d’absence (la difficulté est évoquée dans la Section 5.4 de [CM94]). Les évènements sont tous considérés comme ponctuels et la notion de modificateur d’évènement est introduite pour définir l’instant d’occurrence d’un évènement initialement non ponctuel. Par défaut, il existe deux modificateurs d’évènements, à savoir begin_of et end_of. Le choix de multiplicité des reconnaissances dans le processus de reconnaissance de ces comportements complexes varie selon le contexte adopté parmi le contexte récent, le contexte chronique, le contexte continu et le contexte cumulatif présentés dans la Section 1.1. Le processus de reconnaissance est fondé sur des graphes associés à chaque évènement complexe à reconnaître et obtenus après compilation des expressions de ces évènements. Les arbres suivent la structure des expressions concernées, et, dans le cas de sous-expressions identiques, les parties associées de l’arbre sont amalgamées. Dans [CM94], un algorithme est détaillé pour le contexte récent qui permet l’utilisation d’un buffer de taille fixe au niveau de chaque nœud des arbres (contrairement aux contextes continu et cumulatif qui demandent beaucoup d’espace mémoire). Une notion d’équivalence d’expressions est définie et peut permettre de réécrire une description de comportement sous une autre forme pour optimiser le processus de reconnaissance par exemple en dévoilant des nouvelles sous-expressions communes. [CKAK94] présente la sémantique de Snoop à l’aide de formules logiques du premier ordre. Cependant, comme pour Rete, la considération d’évènements uniquement ponctuels pose problème dans le cas de composition avec une séquence par exemple car il faut pouvoir également comparer les instants d’initiation (et pas uniquement ceux de terminaison) des reconnaissances (problématique mise en avant dans [GA02]). Pour répondre à ce problème, [AC06] propose SnoopIB, une nouvelle sémantique fondée sur des intervalles et dont la définition formelle est en partie présentée dans [GA02]. Deux nouveaux opérateurs sont introduits, à savoir l’opérateur de non occurrence d’un évènement entre l’instant de terminaison d’un évènement et l’instant d’initiation d’un autre évènement, et l’opérateur plus exprimant l’occurrence d’un évènement suivi d’une durée. 38CHAPITRE 1. SYSTÈMES DE TRAITEMENT FORMEL D’ÉVÈNEMENTS COMPLEXES Domaines d’application Au travers de cette sélection de systèmes d’analyse d’évènements complexes, nous avons un aperçu général des différentes approches possibles, dans des domaines variés. La multiplicité de ces systèmes est due au large éventail d’applications possibles de la reconnaissance de comportements. Nous donnons ici un échantillon des différents domaines dans lesquels l’IFP s’est montrée utile : — supervision et l’analyse de situations dangereuses à l’aide d’un drone pour aider les services de police [Hei01, DKH09a, DKH09b, DGK+00], avec des chroniques ; — de nombreux domaines d’application en médecine comme le monitoring cardiaque où des méthodes d’apprentissage sont appliquées[CD00, CCQW03, QCC+10, Doj96, Por05, CD97], avec des chroniques ; — gestion d’alarmes pour la détection d’intrusions informatique [MD03], avec CRS et les chroniques de Dousson et al. 
; — diagnostic de web-services [PS09, CGR+07, LGCR+08], avec des chroniques ; — évaluation de la qualité de transports publics (projet PRONTO) [KVNA11, VL10], avec l’EC ; — surveillance vidéo (projet CAVIAR) [SA11, ASP10b, ASP10a, AP09], avec l’EC ; — analyse des médias sociaux [SA11] ; — aide à la prise de décision dans le cadre de combats aériens [CV98], avec des chroniques ; — supervision et gestion de réseaux [SEC+10], avec des chroniques ; — dans l’industrie, supervision d’une turbine à gaz dans une usine pétrochimique [MNG+94] et supervision d’une usine de lait [MCCDB10], avec des chroniques ; — caractérisation d’activité humaine [CMM12], avec des chroniques. Après cette introduction et ce survol des systèmes existants, nous allons développer le système de reconnaissance de comportements des Chroniques/Onera afin de se rapprocher d’un système répondant aux enjeux évoqués dans la Section 1.1. 39Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements 40Chapitre 2 Construction d’un cadre théorique pour la reconnaissance de comportements : le langage des chroniques Sommaire 3.1 Définition du formalisme des réseaux de Petri colorés . . . . . . . . 70 3.1.1 Types et expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 3.1.2 Réseaux de Petri colorés . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 3.1.3 La fusion de places . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 3.1.4 Arcs inhibiteurs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 3.2 Construction formelle des réseaux dits « à un seul jeton » . . . . . 77 3.2.1 Types et expressions utilisés dans le modèle . . . . . . . . . . . . . . . . 77 3.2.2 Structure générale des réseaux « à un seul jeton » . . . . . . . . . . . . 79 3.2.3 Briques de base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 3.2.4 Construction par induction . . . . . . . . . . . . . . . . . . . . . . . . . 82 3.3 Formalisation et description de l’exécution des réseaux . . . . . . . 89 3.3.1 Reconnaissance d’un évènement simple . . . . . . . . . . . . . . . . . . . 89 3.3.2 Reconnaissance d’une séquence . . . . . . . . . . . . . . . . . . . . . . . 91 3.3.3 Reconnaissance d’une disjonction . . . . . . . . . . . . . . . . . . . . . . 94 3.3.4 Reconnaissance d’une conjonction . . . . . . . . . . . . . . . . . . . . . 95 3.3.5 Reconnaissance d’une absence . . . . . . . . . . . . . . . . . . . . . . . . 99 3.3.6 Définition formelle de la stratégie de tirage . . . . . . . . . . . . . . . . 106 3.4 Démonstration de la correction du modèle « à un seul jeton » . . . 107 41Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements 3.5 Étude de la taille des réseaux . . . . . . . . . . . . . . . . . . . . . . . 115 3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 De nombreuses applications concrètes de la reconnaissance de comportements nécessitent notamment des moyens à la fois de validation et de traçabilité, comme il sera illustré dans le Chapitre 5. Pour cela, il s’agit de fournir un cadre théorique solide pour la reconnaissance de comportements en adoptant une approche purement formelle qui assure une possibilité de vérification et d’analyse du processus de reconnaissance. 
Dans ce chapitre, nous posons ce cadre théorique en développant un langage de description de comportements, le langage des chroniques introduit dans la Section 1.5 1 , et en formalisant le processus de reconnaissance associé à l’aide d’une sémantique [PBC+] : — nous développons un formalisme autour de la notion d’évènement et de leurs attributs ; — nous étendons largement la syntaxe du langage des chroniques de [CCK11] avec des constructions permettant non seulement l’expression de contraintes temporelles variées mais aussi la spécification de contraintes complexes sur des attributs d’évènement ; — nous introduisons une nouvelle représentation de la notion de reconnaissance de chronique, en passant d’ensembles d’ensembles à des ensembles d’arbres ce qui donne une structure des reconnaissances plus précise et permet ainsi des opérations plus fines sur les reconnaissances ; — nous formalisons le processus de reconnaissance à travers une sémantique du langage que nous avons étendu ; — nous rendons possible l’implémentation du processus de reconnaissance avec un modèle de temps continu grâce à une fonction « Look-ahead » qui fournit le prochain instant où interroger le programme. Dans une première section (2.1), nous posons les définitions générales formalisant le contexte de notre travail d’analyse de comportements complexes, à savoir les notions d’évènement et d’attributs d’évènement. Nous définissons ensuite dans la Section 2.2 la syntaxe étendue du langage des chroniques, permettant notamment la spécification à la fois de contraintes sur des attributs d’évènement et de contraintes temporelles exprimant des exigences sur les délais. La Section 2.3 définit ensuite la sémantique du langage des chroniques, spécifiant ainsi la notion de reconnaissance, et ce après avoir défini une nouvelle représentation arborescente des reconnaissances. Pour nous familiariser avec le langage des chroniques et pour simplifier les démonstrations à venir, nous étudions dans la Section 2.4 les principales propriétés du langage. Dans la visée d’une implémentation du processus de reconnaissance et pour assurer la gestion d’un modèle de temps continu, nous définissons dans la Section 2.5 une fonction dite de « Look-ahead » qui permet par la suite le pilotage des appels au processus de reconnaissance. Le chapitre s’achève avec un tableau récapitulatif informel des propriétés du langage construit dans la Section 2.6. 1. Rappelons que les chroniques étudiées ici se réfèrent à celles introduites par P. Carle dans [CBDO98] qui diffèrent de celles introduites par C. Dousson dans [DGG93]. 42CHAPITRE 2. CONSTRUCTION D’UN CADRE THÉORIQUE POUR LA RECONNAISSANCE DE COMPORTEMENTS : LE LANGAGE DES CHRONIQUES 2.1 Définitions générales préalables L’objectif de cette section est de formaliser la notion d’évènement qui sera manipulée tout au long du chapitre. Nous reprenons le cadre formel posé dans la Section 1.5 [CCK11] en introduisant la notion d’attribut dans la formalisation du concept d’évènement. En effet, nous souhaitons maintenant considérer des évènements munis d’informations (par exemple un identifiant, une qualité, des coordonnées, une vitesse, . . .) sur lesquelles il sera ensuite possible de raisonner en effectuant des comparaisons ou des calculs, et de poser des contraintes. Cette extension primordiale est motivée par de nombreuses applications qui nécessitent de pouvoir raisonner sur des données liées aux évènements du flux que l’on souhaite analyser. 
Par exemple, supposons que nous surveillons un avion en vol dans le but de s’assurer que la fréquence radio sur laquelle il est réglé correspond bien à celle associée à sa zone de vol. Il faut identifier les évènements relatifs à l’avion au milieu de tous les autres évènements, puis accéder aux données relatives à la fréquence radio et à la position de l’appareil afin d’effectuer des comparaisons entre elles. Dans une autre situation, on peut également être amené à effectuer des calculs, pour évaluer la distance entre deux avions et garantir une distance minimale. Nous allons donc introduire la notion d’attribut d’évènement. Les évènements observés pourront être dotés d’une ou plusieurs caractéristiques, que nous appellerons attributs ou propriétés. Nous cherchons à reconnaître des agencements complexes de tels évènements, agencements décrits par des formules de chroniques que nous définirons par la suite. Dans ces chroniques, nous souhaitons exprimer des contraintes sur ces attributs d’évènement. Pour manipuler librement ces propriétés, nous construisons, à partir des attributs d’évènement, des attributs de reconnaissance de chronique qui auront un rôle similaire, à savoir représenter des informations liées au comportement plus global décrit par la chronique. Ainsi, si l’on souhaite écrire des chroniques pour réaliser de l’évitement de collision à partir de mesures radar, il peut être intéressant d’écrire une première chronique calculant la distance entre deux aéronefs à partir des données brutes du radar. Cette chronique correspond alors à la reconnaissance d’une distance avec comme propriétés les identifiants des deux appareils ainsi que la donnée numérique de la distance calculée. La chronique peut ensuite être utilisée au sein d’autres chroniques pour engendrer des alertes par exemple. La chronique, munie de ses nouveaux attributs, peut alors être considérée comme un évènement complexe de plus haut niveau, souvent non ponctuel, et formant une abstraction des évènements du flux. La chronique obtenue peut alors être utilisée pour former une autre chronique, au même titre qu’un simple évènement, et l’on peut disposer de ses attributs. Pour définir ces notions d’évènement et d’attribut, on considère les trois ensembles suivants : — N un ensemble dénombrable de noms d’évènement, contenant un élément τ utilisé pour nommer les instants temporels purs ; — P un ensemble dénombrable de noms de propriété ou aussi de noms d’attribut contenant un élément particulier ♦ dénommant les propriétés anonymes qui désignent les propriétés qui n’ont pas encore été nommées par l’utilisateur ; — V un ensemble de valeurs de propriété. 43Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements 2.1.1 Évènements et leurs attributs Commençons par définir la notion d’évènement. Contrairement à [CCK11], on souhaite considérer un modèle de temps continu. Les évènements sont donc datés par des réels et nous les identifions maintenant par un couple (nom, date) et non plus par leur indice d’occurrence dans le flux d’évènements étudié. Définition 6 (évènements). Un évènement est une paire (e, t) ∈ N × R composée d’un nom d’évènement et d’une date représentée par un réel. Le nom d’évènement τ ∈ N est réservé pour identifier les évènements temporels purs. Sur l’ensemble des évènements E ⊆ N × R, la projection sur la date est la fonction de datation, notée θ : E → R. 
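À titre d'illustration uniquement, ces premières définitions peuvent se transcrire en quelques lignes de Python ; les noms Event, theta et est_flux sont des choix hypothétiques, sans lien avec une implémentation existante :

```python
from dataclasses import dataclass
from typing import Iterable

TAU = "τ"  # nom d'évènement réservé aux instants temporels purs (Définition 6)

@dataclass(frozen=True)
class Event:
    """Un évènement (e, t) : un nom pris dans N et une date réelle."""
    name: str    # e ∈ N
    date: float  # t ∈ R

def theta(u: Event) -> float:
    """Fonction de datation θ : projection d'un évènement sur sa date."""
    return u.date

def est_flux(events: Iterable[Event]) -> bool:
    """Vérifie qu'une suite d'évènements est strictement ordonnée par le temps,
    comme l'exige la définition d'un flux d'évènements."""
    dates = [theta(u) for u in events]
    return all(t1 < t2 for t1, t2 in zip(dates, dates[1:]))

# Exemple : le flux ((a,1), (a,2), (b,3)) réutilisé plus loin dans le chapitre.
phi = [Event("a", 1.0), Event("a", 2.0), Event("b", 3.0)]
assert est_flux(phi)
```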
Les informations spécifiques associées aux évènements sous la forme d’attributs sont regroupées dans un ensemble d’attributs comme suit : Définition 7 (attributs, ensembles d’attributs). Un attribut, aussi appelé une propriété, est une paire a = (p, v) ∈ P × V composée d’un nom de propriété et d’une valeur. Sur l’ensemble des attributs, la projection sur le nom de propriété est appelée la fonction de référence, notée ρ. Un ensemble d’attributs d’évènement est un ensemble X ⊆ P×V vérifiant la propriété fonctionnelle suivante, qui exprime que X est le graphe d’une fonction, c’est-à-dire que chaque propriété n’a qu’une seule valeur : ∀p∀v((p, v) ∈ X ⇒ ∀w((p, w) ∈ X ⇒ w = v)) (2.1) Par la suite on considère un ensemble Ae(P, V) d’ensembles d’attributs d’évènement sur P × V stable par union d’ensembles d’attributs d’évènement ayant des noms disjoints, c’est-à- dire vérifiant la contrainte suivante : ∀X1 ∈ Ae(P, V) ∀X2 ∈ Ae(P, V) {p ∈ P : ∃v ∈ V(p, v) ∈ X1} ∩ {p ∈ P : ∃v ∈ V(p, v) ∈ X2} = ∅ ⇒ X1 ∪ X2 ∈ Ae(P, V) Cette contrainte de stabilité est nécessaire pour la bonne définition des ensembles de reconnaissance donnée dans la Définition 16 (en effet, Ae(P, V) est le domaine de définition de la fonction D de la Définition 11). Ces évènements, dotés éventuellement de leurs attributs, sont regroupés sous la forme de flux d’évènements que l’on souhaite étudier et analyser avec notre système de reconnaissance de comportements. Définition 8 (flux d’évènements). Un flux d’évènements est défini comme une suite d’évènements ϕ = (ui)i∈N ∈ E N ordonnée par rapport au temps : ∀i∀j(i < j ⇒ θ(ui) < θ(uj )) et dotée d’une fonction d’extraction d’attributs α : {ϕ(i) : i ∈ N} → Ae(P, V) qui fournit l’ensemble d’attributs associé à chaque évènement permettant ainsi l’accès aux valeurs des attributs d’évènement simple dans le flux d’évènements. 44CHAPITRE 2. CONSTRUCTION D’UN CADRE THÉORIQUE POUR LA RECONNAISSANCE DE COMPORTEMENTS : LE LANGAGE DES CHRONIQUES 2.1.2 Opérations sur les attributs Lors du processus de reconnaissance qui analyse le flux d’évènements pour reconnaître les comportements décrits par des chroniques, des évènements sont rassemblés pour former des reconnaissances, comme ce sera formalisé dans la Section 2.3. Durant ce processus de reconnaissance, des attributs doivent être manipulés et peuvent être modifiés. Il peut être nécessaire de réaliser des opérations sur divers attributs de différents évènements. Si ces opérations sont imbriquées dans un comportement plus complexe à reconnaître, les résultats de ces opérations doivent être stockés sous forme d’attributs associés cette fois-ci aux reconnaissances et non plus aux évènements, afin de pouvoir être utilisés a posteriori. Comme évoqué précédemment, les reconnaissances de comportements sont donc elles aussi dotées d’attributs. Définition 9 (attribut de reconnaissance). Un ensemble d’attributs de reconnaissance de comportements est un ensemble X ⊆ P×Ae(P, V) vérifiant la propriété fonctionnelle (2.1). L’ensemble des ensembles d’attributs de reconnaissance est noté Ar(P, V). Comme défini au début de cette section, l’ensemble des noms de propriété contient un nom spécifique, ♦, utilisé comme nom anonyme. Lors de la progression du processus de reconnaissance, des attributs peuvent être calculés et enregistrés sous ce nom, en tant que nouveaux attributs temporaires, avant d’être éventuellement nommés pour être utilisés par la suite, comme il sera détaillé 2.3.2. 
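Avant de définir ces opérations, on peut noter qu'un ensemble d'attributs vérifiant la propriété fonctionnelle (2.1) s'apparente à un dictionnaire associant à chaque nom de propriété une unique valeur ; esquisse purement illustrative, aux noms hypothétiques :

```python
from typing import Dict, Iterable, Tuple

# Un ensemble d'attributs donné comme ensemble de paires (nom de propriété, valeur).
Paire = Tuple[str, object]

def est_fonctionnel(attributs: Iterable[Paire]) -> bool:
    """Propriété (2.1) : chaque nom de propriété n'a qu'une seule valeur,
    c'est-à-dire que l'ensemble est le graphe d'une fonction de P dans V."""
    valeurs: Dict[str, object] = {}
    for p, v in attributs:
        if p in valeurs and valeurs[p] != v:
            return False
        valeurs[p] = v
    return True

# Exemple : attributs d'un évènement de position d'aéronef.
X = {("id", "F-GSTA"), ("frequence", 118.075), ("altitude", 3500)}
assert est_fonctionnel(X)
assert not est_fonctionnel(X | {("altitude", 4000)})  # deux valeurs pour "altitude"
```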
Les fonctions suivantes permettent l’expression de telles opérations sur les attributs. Définition 10 (transformations d’attributs). Une transformation d’attributs est une fonction définie sur l’ensemble des ensembles d’attributs de reconnaissance Ar(P, V) dans l’ensemble des ensembles d’attributs d’évènement Ae(P, V) qui permet d’engendrer de nouvelles propriétés qui seront anonymes jusqu’à ce qu’elles soient oubliées ou nommées. Une fonction de transformation d’attributs f doit vérifier la contrainte suivante : ∀Xr ∈ Ar(P, V) {p ∈ P : ∃v ∈ V(p, v) ∈ f(Xr)} ∩ {p ∈ P : ∃Xe ∈ Ae(P, V) ∃pr ∈ P((pr, Xe) ∈ Xr ∧ ∃v ∈ V(p, v) ∈ Xe)} = ∅ qui exprime que les nouveaux attributs d’évènement créés par la fonction f ont des noms strictement différents de ceux déjà employés dans l’ensemble d’attributs de reconnaissance qui est en argument de f. Cette obligation participe à assurer l’unicité d’utilisation des noms de propriété dans une chronique. L’ensemble des fonctions de transformation d’attributs sur (P, V) est noté T(P, V). Les transformations d’attributs produisent ainsi de nouvelles données attachées aux reconnaissances. Pour pouvoir les employer par la suite dans des comparaisons ou des calculs, il est nécessaire de les identifier en les nommant. Les nouvelles propriétés qui sont soit issues d’une transformation d’attributs soit directement issues du flux d’évènements (i.e. des attributs d’évènement) ont d’abord un nom anonyme ♦ qui leur est donné par la fonction suivante : Définition 11 (fonction de dénomination anonyme). Une fonction de dénomination anonyme est définie sur l’ensemble des ensembles d’attributs d’évènement. Elle crée un ensemble d’attributs de reconnaissance, réduit à un singleton, en nommant ♦ l’ensemble d’attributs d’évènement 45Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements initial comme défini ci-dessous : D : Ae(P, V) → Ar(P, V) X 7→  {(♦, X)} si X 6= ∅ ∅ si X = ∅ Les propriétés anonymes ♦ peuvent ensuite être nommées par la fonction suivante : Définition 12 (fonction de renommage d’attributs). La fonction de renommage d’attributs est définie sur l’ensemble des ensembles d’attributs de reconnaissance et sur l’ensemble des noms de propriété. Elle nomme l’attribut anonyme ♦ de l’ensemble d’attributs comme défini ci-dessous 2 : R : Ar(P, V) × P → Ar(P, V) (X, p0 ) 7→ {(p 0 , v) : (♦, v) ∈ X} ∪ {(p, v) ∈ X : p 6= ♦} Ces fonctions permettent d’effectuer toutes les opérations nécessaires à la manipulation d’attributs et sont utilisées dans la Section 2.3.2 pour définir le processus de reconnaissance où l’on manipule des attributs qui doivent être nommés et parfois modifiés. 2.2 Définition d’une syntaxe étendue du langage des chroniques : ajout de contraintes sur des attributs d’évènement et de constructions temporelles Nous pouvons maintenant définir une syntaxe du langage des chroniques étendant largement celle de [CCK11]. De même que dans la Section 1.5, le langage des chroniques est construit par induction à partir d’évènements simples et de divers opérateurs permettant de spécifier des contraintes, temporelles ou non, sur les évènements étudiés. Commençons par détailler les diffé- rentes extensions et modifications envisagées. Expression de contraintes sur des attributs d’évènement Le langage des chroniques peut maintenant être étendu pour permettre de prendre en compte et de raisonner sur les attributs définis dans la Section 2.1.1 à l’aide des fonctions définies dans la Section 2.1.2. 
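Pour fixer les idées, les fonctions D et R ci-dessus peuvent s'esquisser comme suit, en représentant un ensemble d'attributs par un dictionnaire (esquisse illustrative, noms hypothétiques) :

```python
from typing import Dict

ANONYME = "♦"  # nom de propriété anonyme

AttrsEvenement = Dict[str, object]               # ensemble d'attributs d'évènement
AttrsReconnaissance = Dict[str, AttrsEvenement]  # ensemble d'attributs de reconnaissance

def D(X: AttrsEvenement) -> AttrsReconnaissance:
    """Fonction de dénomination anonyme (Définition 11) : enveloppe un ensemble
    d'attributs d'évènement sous le nom anonyme ♦, ou renvoie ∅ si X est vide."""
    return {ANONYME: X} if X else {}

def R(X: AttrsReconnaissance, nouveau_nom: str) -> AttrsReconnaissance:
    """Fonction de renommage (Définition 12) : donne le nom `nouveau_nom` à la
    propriété anonyme ♦ et conserve telles quelles les autres propriétés."""
    renommees = {nouveau_nom: v for p, v in X.items() if p == ANONYME}
    autres = {p: v for p, v in X.items() if p != ANONYME}
    return {**renommees, **autres}

# Exemple : les attributs bruts d'un évènement sont d'abord anonymes, puis nommés "x".
Xr = D({"id": "F-GSTA", "frequence": 118.075})
assert R(Xr, "x") == {"x": {"id": "F-GSTA", "frequence": 118.075}}
```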
Toute chronique est dotée d’un prédicat P qui exprime les contraintes souhaitées sur les propriétés manipulées. Avant de pouvoir valider une reconnaissance, il faut que le prédicat P, évalué sur les valeurs des attributs de la reconnaissance, soit vérifié. Pour la manipulation des attributs, une chronique est également dotée d’une fonction de transformation d’attributs (Définition 10) introduisant de nouveaux attributs. 2. Il est immédiat de montrer que les images de la fonction R sont bien des éléments de Ar(P, V). 46CHAPITRE 2. CONSTRUCTION D’UN CADRE THÉORIQUE POUR LA RECONNAISSANCE DE COMPORTEMENTS : LE LANGAGE DES CHRONIQUES Une construction de nommage Nous ajoutons également une construction de nommage afin de pouvoir spécifier un nom pour nommer les nouvelles propriétés anonymes ♦, comme introduit précédemment dans 2.1. Une telle construction est nécessaire. En effet, un nom unique doit être donné aux nouveaux attributs, ils ne peuvent donc pas être nommés d’après le nom de l’évènement ou de la sous-chronique auquel ils correspondent car plusieurs évènements de même nom ou même plusieurs sous-chroniques peuvent prendre part à une description de comportement, mais il est nécessaire de pouvoir les distinguer. Donc, comme plusieurs évènements d’un même nom peuvent prendre part à la construction d’une chronique, il est nécessaire de pouvoir se référer précisément à l’un d’entre eux afin de savoir de quel évènement il faut récupérer dans le flux les valeurs des attributs pour évaluer le prédicat. Le nom spécifié par la construction de nommage remplit ce rôle mais pour cela il faut assurer qu’un nom donné n’est employé qu’une unique fois. Notion de contexte Pour assurer que les noms de propriété ne sont effectivement utilisés qu’une seule fois dans une chronique, il est nécessaire de faire apparaître des contraintes sur ces noms au niveau de la construction du langage. Pour ce faire, nous définissons la notion de contexte d’une chronique : il s’agit de l’ensemble des noms de propriété mis en jeu dans la chronique. Le contexte d’une chronique donnée, construit par induction en parallèle du langage, est intuitivement généralement l’union des contextes des sous-chroniques directes formant la chronique étudiée (c’est-à-dire l’union des noms de propriété des sous-chroniques). Cette gestion du contexte doit être particularisée pour quelques constructions, et nous détaillons les raisons de ces particularités après la définition suivante. Le contexte est également utilisé pour déterminer quels attributs sont disponibles pour l’évaluation du prédicat P et de la fonction de transformation f associés à la chronique. On devra ainsi distinguer deux types de contextes : un contexte d’évaluation pour le prédicat P et la fonction f, et un contexte résultant à transmettre dans l’induction pour la construction des contextes de chroniques plus complexes. Le contexte évolue donc en deux temps : une première fois avant la reconnaissance de la chronique et l’évaluation du prédicat, et une seconde fois après. Nous appelons contexte d’évaluation le contexte correspondant au domaine possible du prédicat et de la fonction, et contexte résultant, le contexte d’évaluation éventuellement modifié, après l’évaluation du pré- dicat, et qui sert à définir le prochain contexte d’évaluation dans l’induction. Des contraintes qui apparaissent dans la construction du langage pour assurer qu’un nom ne peut être utilisé qu’une seule fois portent sur le contexte d’évaluation. 
Les raisons précises derrière l’existence de ces deux notions de contexte seront développées après la Définition 13. Des opérateurs de contraintes sur les délais Afin de modéliser des contraintes temporelles, dix constructions sont ajoutées par rapport à [CCK11]. Ces constructions découlent de la logique d’intervalles d’Allen [All83] évoquée dans le Chapitre 1. Il s’agit de spécifier des contraintes sur des intervalles de reconnaissance de chroniques, c’est-à-dire, pour une reconnaissance donnée, sur l’intervalle de temps nécessaire et suffisant pour établir cette reconnaissance. Une première partie des opérateurs de contraintes temporelles permet d’exprimer toutes les relations temporelles entre deux intervalles de reconnaissance de deux chroniques (opérateurs equals, starts, finishes, meets, 47Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements overlaps, et during) à l’instar des treize relations d’Allen 3 . D’autres constructions décrivent trois différentes contraintes sur la durée de reconnaissance d’une chronique (opérateurs lasts δ, at least δ, at most δ) exprimant que le temps de reconnaissance de la chronique doit être respectivement exactement, au moins ou au plus une certaine durée δ. Elles correspondent à une transcription des opérateurs d’Allen equals, during et le symétrique de during. Seules ces trois constructions sont retenues car ce sont les seules ayant un sens dans notre contexte. En effet, si l’on considère par exemple la transposition de la relation starts, la contrainte qu’une certaine durée δ « débute » une reconnaissance n’apporte aucune spécification supplémentaire que celle qui impose que la reconnaissance dure au moins δ. La relation overlaps, quant à elle, lorsqu’elle est transcrite entre une reconnaissance et une durée, serait vérifiée dans tous les cas ; de même que la relation before. Il n’y a donc pas de sens à les introduire. La dernière construction spécifie l’écoulement d’un laps de temps directement après la reconnaissance d’une chronique (opérateur then δ) et correspond à une transposition de l’opérateur d’Allen meets. Les bornes de l’absence La notion d’absence est une notion cruciale dans le domaine de la reconnaissance de comportements, comme évoqué dans la Section 1.1. Dans la description d’une absence, il est nécessaire, pour pouvoir statuer d’une reconnaissance, de spécifier l’intervalle de temps pendant lequel le comportement non désiré ne doit pas se produire. Dans [CCK11], le langage des chroniques permet la description de l’absence d’un comportement pendant un autre comportement. C’est ce second comportement qui spécifie l’intervalle de temps à observer et pendant lequel le comportement interdit ne doit pas avoir lieu. Se pose alors la question de l’inclusion ou non des bornes dans l’intervalle considéré : le comportement interdit peut-il commencer en même temps ou se terminer au même instant que le comportement recherché ? Dans [CCK11], un seul cas de figure est formulable : le comportement interdit peut commencer en même temps mais doit terminer strictement avant la fin de la reconnaissance recherchée pour l’invalider. Nous introduisons dans la définition suivante (cf. [absence]) trois nouvelles notations qui permettent l’expression des trois possibilités. Notons que l’ancienne notation est utilisée parmi les trois, mais, afin de rendre la lecture du langage plus intuitive, elle ne désigne plus les même bornes que dans [CCK11] (cf. Définition 16). 
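À titre d'exemple, pour une reconnaissance d'intervalle [2, 7] (durée 5), « lasts 5 » et « at least 3 » seraient satisfaites mais pas « at most 4 », et « then 3 » produirait une reconnaissance à l'instant 10. Esquisse illustrative de ces contraintes de durée, avec les inégalités strictes retenues plus loin dans la Définition 16 pour at least et at most :

```python
def lasts(t_min: float, t_max: float, delta: float) -> bool:
    """« C lasts δ » : la reconnaissance dure exactement δ."""
    return t_max - t_min == delta

def at_least(t_min: float, t_max: float, delta: float) -> bool:
    """« C at least δ » : la reconnaissance dure au moins δ (inégalité stricte)."""
    return t_max - t_min > delta

def at_most(t_min: float, t_max: float, delta: float) -> bool:
    """« C at most δ » : la reconnaissance dure au plus δ (inégalité stricte)."""
    return t_max - t_min < delta

def then(t_max: float, delta: float) -> float:
    """« C then δ » : instant reconnu, δ unités de temps après la fin de C."""
    return t_max + delta

# Pour une reconnaissance d'intervalle [2, 7] :
assert lasts(2, 7, 5) and at_least(2, 7, 3) and not at_most(2, 7, 4)
assert then(7, 3) == 10
```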
Le « cut » Nous ajoutons également plusieurs nouveaux opérateurs. Les deux premiers ajouts correspondent à des séquences dont nous souhaitons limiter la multiplicité des reconnaissances. Nous commençons par ajouter un opérateur que l’on appelle « cut », noté « ! ». Il exprime la reconnaissance consécutive de deux comportements A et B, comme dans le cadre d’un séquence, mais nous la restreignons à la première occurrence du comportement B après chaque comportement A. Nous limitons donc le nombre de reconnaissances par rapport à une séquence classique. 3. L’opérateur d’Allen before correspond à une version stricte de la séquence (la séquence est en fait une disjonction de meets et before) qui fait déjà partie du langage de [CCK11] et n’est donc pas ajouté. Notons également que l’opérateur de conjonction peut alors être exprimé comme une disjonction de tous les opérateurs d’Allen. 48CHAPITRE 2. CONSTRUCTION D’UN CADRE THÉORIQUE POUR LA RECONNAISSANCE DE COMPORTEMENTS : LE LANGAGE DES CHRONIQUES Le changement d’état ou « double cut » Nous ajoutons une seconde construction, correspondant à une séquence dont nous limitons la multiplicité des reconnaissances, avec une chronique exprimant le « changement d’état ». Il s’agit de décrire le passage d’un comportement caractérisant un état à un autre comportement caractérisant un autre état. Cette construction est représentée par l’opérateur noté « !! » et correspond, comme nous le verrons plus clairement par la suite, à un cut « dans les deux sens », donc nous l’appelons également « double cut ». Elle répond à une problématique fréquente lors du traitement de cas concrets. Considérons par exemple un drone à terre dont le décollage doit être détecté. Ses coordonnées de position permettent aisément d’identifier si le drone se trouve à terre (on_ground) ou bien s’il vole (above_ground). Le changement d’état de l’un à l’autre caractérise le décollage de l’appareil. Identifier ce changement d’état correspond à détecter « le dernier on_ground avant le premier above_ground », ce qui correspond à la sémantique de l’opérateur « !! ». Notons que ce changement d’état peut être exprimé à l’aide des autres opérateurs, mais lorsque l’on traite des applications réelles, il apparaît que l’opérateur de changement d’état est souvent nécessaire. Cette construction est donc ajoutée au langage afin de s’affranchir de définitions fastidieuses de chroniques. Une construction d’évènement de reconnaissance Lors de l’écriture des descriptions des comportements à reconnaître, il peut être intéressant de décomposer les comportements complexes pour les décrire en plusieurs étapes. Il y a alors deux possibilités pour imbriquer une chronique dans une autre. Soit la chronique est considérée comme un évènement ayant un instant de fin et un instant de début a priori disjoints, soit la chronique est réduite à son instant de reconnaissance. La construction par induction du langage des chroniques implique l’intégration naturelle du premier cas dans la syntaxe. En revanche, si l’on souhaite pouvoir exprimer le second cas, il faut rajouter une construction à apposer à une chronique pour se référer uniquement à son instant de reconnaissance. Pour ce faire, nous introduisons un opérateur, appelé « at » et noté « @ », qui correspond à la détection d’un évènement « abstrait d’une chronique », c’est-à-dire réduit à son instant de reconnaissance et donc nécessairement ponctuel. Donnons maintenant la syntaxe du langage muni des extensions décrites ci-dessus. 
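Au préalable, pour fixer les idées sur le comportement attendu du « !! », la détection du « dernier on_ground avant le premier above_ground » peut s'esquisser ainsi sur ce cas simple d'un unique changement d'état, indépendamment du processus de reconnaissance formalisé dans la suite du chapitre (esquisse illustrative) :

```python
from typing import List, Optional, Tuple

def changement_etat(flux: List[Tuple[str, float]],
                    avant: str, apres: str) -> Optional[Tuple[float, float]]:
    """Détecte « avant !! apres » sur un cas simple : le dernier évènement `avant`
    précédant le premier évènement `apres` (exemple purement illustratif)."""
    for i, (nom, t_apres) in enumerate(flux):
        if nom == apres:
            # premier évènement `apres` : chercher le dernier `avant` qui le précède
            precedents = [t for (n, t) in flux[:i] if n == avant]
            if precedents:
                return (precedents[-1], t_apres)
            return None
    return None

# Décollage : dernier on_ground avant le premier above_ground.
flux = [("on_ground", 1.0), ("on_ground", 2.0),
        ("above_ground", 3.0), ("above_ground", 4.0)]
assert changement_etat(flux, "on_ground", "above_ground") == (2.0, 3.0)
```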
Notons que la sémantique du langage est présentée Définition 16. Définition 13 (chroniques). Soit N, P, et V les ensembles introduits au début de la Section 2.1, p. 43. Soit S un ensemble de symboles de prédicats. L’ensemble des chroniques sur (N, P, V, S), noté X, est un ensemble de triplets (C, P, f), où : — C est une formule de chronique, comme définie dans la définition inductive de X qui suit ; — P ∈ S est un symbole de prédicat ; — f ∈ T(P, V) est une transformation d’attributs. X est défini inductivement avec deux notions de contextes, qui sont des fonctions de X dans P : — un contexte d’évaluation, noté Ce(·); — un contexte résultant, noté Cr(·). Pour tous C1 ∈ X et C2 ∈ X, on a : 49Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements [évènement simple] Si A ∈ N, alors (A, P, f) ∈ X, Ce(A, P, f) = {♦}, et Cr(A, P, f) = Ce(A, P, f); [séquence] Si Ce(C1) ∩ Ce(C2) = {♦}, alors (C1 C2, P, f) ∈ X, Ce(C1 C2, P, f) = Cr(C1)∪Cr(C2), et Cr(C1 C2, P, f) = Ce(C1 C2, P, f); [conjonction] Si Ce(C1) ∩ Ce(C2) = {♦}, alors (C1&C2, P, f) ∈ X, Ce(C1&C2, P, f) = Cr(C1) ∪ Cr(C2), Cr(C1&C2, P, f) = Ce(C1&C2, P, f); [disjonction] (C1 || C2, P, f) ∈ X, Ce(C1 || C2, P, f) = Cr(C1) ∩ Cr(C2), et Cr(C1 || C2, P, f) = Ce(C1 || C2, P, f); [absence] Si Ce(C1) ∩ Ce(C2) = {♦}, alors ((C1) − [C2], P, f) ∈ X, Ce((C1) − [C2], P, f) = Cr(C1)∪Cr(C2), et Cr((C1) − [C2], P, f) = Cr(C1); De même pour (C1)−]C2], (C1) − [C2[ 4 et (C1)−]C2[ ; [meets] Si Ce(C1) ∩ Ce(C2) = {♦}, alors (C1 meets C2, P, f) ∈ X, Ce(C1 meets C2, P, f) = Cr(C1) ∪ Cr(C2), et Cr(C1 meets C2, P, f) = Ce(C1 meets C2, P, f); [overlaps] Si Ce(C1) ∩ Ce(C2) = {♦}, alors (C1 overlaps C2, P, f) ∈ X, Ce(C1 overlaps C2, P, f) = Cr(C1)∪Cr(C2), et Cr(C1 overlaps C2, P, f) = Ce(C1 overlaps C2, P, f); [starts] Si Ce(C1) ∩ Ce(C2) = {♦}, alors (C1 starts C2, P, f) ∈ X, Ce(C1 starts C2, P, f) = Cr(C1) ∪ Cr(C2), et Cr(C1 starts C2, P, f) = Ce(C1 starts C2, P, f); [during] Si Ce(C1) ∩ Ce(C2) = {♦}, alors (C1 during C2, P, f) ∈ X, Ce(C1 during C2, P, f) = Cr(C1)∪Cr(C2), et Cr(C1 during C2, P, f = Ce(C1 during C2, P, f); [finishes] Si Ce(C1) ∩ Ce(C2) = {♦}, alors (C1 finishes C2, P, f) ∈ X, Ce(C1 finishes C2, P, f) = Cr(C1)∪Cr(C2), et Cr(C1 finishes C2, P, f) = Ce(C1 finishes C2, P, f); [equals] Si Ce(C1) ∩ Ce(C2) = {♦}, alors (C1 equals C2, P, f) ∈ X, Ce(C1 equals C2, P, f) = Cr(C1)∪Cr(C2), et Cr(C1 equals C2, P, f) = Ce(C1 equals C2, P, f); [lasts δ] Si δ ∈ R + ∗ , alors (C1 lasts δ, P, f) ∈ X, Ce(C1 lasts δ, P, f) = Cr(C1), et Cr(C1 lasts δ, P, f) = Ce(C1 lasts δ, P, f); [at least δ] Si δ ∈ R + ∗ , alors (C1 at least δ, P, f) ∈ X, Ce(C1 at least δ, P, f) = Cr(C1), et Cr(C1 at least δ, P, f) = Ce(C1 at least δ, P, f); [at most δ] Si δ ∈ R + ∗ , alors (C1 at most δ, P, f) ∈ X, Ce(C1 at most δ, P, f) = Cr(C1), et Cr(C1 at most δ, P, f) = Ce(C1 at most δ, P, f); [then δ] Si δ ∈ R + ∗ , alors (C1 then δ, P, f) ∈ X, Ce(C1 then δ, P, f) = Cr(C1), et Cr(C1 then δ, P, f) = Ce(C1 then δ, P, f); [nommage de propriété] Si x ∈ P \ {♦}, alors (C1→x, P, f) ∈ X, Ce(C1→x, P, f) = Cr(C1), Cr(C1→x, P, f) = {x, ♦} ; [cut] Si Ce(C1) ∩ Ce(C2) = {♦}, alors (C1!C2, P, f) ∈ X, Ce(C1!C2, P, f) = Cr(C1) ∪ Cr(C2), et Cr(C1!C2, P, f) = Ce(C1!C2, P, f); 4. C’est cette construction qui correspond à celle de l’absence dans [CCK11] présentée dans la Section 1.5.3 et notée (C1) − [C2]. 50CHAPITRE 2. 
CONSTRUCTION D’UN CADRE THÉORIQUE POUR LA RECONNAISSANCE DE COMPORTEMENTS : LE LANGAGE DES CHRONIQUES [changement d’état] Si Ce(C1) ∩ Ce(C2) = {♦}, alors (C1!!C2, P, f) ∈ X, Ce(C1!!C2, P, f) = Cr(C1) ∪ Cr(C2), et Cr(C1!!C2, P, f) = Ce(C1!!C2, P, f); [évènement de reconnaissance] (@C1, P, f) ∈ X, Ce(@C1, P, f) = Cr(C1), et Cr(@C1, P, f) = Cr(C1). Remarque 4. La grammaire du langage des chroniques, sans considération de contexte, s’exprime comme suit sous la forme de Backus-Naur, avec A ∈ N, δ ∈ R + ∗ et x ∈ P \ {♦} : C ::= A | C C | C&C | C || C | (C) − [C] | C meets C | C overlaps C | C starts C | C during C | C finishes C | C equals C | C lasts δ | C at least δ | C at most δ | C then δ | C→x | C!C | C!!C | @C Éclaircissements sur la double notion de contexte La contrainte récurrente Ce(C1) ∩ Ce(C2) = {♦} assure que tout nom de propriété ne peut être utilisé qu’une seule fois dans une chronique, ce qui est nécessaire pour pouvoir identifier correctement les propriétés. En effet, comme évoqué précédemment, un nom d’attribut se réfère à un évènement ou une chronique spécifique, et un même nom de propriété ne peut donc pas être donné à plusieurs structures au sein d’une même chronique. Quant à la construction des contextes, la définition générique intuitive évoquée précédemment est celle de la séquence et elle est partagée par la plupart des opérateurs. Le contexte d’évaluation est simplement l’union des contextes résultant des deux sous-chroniques, regroupant ainsi tous les noms de propriété mis en jeu dans l’ensemble de la chronique. Le contexte résultant est identique au contexte d’évaluation. Cependant, ceci ne peut pas être appliqué à l’ensemble des opérateurs, et c’est là qu’apparaît la nécessité de définir deux contextes. Présentons les trois exceptions. Le cas de la disjonction Tout d’abord, notons que la disjonction est la seule chronique construite à partir de deux sous-chroniques mais à laquelle n’est pas imposée la contrainte générique d’avoir l’intersection des contextes d’évaluation réduite au singleton {♦}. Au contraire, nous allons nous intéresser aux noms de propriété qui sont employés dans les deux branches de la disjonction. Cette particularité provient du fait que, dans une disjonction, seule l’une des deux sous-chroniques peut être reconnue. Pour qu’un nom d’attribut puisse avoir un sens dans une disjonction, c’est- à-dire pour qu’il soit toujours possible de lui attribuer une valeur en tout cas de figure, un nom d’attribut dans une disjonction doit se référer à un évènement dans chacune des deux branches. En effet, sinon, on ne peut pas assurer que, quelle que soit la branche reconnue, tout nom d’attribut se réfère effectivement à un évènement du flux ce qui est nécessaire pour évaluer le prédicat et la fonction de transformation. Considérons par exemple la chronique (A→x B→y) || (D→z A→x) qui est une disjonction reconnaissant soit la séquence de deux évènements A suivi de B soit la séquence de D suivi de A. Toute reconnaissance de cette chronique mettra en jeu un évènement de nom de propriété x, mais selon la branche de la disjonction reconnue, elle mettra en jeu un évènement de nom de propriété soit y soit z. Au niveau de la disjonction, on peut donc se référer à x pour lequel des valeurs seront toujours disponibles, mais on ne peut pas se référer à y ou z 51Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements comme cela dépend de la branche reconnue. 
Pour cette raison, le contexte d’évaluation associé à une disjonction est l’intersection des contextes de ses sous-chroniques. Le cas de l’absence Le cas de l’absence est similaire à celui de la disjonction. Nous souhaitons permettre l’expression de la contrainte suivante, sur les attributs des deux sous-chroniques d’une absence : « (C1)−[C2] vérifiant le prédicat P » correspond au comportement décrit par « C1 reconnue sans qu’il n’y ait aucune occurrence de C2 vérifiant le prédicat P durant la reconnaissance de C1, où P peut porter sur les attributs à la fois de C1 et de C2 ». Par exemple, le comportement suivant peut alors être décrit : « Un avion d’ID n a modifié sa fréquence radio à f 6= 118.075 (qui est en fait la fréquence recherchée) à l’instant t au plus 5 min après le décollage, et, après t mais avant cette échéance de 5 min, la fréquence radio n’a pas été corrigée par l’avion n ». Pour reconnaître ce comportement, les attributs des deux sous-chroniques C1 et C2 doivent être comparés pour identifier le même avion n et pour avoir accès à l’instant t. Notons que cet exemple sera développé dans l’application présentée dans la Section 5.3. Pour permettre au prédicat d’avoir accès aux attributs des deux sous-chroniques, le contexte d’évaluation de la chronique doit être l’union des deux contextes résultants des deux sous-chroniques, comme c’est le cas généralement. Cependant, la chronique (C1) − [C2] est reconnue s’il n’y a aucune reconnaissance de C2 pendant la reconnaissance de C1, donc les deux contextes ne doivent pas être passés dans l’induction à une chronique de plus haut niveau. En effet, comme dans le cas de la disjonction, il n’y a pas nécessairement de valeurs pour les attributs de C2 car il n’y a pas nécessairement de reconnaissance de C2. C’est pour cela que l’on introduit la notion de contexte résultant, qui permet de conserver dans le contexte d’évaluation l’union des deux contextes des sous-chroniques, mais de ne passer dans l’induction uniquement le contexte de C1 en réduisant le contexte résultant à celui-ci. Le cas du nommage de propriété La dernière exception à la définition générique des contextes est le cas du nommage de propriété. Le rôle de cette construction de nommer les attributs de la chronique. Afin que ce nom puisse être utilisé par la suite, ce nom doit donc être ajouté au contexte résultant de la chronique. Par ailleurs, il a été décidé que, lorsque les propriétés d’une chronique sont ainsi nommées, on ne conserve que les propriétés créées par la fonction de transformation f (qui sont stockées sous le nom anonyme ♦ avant d’être éventuellement nommées) et les anciennes propriétés sont « oubliées ». Ceci se traduit par le fait que le contexte résultant est donc réduit à l’ensemble {♦, x} où x est le nom de propriété choisi. Notons que si l’on souhaite conserver l’ensemble des propriétés présentes à l’intérieur de la chronique nommée, cela est possible à travers de la fonction f. Le choix d’oublier par défaut ces propriétés a été fait car cela permet d’abstraire la sous-chronique en une sorte d’évènement de plus haut niveau muni d’attributs plus complexes tout en se défaisant d’informations superflues. 52CHAPITRE 2. CONSTRUCTION D’UN CADRE THÉORIQUE POUR LA RECONNAISSANCE DE COMPORTEMENTS : LE LANGAGE DES CHRONIQUES 2.3 Définition de la sémantique du langage à travers la notion de reconnaissance de chronique Dans la section précédente, nous avons établi la syntaxe de notre langage des chroniques, en introduisant de nombreuses nouvelles constructions. 
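À titre purement illustratif, la grammaire de la Remarque 4 peut se transcrire sous forme de types algébriques, ici des dataclasses Python aux noms hypothétiques ; le prédicat P et la fonction de transformation f associés à chaque chronique sont omis pour alléger :

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Evenement:
    """Évènement simple A ∈ N."""
    nom: str

@dataclass
class Binaire:
    """Construction binaire C1 op C2 : séquence, "&", "||", absences,
    meets, overlaps, starts, during, finishes, equals, "!" (cut), "!!"."""
    op: str
    gauche: "Chronique"
    droite: "Chronique"

@dataclass
class Duree:
    """Contrainte de durée : lasts / at least / at most / then, avec δ > 0."""
    op: str
    sous_chronique: "Chronique"
    delta: float

@dataclass
class Nommage:
    """Nommage de propriété C → x, avec x ∈ P et x ≠ ♦."""
    sous_chronique: "Chronique"
    nom: str

@dataclass
class At:
    """@C : chronique réduite à son instant de reconnaissance."""
    sous_chronique: "Chronique"

Chronique = Union[Evenement, Binaire, Duree, Nommage, At]

# Exemple : la chronique (A→x B→y) || (D→z A→x) discutée ci-dessus.
exemple = Binaire("||",
                  Binaire(" ", Nommage(Evenement("A"), "x"), Nommage(Evenement("B"), "y")),
                  Binaire(" ", Nommage(Evenement("D"), "z"), Nommage(Evenement("A"), "x")))
```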
Dans cette section, nous allons maintenant définir la sémantique de ce langage en définissant la notion de reconnaissance d’une chronique. Dans un premier temps (Section 2.3.1) nous étudierons le modèle de représentation des reconnaissances dans le formalisme. Dans [CCK11], une reconnaissance est représentée par un ensemble. Nous montrerons qu’une représentation arborescente est plus appropriée. Nous définirons dans un second temps (Section 2.3.2) la sémantique du langage des chroniques fondée sur ce formalisme de reconnaissance arborescente. 2.3.1 Passage à une représentation arborescente des reconnaissances Il s’agit d’étudier ici le formalisme employé pour représenter une reconnaissance de chronique. Rappelons que dans [CCK11], une reconnaissance d’une chronique est un ensemble contenant les indices d’occurrence des évènements ayant donné lieu à la reconnaissance (cf. Définition 1.5.3). Nous souhaitons mettre en avant un problème de multiplicité des reconnaissances lié à la structure ensembliste utilisée. Commençons par étudier l’exemple suivant qui illustre ce problème. Considérons la chronique C = (A B)&A sur le flux ϕ = ((a, 1),(a, 2),(b, 3)). Avec le formalisme de [CCK11] dans lequel une reconnaissance est un ensemble d’évènements, nous obtenons trois reconnaissances de la chronique C sur le flux ϕ : RC (ϕ) = {{1, 2, 3}, {1, 3}, {2, 3}} Remarquons que, dans la reconnaissance {1, 2, 3}, on ne peut pas identifier quel évènement a participe à la reconnaissance de la séquence A B et quel évènement a correspond à l’évènement simple A. Ceci est dû à l’impossibilité de distinguer les ensembles {1, 3, 2} et {2, 3, 1}, et donc à distinguer les deux a. Or, comme introduit dans la Section 1.1, l’historisation des évènements, à savoir l’identification exacte de quel évènement a participé à quel morceau de la reconnaissance, est une problématique omniprésente. D’autre part, l’introduction d’attributs rend primordial l’appariement exact des évènements à la chronique pour que les valeurs correctes des propriétés soient considérées. Nous voudrions donc pouvoir différencier les ensembles {1, 3, 2} et {2, 3, 1}, ce qui donnerait donc quatre reconnaissances pour la chronique C sur le flux ϕ. Ce problème a donc également une incidence sur la combinatoire du système. Pour résoudre cette question, nous proposons de manipuler des reconnaissances sous forme d’arbres binaires plutôt que sous forme de simples ensembles. Davantage d’informations peuvent ainsi être conservées : l’arbre d’une reconnaissance est calqué sur la structure arborescente de la chronique associée et l’on a donc la possibilité de connaître la correspondance exacte entre les noms d’évènements simples de la chronique et les évènements du flux prenant part à la reconnaissance. Pour l’exemple précédent, cela permet de différencier les appariements des a du flux et ainsi 53Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements obtenir quatre reconnaissances. Avec la représentation d’arbre binaire définie par la suite (Définition 14), l’ensemble de reconnaissances est alors : RC (ϕ) = {hh1, 3i, 2i,hh2, 3i, 1i,hh1, 3i, 1i,hh2, 3i, 2i} Par ailleurs, pour favoriser la lecture, nous adoptons un autre système de représentation des évènements, en indiquant les évènements et leur date d’occurrence en lieu et place de leur indice d’occurrence dans le flux, comme annoncé au début de la Section 2.1.1. 
Nous serons également amenés à ajouter un paramètre d à l’ensemble de reconnaissance, indiquant l’instant jusqu’auquel les évènements du flux ont été pris en compte. Ce paramètre est nécessaire pour l’écriture des ensemble de reconnaissances de certains opérateurs exprimant des contraintes temporelles. Avec ces nouvelles notations, l’ensemble de reconnaissances de l’exemple précédent sera noté : RC (ϕ, 3) = {hh(a, 1),(b, 3)i,(a, 2)i,hh(a, 2),(b, 3)i,(a, 1)i, hh(a, 1),(b, 3)i,(a, 1)i,hh(a, 2),(b, 3)i,(a, 2)i} Définissons maintenant formellement la représentation arborescente utilisée. Une reconnaissance r est l’arbre binaire des évènements (e, t) ayant donné lieu à la reconnaissance. La structure de l’arbre de reconnaissance reflète celle de la chronique associée, les feuilles de celui-ci correspondant aux évènements donnant lieu à la reconnaissance. Les informations qui sont pertinentes sont la structure de l’arbre ainsi que les étiquettes des feuilles 5 . Pour identifier sans ambiguïté les appariements d’évènements avec les sous-chroniques de la chronique étudiée, il n’est pas nécessaire de nommer d’autres nœuds que les feuilles. Nous utilisons donc le formalisme suivant. Définition 14 (arbres de reconnaissance). L’ensemble A(E) des arbres de reconnaissance sur l’ensemble d’évènements E se définit par induction comme suit, où X est un ensemble d’attributs : — si (e, t) ∈ E, alors ((e, t), X) ∈ A(E); — si r1 ∈ A(E) et r2 ∈ A(E), alors (hr1, r2i, X) ∈ A(E); — si r ∈ A(E), alors (hri, X) ∈ A(E), (hr, ⊥i, X) ∈ A(E) et (h⊥, ri, X) ∈ A(E). Nous définissons aussi l’ensemble F(r) des feuilles d’un arbre de reconnaissance r par induction : — si r = ((e, t), X) avec (e, t) ∈ E, alors F(r) = {(e, t)} ; — si r = (hr1, r2i, X) avec r1 ∈ A(E) et r2 ∈ A(E), alors F(r) = F(r1) ∪ F(r2); — si r ∈ {(hr1, ⊥i, X),(h⊥, r1i, X),(hr1i, X)} avec r1 ∈ A(E), alors F(r) = F(r1). Notons que, dans la définition précédente, nous avons introduit une notation pour distinguer deux types de paires : les feuilles qui sont des paires composées d’un nom d’évènement et d’une date, notées entre parenthèses (), et les ramifications des arbres, notées entre chevrons hi. Avant de définir la sémantique de notre langage à travers la notion de reconnaissance, nous définissons au préalable deux fonctions Tmin et Tmax retournant respectivement, en fonction d’une reconnaissance r, le premier et le dernier instants auxquels a lieu un évènement participant à r : 5. L’ensemble des feuilles d’un arbre de reconnaissance correspond à la reconnaissance dans le formalisme de [CCK11]. 54CHAPITRE 2. CONSTRUCTION D’UN CADRE THÉORIQUE POUR LA RECONNAISSANCE DE COMPORTEMENTS : LE LANGAGE DES CHRONIQUES Définition 15 (temps min et max). Tmin : A(E) → R, r 7→ min{t : (e, t) ∈ F(r)} Tmax : A(E) → R, r 7→ max{t : (e, t) ∈ F(r)} Ainsi, pour toute reconnaissance r, l’intervalle [Tmin(r), Tmax(r)] correspond à l’intervalle de temps nécessaire et suffisant pour établir la reconnaissance r. Ces deux fonctions permettront de poser des contraintes temporelles entre les intervalles d’occurrence de chroniques. 2.3.2 Formalisation de la notion de reconnaissance de chronique Nous pouvons maintenant poser la sémantique de notre langage des chroniques en définissant la notion de reconnaissance d’une chronique tout en intégrant la nouvelle représentation arborescente des reconnaissances. 
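Pour illustrer les Définitions 14 et 15, voici une esquisse indicative des arbres de reconnaissance et des fonctions Tmin et Tmax, les ensembles d'attributs X étant omis :

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

Feuille = Tuple[str, float]   # un évènement (e, t)

@dataclass(frozen=True)
class Noeud:
    """Ramification <r1, r2> d'un arbre de reconnaissance."""
    gauche: "Arbre"
    droite: "Arbre"

Arbre = Union[Feuille, Noeud]

def feuilles(r: Arbre) -> List[Feuille]:
    """F(r) : les évènements (feuilles) ayant donné lieu à la reconnaissance r."""
    if isinstance(r, Noeud):
        return feuilles(r.gauche) + feuilles(r.droite)
    return [r]

def t_min(r: Arbre) -> float:
    return min(t for (_, t) in feuilles(r))

def t_max(r: Arbre) -> float:
    return max(t for (_, t) in feuilles(r))

# Deux des quatre reconnaissances de (A B)&A sur ((a,1),(a,2),(b,3)) évoquées ci-dessus :
# chaque arbre distingue le a apparié à la séquence A B du a apparié à l'évènement A.
r1 = Noeud(Noeud(("a", 1.0), ("b", 3.0)), ("a", 2.0))
r2 = Noeud(Noeud(("a", 2.0), ("b", 3.0)), ("a", 1.0))
assert t_min(r1) == 1.0 and t_max(r1) == 3.0
```

Les arbres r1 et r2 distinguent bien les deux appariements possibles des évènements a, ce que la représentation ensembliste de [CCK11] ne permettait pas.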
La définition de la sémantique est délicate pour deux raisons principales : — d’une part, le processus de reconnaissance doit permettre d’évaluer des prédicats sur des attributs d’évènement provenant de différentes parties de la chronique ; — d’autre part, les chroniques doivent être reconnues en ligne. Avec ces deux contraintes à l’esprit, nous définissons pour chaque chronique un ensemble de reconnaissances qui correspond à toutes les reconnaissances de la chronique représentées sous forme d’arbres et munies de leurs attributs de reconnaissance associés. Comme l’on souhaite que le processus de reconnaissance puisse être effectué au fur et à mesure tout en étant exhaustif, la construction des ensembles de reconnaissances est progressive, ce qui s’exprime par une définition inductive dépendant de l’instant d jusqu’auquel les évènements du flux sont considérés. La constitution de ces ensembles ne nécessite donc pas de connaître par avance l’intégralité du flux d’évènements. Nous définissons donc, par induction pour chaque chronique, l’ensemble de ses reconnaissances sur un flux d’évènements et jusqu’à une date donnés. Les évènements de cet ensemble sont des couples (r, X) où r est un arbre de reconnaissance et X est l’ensemble d’attributs de reconnaissance associé. Chaque définition peut se décomposer en trois parties : — l’expression des contraintes temporelles liées à l’opérateur; — la vérification du prédicat; — la définition de l’ensemble d’attributs de reconnaissance. Définition 16 (ensembles de reconnaissances). Soit C ∈ X une chronique. Considérons, pour tout prédicat P, un (P × V)-modèle M dans lequel il existe une interprétation Pˆ de P. L’ensemble des reconnaissances de C sur le flux d’évènements ϕ jusqu’à la date d est noté RC (ϕ, d) et est un sous-ensemble de A(E). L’ensemble d’attributs associé à une reconnaissance r est noté Xr. Nous utilisons les notations suivantes : X♦ r = {(♦, v) ∈ Xr : v ∈ Ae(P, V)}, qui est un singleton, et X∗ r = Xr \ X♦ r . Les ensembles de reconnaissances et les ensembles d’attributs associés aux reconnaissances sont définis par induction comme suit : — Un évènement simple A vérifiant le prédicat P est reconnu si un évènement de nom A a lieu dans le flux avant l’instant d et si ses attributs vérifient le prédicat P. Les attributs de reconnaissance sont réduits à une unique propriété anonyme contenant les éventuels attributs créés par la fonction f et les attributs de l’évènement du flux. 55Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements Si A∈N, alors : R(A,P,f)(ϕ, d) = {((e, t), X(e,t)) : e = A ∧ ∃i ϕ(i) = (e, t) ∧ t ≤ d ∧ Pˆ[D◦α((e, t))] ∧ X(e,t)={ (♦, X) : ∃Xe ∈ Ae(P, V) ( X = Xe∪α((e, t)) ∧ f◦D◦α((e, t)) = {(♦, Xe)} ) } } En d’autres termes, les reconnaissances d’un évènement simple suivent les règles suivantes : — le nom de l’évènement est correct ; — la date de l’évènement est inférieure à l’horizon (t ≤ d) ; — le prédicat, s’il existe, est vérifié sur les attributs de l’évènement (Pˆ[D◦α((e, t))]). Et en supplément, l’ensemble d’attributs associé à la reconnaissance est constitué de la manière suivante. Il contient : — les attributs de l’évènement dans le flux (α((e, t))) ; — les attributs créés par la fonction f (f◦D◦α((e, t))). — Une disjonction C1 || C2 vérifiant un prédicat P est reconnue lorsque l’une des deux souschroniques est reconnue et vérifie P. La structure de l’arbre indique quelle branche de la chronique a été reconnue. 
Les attributs de reconnaissance sont réduits aux attributs dont les noms sont utilisés à la fois dans C1 et C2, comme détaillé dans la Section 2.2 (p. 51), avec les attributs anonymes créés par la fonction f. On rappelle que la fonction ρ permet d'accéder au nom d'une propriété.
R(C1||C2,P,f)(ϕ, d) = {(hr, ⊥i, Xhr,⊥i) : r ∈ RC1(ϕ, d) ∧ Pˆ[X∗r] ∧ Xhr,⊥i = {x ∈ X∗r : ρ(x) ∈ Ce(C1 || C2)} ∪ D◦f[X∗r]} ∪ {(h⊥, ri, Xh⊥,ri) : r ∈ RC2(ϕ, d) ∧ Pˆ[X∗r] ∧ Xh⊥,ri = {x ∈ X∗r : ρ(x) ∈ Ce(C1 || C2)} ∪ D◦f[X∗r]}
— Une séquence C1 C2 vérifiant P est reconnue lorsque C2 est reconnue après avoir reconnu C1 et que les deux reconnaissances vérifient P. Les attributs de reconnaissance sont ceux des reconnaissances de C1 et de C2 avec les attributs anonymes créés par la fonction f.
R(C1 C2,P,f)(ϕ, d) = {(hr1, r2i, Xhr1,r2i) : r1 ∈ RC1(ϕ, d) ∧ r2 ∈ RC2(ϕ, d) ∧ Tmax(r1) < Tmin(r2) ∧ Pˆ[X∗r1 ∪ X∗r2] ∧ Xhr1,r2i = X∗r1 ∪ X∗r2 ∪ D◦f[X∗r1 ∪ X∗r2]}
— Une conjonction C1&C2 vérifiant P est reconnue lorsqu'à la fois C1 et C2 sont reconnues et vérifient P. Les attributs de reconnaissance sont construits comme pour la séquence.
R(C1&C2,P,f)(ϕ, d) = {(hr1, r2i, Xhr1,r2i) : r1 ∈ RC1(ϕ, d) ∧ r2 ∈ RC2(ϕ, d) ∧ Pˆ[X∗r1 ∪ X∗r2] ∧ Xhr1,r2i = X∗r1 ∪ X∗r2 ∪ D◦f[X∗r1 ∪ X∗r2]}
— Une absence (C1) − [C2] vérifiant un prédicat P est reconnue lorsque C1 est reconnue sans aucune occurrence de C2 vérifiant P durant la reconnaissance de C1. Attention, dans le cas d'une absence, la signification du prédicat est donc particulière. Les attributs de reconnaissance sont alors réduits à ceux de la reconnaissance de C1, comme détaillé dans la Section 2.2, complétés des éventuels attributs anonymes créés par la fonction f.
R((C1)−[C2],P,f)(ϕ, d) = {(hr1i, Xhr1i) : r1 ∈ RC1(ϕ, d) ∧ ∀r2 ∈ RC2(ϕ, d) ( Tmin(r1) > Tmin(r2) ∨ Tmax(r1) < Tmax(r2) ) ∧ Pˆ[X∗r1] ∧ Xhr1i = X∗r1 ∪ D◦f[X∗r1]}
Pour les trois autres constructions d'absence, seule la contrainte temporelle change :
— pour (C1) − [C2[ 6, elle devient Tmin(r1) > Tmin(r2) ∨ Tmax(r1) ≤ Tmax(r2) ;
— pour (C1)−]C2], elle devient Tmin(r1) ≥ Tmin(r2) ∨ Tmax(r1) < Tmax(r2) ;
— pour (C1)−]C2[, elle devient Tmin(r1) ≥ Tmin(r2) ∨ Tmax(r1) ≤ Tmax(r2).
6. C'est cette construction qui correspond à celle de l'absence dans [CCK11] présentée dans la Section 1.5.3 et notée (C1) − [C2].
— Une chronique C1 meets C2 est reconnue lorsque la reconnaissance de C2 débute à l'instant exact où se termine celle de C1, et P est vérifié. R(C1 meets C2,P,f)(ϕ, d) est défini comme la séquence mais avec la contrainte temporelle Tmax(r1) = Tmin(r2).
— Une chronique C1 overlaps C2 est reconnue lorsque la reconnaissance de C2 commence pendant celle de C1 et se termine strictement après, et P est vérifié. R(C1 overlaps C2,P,f)(ϕ, d) est défini comme la séquence mais avec la contrainte temporelle Tmin(r1) < Tmin(r2) ∧ Tmin(r2) < Tmax(r1) ∧ Tmax(r1) < Tmax(r2).
— Une chronique C1 starts C2 est reconnue lorsque les deux reconnaissances débutent au même instant et que celle de C1 se termine strictement avant celle de C2, et P est vérifié. R(C1 starts C2,P,f)(ϕ, d) est défini comme la séquence mais avec la contrainte temporelle Tmin(r1) = Tmin(r2) ∧ Tmax(r1) < Tmax(r2).
— Une chronique C1 during C2 est reconnue lorsque la reconnaissance de C1 a lieu strictement pendant celle de C2, et P est vérifié. R(C1 during C2,P,f)(ϕ, d) est défini comme la séquence mais avec la contrainte temporelle Tmin(r1) > Tmin(r2) ∧ Tmax(r1) < Tmax(r2).
— Une chronique C1 finishes C2 est reconnue lorsque les deux reconnaissances se terminent au même instant et que celle de C1 débute strictement après celle de C2, et P est vérifié. R(C1 finishes C2,P,f)(ϕ, d) est défini comme la séquence mais avec la contrainte temporelle Tmin(r1) > Tmin(r2) ∧ Tmax(r1) = Tmax(r2).
— Une chronique C1 equals C2 est reconnue lorsqu'à la fois C1 et C2 sont reconnues sur exactement le même intervalle de temps, et P est vérifié. R(C1 equals C2,P,f)(ϕ, d) est défini comme la séquence mais avec la contrainte temporelle Tmin(r1) = Tmin(r2) ∧ Tmax(r1) = Tmax(r2).
— Une chronique C1 lasts δ est reconnue lorsque C1 est reconnue, P est vérifié, et la taille de l'intervalle de reconnaissance est exactement δ. R(C1 lasts δ,P,f)(ϕ, d) = {(hri, Xhri) : r ∈ RC1(ϕ, d) ∧ Tmax(r) − Tmin(r) = δ ∧ Pˆ[X∗r] ∧ Xhri = X∗r ∪ D◦f[X∗r]}
— Une chronique C1 at most δ est reconnue lorsque C1 est reconnue, P est vérifié, et la taille de l'intervalle de reconnaissance est au plus δ. R(C1 at most δ,P,f)(ϕ, d) est défini comme la chronique « lasts » mais avec la contrainte temporelle Tmax(r) − Tmin(r) < δ.
— Une chronique C1 at least δ est reconnue lorsque C1 est reconnue, P est vérifié, et la taille de l'intervalle de reconnaissance est au moins δ. R(C1 at least δ,P,f)(ϕ, d) est défini comme la chronique « lasts » mais avec la contrainte temporelle Tmax(r) − Tmin(r) > δ.
— Une chronique C1 then δ est reconnue lorsque exactement δ unités de temps s'écoulent après une reconnaissance de C1.
L’arbre de reconnaissance conserve l’évènement d’instant temporel pur (τ, t) correspondant à l’instant de la reconnaissance. Ceci permet, entre autre, d’avoir une définition correcte de Tmax. C1 δ R(C1 then δ,P,f)(ϕ, d)={(hr,(τ, t)i, Xhr,(τ,t)i) : t ≤ d ∧ r ∈ RC1 (ϕ, t) ∧t = Tmax(r)+δ ∧ Pˆ[X∗ r ] ∧ Xhr,(τ,t)i = X∗ r ∪ D◦f[X∗ r ]} — Une chronique de nommage de propriété C1→x est reconnue lorsque C1 est reconnue. Les attributs de reconnaissance sont les attributs anonymes de la reconnaissance de C1 mais renommés x, avec les nouveaux attributs anonymes créés par la fonction f. R(C1→x,P,f)(ϕ, d) = {(hri, Xhri) : r∈RC1 (ϕ, d) ∧ Pˆ[X∗ r ] ∧ Xhri = R(X♦ r , x) ∪ D◦f[X∗ r ]} — Une chronique de cut C1!C2 est reconnue lorsque C1 et C2 sont reconnues successivement et lorsque la reconnaissance de C2 est la première après celle de C1 En d’autres termes, il n’y a pas d’autre reconnaissance de C2 entre les deux reconnaissances sélectionnées. R(C1!C2,P,f)(ϕ, d)={(hr1, r2i, Xhr1,r2i) : r1∈RC1 (ϕ, d) ∧ r2∈RC2 (ϕ, d) ∧ Tmax(r1) Tmin(r2) ∨ Tmax(r1) ≤ Tmax(r2) )}. Soit r1 ∈ RC (ϕ|I , d). Alors r1 ∈ RC1 (ϕ|I , d) et ∀r2 ∈ RC2 (ϕ|I , d) ( Tmin(r1) > Tmin(r2) ∨ Tmax(r1) ≤ Tmax(r2) ). Par hypothèse d’induction, RC1 (ϕ|I , d) ⊆ RC1 (ϕ|J , d), et donc r1 ∈ RC1 (ϕ|J , d). Montrons de plus par l’absurde que ∀r2 ∈ RC2 (ϕ|J , d) ( Tmin(r1) > Tmin(r2) ∨ Tmax(r1) ≤ Tmax(r2) ). Soit r2 ∈ RC2 (ϕ|J , d) tel que Tmin(r1) ≤ Tmin(r2) ∧ Tmax(r1) > Tmax(r2). En particulier, Tmax(r2) < Tmax(r1) ≤ max I et r2 ∈ RC2 (ϕ|J , d), donc, comme I est un segment initial de J, r2 ∈ RC2 (ϕ|I , d), donc ∃r2 ∈ RC2 (ϕ|I , d) ( Tmin(r1) ≤ Tmin(r2) ∧ Tmax(r1) > Tmax(r2) ). Absurde. Donc r1 ∈ RC (ϕ|J , d), et on a donc bien RC (ϕ|I , d) ⊆ RC (ϕ|J , d). — La démonstration pour les autres opérateurs est analogue à celle de la séquence. Remarque 10. En particulier, pour tout k ∈ N et tout d ∈ R, RC ((ui)i∈J1,kJ , d) ⊆ RC ((ui)i∈J1,k+1J , d) Remarque 11. En revanche, la propriété « pour tout I inclus dans le domaine de ϕ, RC (ϕ|I , d) ⊆ RC (ϕ, d) » n’est pas vérifiée pour toute chronique dans laquelle est utilisée l’opérateur d’absence. Le problème se pose si I n’est pas un segment initial du domaine de ϕ. Par exemple, prenons C = (A D)−[B[ et posons ϕ = ((a, 1),(b, 2),(d, 3)) avec I = {1, 3}. Alors, RC (ϕ|I , d) = {h(a, 1),(d, 3)i} mais RC (ϕ, d) = { }, donc RC (ϕ|I , d) * RC (ϕ, d). Il en est de même pour la propriété plus générale « pour tout I ⊆ J, RC (ϕ|I , d) ⊆ RC (ϕ|J , d) » s’il n’y a pas de condition supplémentaire sur I et J. 2.4.3 Associativité, commutativité, distributivité Les démonstrations des propriétés suivantes se trouvent dans l’Annexe A. Propriété 3 (associativité). La disjonction, la conjonction, la séquence, meets et equals sont associatifs. Remarque 12. 1. « overlaps » n’est pas associatif. En effet, considérons le flux ϕ = ((a, 1),(f, 2),(d, 3),(b, 4),(e, 5),(g, 6)), et les chroniques C1 = A B, C2 = D E et C3 = F G. On a alors FC1 overlaps C2 (ϕ, 6) = {{1, 3, 4, 5}} et FC2 overlaps C3 (ϕ, 6) = { }, Donc F(C1 overlaps C2) overlaps C3 (ϕ, 6) = {{1, 2, 3, 4, 5, 6}}, mais FC1 overlaps (C2 overlaps C3)(ϕ, 6) = { }. 2. « starts » n’est pas associatif. En effet, considérons le flux ϕ = ((a, 1),(b, 2),(d, 3),(e, 4)), et les chroniques C1 = A D, C2 = A B et C3 = A E. 62CHAPITRE 2. 
On a alors FC1 starts C2 (ϕ, 4) = { }, donc F(C1 starts C2) starts C3 (ϕ, 4) = { } ; mais FC2 starts C3 (ϕ, 4) = {{1, 2, 4}}, donc FC1 starts (C2 starts C3)(ϕ, 4) = {{1, 2, 3, 4}}.
3. « during » n’est pas associatif. En effet, considérons le flux ϕ = ((a, 1), (b, 2), (d, 3), (e, 4), (f, 5), (g, 6)), et les chroniques C1 = B E, C2 = D F et C3 = A G. On a alors FC1 during C2 (ϕ, 6) = { }, donc F(C1 during C2) during C3 (ϕ, 6) = { } ; mais FC2 during C3 (ϕ, 6) = {{1, 3, 5, 6}}, donc FC1 during (C2 during C3)(ϕ, 6) = {{1, 2, 3, 4, 5, 6}}.
4. « finishes » n’est pas associatif. En effet, considérons le flux ϕ = ((a, 1), (b, 2), (d, 3), (e, 4)), et les chroniques C1 = B E, C2 = D E et C3 = A E. On a alors FC1 finishes C2 (ϕ, 4) = { }, donc F(C1 finishes C2) finishes C3 (ϕ, 4) = { } ; mais FC2 finishes C3 (ϕ, 4) = {{1, 3, 4}}, donc FC1 finishes (C2 finishes C3)(ϕ, 4) = {{1, 2, 3, 4}}.
5. Le changement d’état n’est pas associatif. En effet, considérons le flux ϕ = ((a, 1), (b, 2), (b, 3), (d, 4)), et les chroniques C1 = A!!(B!!D) et C2 = (A!!B)!!D. On a alors FC1 (ϕ, 4) = {{1, 3, 4}} mais FC2 (ϕ, 4) = {{1, 2, 4}}.
6. Le cut n’est pas associatif, puisque le changement d’état est un cut particulier.
Propriété 4 (commutativité). La disjonction, la conjonction et equals sont commutatifs.
Remarque 13.
1. La séquence n’est pas commutative. En effet, considérons le flux ϕ = ((a, 1), (b, 2)), et les chroniques A B et B A. On a alors FA B(ϕ, 2) = {{1, 2}} mais FB A(ϕ, 2) = { }.
2. meets, overlaps, starts, during, finishes, le cut et le changement d’état ne sont pas commutatifs. En effet, comme pour la séquence, les propriétés temporelles qui caractérisent ces opérateurs sont exprimées par des inégalités et ne sont donc pas commutatives.
Propriété 5 (distributivité). Tous les opérateurs sont distributifs sur la disjonction.
Remarque 14. La distributivité de tout opérateur sur un opérateur autre que la disjonction n’est pas vérifiée. En effet, considérons deux opérateurs ~ et }. Si ~ est distributif sur }, alors, en particulier, l’égalité suivante est vérifiée, où A, B et D sont des évènements simples : A ~ (B } D) ≡ (A ~ B) } (A ~ D). Une reconnaissance de la chronique du membre de gauche ne peut correspondre qu’à un unique évènement a. En revanche, si } n’est pas une disjonction, deux évènements a distincts peuvent participer à une même reconnaissance de la chronique du membre de droite. L’équivalence n’est donc pas vérifiée si } n’est pas une disjonction : aucun opérateur n’est ainsi distributif sur un opérateur autre que la disjonction. Considérons par exemple le cas de la distributivité de la séquence sur la conjonction, avec le flux ϕ = ((a, 1), (b, 2), (a, 3), (d, 4)), et les chroniques A (B&D) et (A B)&(A D). On a alors {1, 2, 3, 4} ∈ F(A B)&(A D)(ϕ, 4) mais {1, 2, 3, 4} ∉ FA (B&D)(ϕ, 4).
Pour les opérateurs liés aux durées, et qui n’ont donc pas de sens sur des évènements simples, le même raisonnement s’applique sur les évènements simples qui constituent chaque sous-chronique. Par exemple, pour le cas de la distributivité de la séquence sur starts, considérons le flux ϕ = ((a, 1), (b, 2), (d, 3), (e, 4), (b, 5), (f, 6), (g, 7)), et les chroniques (A B) ((D E) starts (F G)) et (A B D E) starts (A B F G).
Alors : {1, 2, 3, 4, 5, 6, 7} ∈ F(A B D E) starts (A B F G)(ϕ, 7) mais : {1, 2, 3, 4, 5, 6, 7} ∈/ F(A B) ((D E) starts (F G))(ϕ, 7) 2.5 Gestion du temps continu à l’aide d’une fonction Lookahead Cette section s’attache à la gestion du temps dans la mise en œuvre du processus de reconnaissance. Des opérations de contraintes temporelles ont été ajoutées au langage et la syntaxe comme la sémantique ont été adaptées pour considérer un modèle de temps continu. Dans une optique d’implémentation, la considération d’un temps continu soulève des problèmes et rend nécessaire la définition d’une fonction « Look-ahead » indiquant un prochain instant TC (ϕ, d) jusqu’auquel le système n’a pas besoin d’être réexaminé car on a l’assurance qu’aucune reconnaissance ne va se produire jusqu’à cet instant. En effet, les systèmes considérés sont asynchrones, et cette fonction fournit le prochain instant où les chroniques peuvent évoluer. En indiquant le moment où observer le système, cette fonction permet de ne pas avoir à le surveiller constamment pour une reconnaissance éventuelle, ce qui serait impossible du fait du modèle de temps continu (et ce qui serait de toute façon trop exigeant même si le modèle de temps pouvait être discrétisé). La fonction « Look-ahead » est inférée de la sémantique du langage présentée dans la Définition 16 et est définie comme suit : Définition 19 (fonction « Look-ahead »). Soit d ∈ R, C ∈ X et ϕ un flux d’évènements. Nous définissons TC (ϕ, d) par induction sur la chronique C : — si A ∈ N et C = (A, P, f), alors TC (ϕ, d) = min{t : ∃i ∈ N ∃e ∈ N ϕ(i) = (e, t) ∧ t > d} ; — pour tout ~ ∈ {||, &, “ ”,( ) − [ ],( ) − [ [,( )−] ],( )−] [, meets, overlaps, starts, during, finishes, equals, !, !!}, si C = (C1 ~ C2, P, f) avec C1 ∈ X et C2 ∈ X, alors TC (ϕ, d) = min{TC1 (ϕ, d), TC2 (ϕ, d)} ; — si C = (C1 lasts δ, P, f), alors TC (ϕ, d) = TC1 (ϕ, d); — si C = (C1 at most δ, P, f), alors TC (ϕ, d) = TC1 (ϕ, d); — si C = (C1 at least δ, P, f), alors TC (ϕ, d) = TC1 (ϕ, d); — si C = (C1 then δ, P, f), alors nous définissons tout d’abord l’instant τC1,δ(ϕ, d) correspondant au plus petit instant t + δ tel qu’il y a eu une nouvelle reconnaissance de C1 dans le flux ϕ à l’instant t et tel que t ≤ d < t + δ, comme suit : τC1,δ(ϕ, d) = min{Tmax(r1) + δ : r1 ∈ RC1 (ϕ, d) ∧ Tmax(r1) + δ > d} Alors TC (ϕ, d) = min{τC1,δ(ϕ, d), TC1 (ϕ, d)} ; 64CHAPITRE 2. CONSTRUCTION D’UN CADRE THÉORIQUE POUR LA RECONNAISSANCE DE COMPORTEMENTS : LE LANGAGE DES CHRONIQUES — si C = (C1→x, P, f), alors TC (ϕ, d) = TC1 (ϕ, d); — si C = (@C1, P, f), alors TC (ϕ, d) = TC1 (ϕ, d). Remarque 15. Notons que l’opérateur « then » se distingue des autres opérateurs du fait que la reconnaissance de la chronique découle du temps qui passe et non de l’occurrence d’un évènement dans le flux. C’est cette chronique qui rend nécessaire la définition de la fonction Look-ahead. Nous pouvons alors démontrer la propriété suivante, qui caractérise le comportement recherché de la fonction. Propriété 6 (Look-ahead). Après l’instant d, l’ensemble des reconnaissances de la chronique C n’évoluera pas avant au moins l’instant TC (ϕ, d). Plus formellement, pour toute chronique C ∈ X et pour tout instant d ∈ R : ∀t1 ∈ [d, TC (ϕ, d)[ RC (ϕ, d) = RC (ϕ, t1) Démonstration. Soit d ∈ R. Nous montrons cette propriété par induction sur la chronique C. Pour simplifier les notations, on s’affranchit ici des attributs, du prédicat P et de la fonction f. La démonstration complète est analogue. Soit t1 ∈ [d, TC (ϕ, d)[. 
Par la Propriété 1, il ne reste à montrer que l’inclusion RC (ϕ, d) ⊇ RC (ϕ, t1). — Si C = A ∈ N. Par définition, on a RA(ϕ, t1) = {(e, t) : ∃i ϕ(i) = (e, t) ∧ e = a ∧ t ≤ t1}, et RA(ϕ, d) = {(e, t) : ∃i ϕ(i) = (e, t) ∧ e = a ∧ t ≤ d}. Par l’absurde, soit (e0, t0) ∈ RA(ϕ, t1) \ RA(ϕ, d). On a donc t0 < t1 < TA(ϕ, d), i.e. t0 < min{t : ∃i ∈ N ∃e ∈ N ϕ(i) = (e, t) ∧ t > d}. Or t0 > d car (e0, t0) ∈/ RA(ϕ, d), donc t0 ∈ {t : ∃i ∈ N ∃e ∈ N ϕ(i) = (e, t) ∧ t > d} et, en particulier, t0 < t0. Absurde. D’où RA(ϕ, t1) \ RA(ϕ, d) = ∅, et donc RA(ϕ, d) ⊇ RA(ϕ, t1). — Si C = C1 | | C2. On note que t1 < TC (ϕ, d) = min{TC1 (ϕ, d), TC2 (ϕ, d)} et donc t1 < TC1 (ϕ, d) et t1 < TC2 (ϕ, d). RC (ϕ, t1) = {hr, ⊥i : r ∈ RC1 (ϕ, t1)} ∪ {h⊥, ri : r ∈ RC2 (ϕ, t1)} = {hr, ⊥i : r ∈ RC1 (ϕ, d)} ∪ {h⊥, ri : r ∈ RC2 (ϕ, d)} par hypothèse d’induction = RC (ϕ, d) — La démonstration relative aux opérateurs de conjonction, de séquence, d’absence, meets, overlaps, starts, during, finishes, equals, lasts δ, at most δ, at least δ, cut, et changement d’état est identique à celle du cas précédent (car la définition de TC (ϕ, d) est identique pour ces opérateurs). — Si C = C1 then δ. On a RC (ϕ, d) = {hr1,(τ, i)i : i ≤ d ∧ r1 ∈ RC1 (ϕ, i) ∧ i = Tmax(r1) + δ}, Et : RC (ϕ, t1) = {hr1,(τ, j)i : j ≤ t1 ∧ r1 ∈ RC1 (ϕ, j) ∧ j = Tmax(r1) + δ}. = RC (ϕ, d) ∪ {r1 : ∃j d < j ≤ t1 ⇒ (r1 ∈ RC1 (ϕ, j) ∧ j = Tmax(r1) + δ)} On souhaite montrer que {hr1,(τ, j)i : d < j ≤ t1 ∧ r1 ∈ RC1 (ϕ, j) ∧ j = Tmax(r1) + δ} est 65Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements vide. Par l’absurde supposons hr1,(τ, j)i un élément de cet ensemble. Alors d < j ≤ t1 et r1 ∈ RC1 (ϕ, j) ∧ j = Tmax(r1) + δ. Comme t1 < TC (ϕ, d) ≤ TC1 (ϕ, d), j ∈]d, t1] implique que j ∈ [d, TC1 (ϕ, d)[. On peut donc appliquer l’hypothèse d’induction qui nous donne que RC1 (ϕ, d) = RC1 (ϕ, j). On a donc que r1 ∈ RC1 (ϕ, d). De plus, on a bien que j = Tmax(r1) + δ > d, Donc j = Tmax(r1) + δ ∈ {Tmax(r1) + δ : r1 ∈ RC1 (ϕ, d) ∧ Tmax(r1) + δ > d}. Or j ≤ t1 < TC1 (ϕ, d) ≤ τC1,δ(ϕ, d) = min{Tmax(r1)+δ : r1 ∈ RC1 (ϕ, d)∧Tmax(r1)+δ > d}. Absurde. Donc RC (ϕ, t1) = RC (ϕ, d). — Si C = C1→x ou si C = @C1, la démonstration est immédiate car TC (ϕ, d) = TC1 (ϕ, d). 2.6 Tableau récapitulatif informel des propriétés du langage des chroniques Le Tableau 2.1 présente un récapitulatif informel de l’ensemble des constructions et propriétés du langage des chroniques qui ont été présentées dans ce chapitre. 66CHAPITRE 2. CONSTRUCTION D’UN CADRE THÉORIQUE POUR LA RECONNAISSANCE DE COMPORTEMENTS : LE LANGAGE DES CHRONIQUES Tableau 2.1 – Récapitulatif informel des constructions et propriétés du langage des chroniques Nom Chronique Pré-requis Ce(C1)∩Ce(C2) ={♦} Ce Cr Forme de l’arbre Condition temporelle Commut. Assoc. 
Look-ahead évènement simple A {♦} Ce (e, t) min{t:ϕ(i)=(e, t) ∧t > d} séquence C1 C2 • ∪ Ce hr1, r2i Tmax(r1) < Tmin(r2) • min{TC1 , TC2 } conjonction C1& C2 • ∪ Ce hr1, r2i • • min{TC1 , TC2 } disjonction C1 || C2 ∩ Ce hr1, ⊥i, h⊥, r2i • • min{TC1 , TC2 } absence (C1) − [C2[ • ∪ Cr(C1) hr1i ∀r2 ( Tmin(r1)>Tmin(r2) ∨ Tmax(r1) Tmin(r2) ∧ Tmax(r1) Tmin(r2) ∧ Tmax(r1)= Tmax(r2) min{TC1 , TC2 } equals C1 equals C2 • ∪ Ce hr1, r2i Tmin(r1) = Tmin(r2) ∧ Tmax(r1)= Tmax(r2) • • min{TC1 , TC2 } lasts C lasts δ Cr Ce hri Tmax(r)− Tmin(r)=δ TC at least C at least δ Cr Ce hri Tmax(r)− Tmin(r)>δ TC at most C at most δ Cr Ce hri Tmax(r)− Tmin(r)<δ TC then C then δ Cr Ce hr, (τ, i)i i = Tmax(r)+δ min{τC1,δ(ϕ, d), TC1 (ϕ, d)} nommage C →x Cr {x, ♦} hri TC cut C1!C2 • ∪ Ce hr1, r2i Tmax(r1) < Tmin(r2) . . . min{TC1 , TC2 } changement d’état C1!!C2 • ∪ Ce hr1, r2i Tmax(r1) < Tmin(r2) . . . min{TC1 , TC2 } évènement de reco. @ C Cr Ce (e, t) TC 67Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements 2.7 Conclusion Dans ce chapitre, nous avons largement étendu le langage des chroniques présenté dans [CCK11]. Nous avons ajouté de nombreux opérateurs, augmentant ainsi l’expressivité de notre langage. Notamment, nous avons ajouté dix constructions exprimant toutes les contraintes temporelles possibles en transposant dans notre formalisme la logique d’intervalles d’Allen. Nous avons également formalisé la notion de propriété liée à un évènement du flux à analyser. Ceci nous a ensuite permis de doter le langage des chroniques de la possibilité d’exprimer des contraintes sur ces attributs d’évènement. Pour l’ensemble de ces constructions, nous avons formellement défini la syntaxe et la sémantique associées. Pour ce faire, nous avons été amenés à raffiner la représentation des reconnaissances en passant d’un formalisme ensembliste à un formalisme arborescent conservant davantage d’informations sur la reconnaissance. Nous avons étudié les principales propriétés du langage étendu des chroniques. Nous avons également défini une fonction de Look-ahead qui permet de formaliser la gestion du temps continu dans le cadre d’une implémentation éventuelle du processus de reconnaissance, en indiquant à quels instants interroger le système. Nous possédons donc maintenant un cadre théorique formel complet pour effectuer de la reconnaissance de comportements. Il s’agit donc maintenant d’implémenter un processus de reconnaissance de chroniques fondé sur cette base théorique. Nous allons construire deux tels modèles : — dans les Chapitres 3 et 4, nous allons définir un modèle à l’aide de réseaux de Petri colorés permettant de valider les principes de reconnaissance en faisant notamment ressortir les problèmes de concurrence ; — dans le Chapitre 5, nous implémentons un modèle de reconnaissance sous la forme d’une bibliothèque C++ appelée Chronicle Recognition Library (CRL). 68Chapitre 3 Un modèle de reconnaissance en réseaux de Petri colorés dit « à un seul jeton » Sommaire 4.1 Construction et fonctionnement des réseaux dits « multi-jetons » . 118 4.1.1 Types et expressions utilisés dans les réseaux multi-jetons . . . . . . . . 119 4.1.2 Structure globale des réseaux multi-jetons . . . . . . . . . . . . . . . . . 119 4.1.3 Briques de base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 4.1.4 Construction par induction . . . . . . . . . . . . . . . . . . . . . . . . . 126 4.1.5 Bilan sur le degré de contrôle acquis et stratégie de tirage . . . . . . . . 
133 4.2 Construction et fonctionnement des réseaux « contrôlés » . . . . . . 134 4.2.1 Types et expressions utilisés . . . . . . . . . . . . . . . . . . . . . . . . . 135 4.2.2 Structure globale des réseaux . . . . . . . . . . . . . . . . . . . . . . . . 137 4.2.3 Briques de base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 4.2.4 Un séparateur de jetons générique . . . . . . . . . . . . . . . . . . . . . 144 4.2.5 Construction par induction des réseaux contrôlés . . . . . . . . . . . . . 146 4.2.6 Graphes d’espace d’états des réseaux contrôlés . . . . . . . . . . . . . . 159 4.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162 Dans le Chapitre 2, nous avons formalisé un langage de description de comportements, le langage des chroniques, et sa sémantique associée, définissant ainsi la notion de reconnaissance d’une chronique dans un flux d’évènements. Il s’agit maintenant de proposer une implémentation du processus de reconnaissance dont il est possible de montrer que l’exécution respecte la sémantique préalablement définie dans la Section 2.3.2. En effet, dans l’optique de traiter des applications critiques (par exemple s’assurer qu’un avion sans pilote respecte bien des procédures de sécurité 69Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements données, comme dans la Section 5.3), il faut que le processus de reconnaissance soit fiable et robuste car il n’y a pas de droit à l’erreur. Nous allons développer deux implémentations du système de reconnaissance de comportements permettant cette vérification. Dans ce chapitre et le suivant, nous commençons par définir, à l’aide de réseaux de Petri colorés, un premier modèle de reconnaissance d’un sous-langage des chroniques restreint aux opérateurs de séquence, de conjonction, de disjonction et d’absence, et qui ne permet donc pas l’expression de contraintes temporelles ni de contraintes sur des attributs d’évènement. Pour chaque chronique, un réseau, construit par induction, calcule l’ensemble des reconnaissances de la chronique sur un flux d’évènements donné. Une version initiale de ce modèle est présentée dans [CCK11], mais il n’est pas totalement formalisé car il faut réaliser la démonstration de son adéquation avec la sémantique des chroniques. Ce modèle, contrairement à ceux présentés dans le Chapitre 4, ne possède qu’un jeton par place. C’est pourquoi nous nous y référons comme le modèle « à un seul jeton ». Dans ce chapitre, nous complétons la formalisation des réseaux de Petri colorés présentés dans [CCK11] et nous nous attaquons à la vérification de son adéquation [CCKP12a] : — nous posons une nouvelle définition théorique de la notion de fusion de réseaux de Petri colorés ; — nous corrigeons les modèles de la conjonction et de l’absence pour qu’ils fonctionnent dans tous les cas de composition ; — nous proposons une formalisation de la construction des réseaux en faisant ressortir une structure commune à tous les réseaux afin de pouvoir les définir par induction ; — nous formalisons également les règles d’exécution des réseaux pour bien obtenir les ensembles de reconnaissances des chroniques concernées ; — nous démontrons la correction des réseaux vis-à-vis de la sémantique ensembliste de la Section 2.3.2 pour les chroniques incluant, au plus, une absence au plus haut niveau. 
Dans une première Section (3.1), nous commençons par rappeler la définition des réseaux de Petri colorés en introduisant la notion d’arcs inhibiteurs ainsi qu’une nouvelle définition de fusion. Nous construisons ensuite formellement dans la Section 3.2 les réseaux de Petri colorés modélisant la reconnaissance de chroniques ; puis, dans la Section 3.3, nous décrivons le comportement des réseaux lorsqu’ils suivent une certaine règle d’exécution définie en 3.3.6. Le modèle est alors complètement formalisé. Ceci nous permet de montrer dans la Section 3.4 que, pour des constructions faisant intervenir au plus une absence au plus haut niveau, les reconnaissances produites par les réseaux correspondent exactement à celles définies par la sémantique du langage. Nous achevons ce chapitre en étudiant dans la Section 3.5 la complexité des réseaux construits. 3.1 Définition du formalisme des réseaux de Petri colorés Commençons par définir le formalisme des réseaux de Petri colorés employé par la suite pour modéliser le processus de reconnaissance. Dans la Section 3.1.1, nous introduisons les notions de type et d’expression. Nous donnons ensuite dans la Section 3.1.2 une définition des réseaux de Petri colorés. Cette définition est complétée dans la Section 3.1.3 par une nouvelle notion de fusion de réseaux, et dans la Section 3.1.4 par des arcs inhibiteurs. 70CHAPITRE 3. UN MODÈLE DE RECONNAISSANCE EN RÉSEAUX DE PETRI COLORÉS DIT « À UN SEUL JETON » 3.1.1 Types et expressions Il s’agit ici de fournir un cadre formel pour la description des types de données et des fonctions utilisés dans les réseaux (voir la Section 3.2.1 pour la description concrète des types et expressions de nos réseaux). À partir d’un ensemble de types de base, nous construisons l’ensemble des types de fonction possibles : Définition 20 (signature d’expression typée). Soit un ensemble B dont les éléments seront des types de base. On définit inductivement l’ensemble des types fonctionnels de B, noté TB, par : b ∈ B b ∈ TB (type de base) τ1, . . . , τn, τ ∈ TB τ1 × · · · × τn → τ ∈ TB (type fonctionnel) On définit l’arité d’un type χ ∈ TB, notée ari(χ), par : — si χ ∈ B, alors ari(χ) = 0 — si χ = τ1 × · · · × τn → τ , alors ari(χ) = n Si B est un ensemble de types de base, F, un ensemble de fonctions, et σ : F −→ TB une fonction de typage, on appelle Σ = (B, F, σ) une signature d’expression typée. À partir de cette signature, on peut définir l’ensemble des expressions qui sont employées dans les gardes et les étiquettes des arcs : Définition 21 (ensemble des expressions). Soit Σ = (B, F, σ) une signature d’expression typée. Un ensemble de variables B-typées est un couple (V, σ) où V est un ensemble de variables et σ : V → B est une fonction de typage. On définit par induction l’ensemble des V -expressions de Σ, noté ExprΣ(V ) , et on prolonge canoniquement la fonction σ à ExprΣ(V ) par : x ∈ V x ∈ ExprΣ(V ) (variable) c ∈ F ari(σ(c)) = 0 c ∈ ExprΣ(V ) (constante) f ∈ F e1, . . . , en ∈ ExprΣ(V ) σ(f) = σ(e1) × · · · × σ(en) → τ f(e1, . . . , en) ∈ ExprΣ(V ) σ(f(e1, . . . , en)) = τ (fonction) 3.1.2 Réseaux de Petri colorés Un réseau de Petri coloré est un graphe biparti composé de deux types de nœuds : des places représentées par des ellipses, et des transitions représentées par des rectangles. Des arcs orientés lient des places à des transitions. Ils sont étiquetés par des expressions. On les appelle arcs en entrée et, dans les réseaux que nous utilisons, ils sont étiquetés par des noms de variables. 
D’autres arcs orientés lient des transitions à des places. On les appelle arcs en sortie et ils sont étiquetés par des expressions de fonctions dans lesquelles des variables des arcs d’entrée de la transition, appelées variables d’entrée, peuvent apparaître. 71Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements Les places sont typées et stockent un ou plusieurs éléments de leur type, appelés jetons, qui constituent le marquage de la place. Dans les réseaux de Petri de ce chapitre, chaque place ne contiendra qu’un seul jeton. En revanche, dans le Chapitre 4, nous construisons un modèle tirant parti de la possibilité d’avoir plusieurs jetons par place. Les transitions peuvent être munies de gardes, c’est-à-dire de conditions booléennes, dans lesquelles peuvent apparaître les variables d’entrée de la transition. Une transition peut être tirée lorsque les deux conditions suivantes sont réunies : (i) si elle possède une garde, celle-ci est vérifiée ; (ii) toutes les places liées par un arc d’entrée à la transition possèdent un jeton (c’est-à-dire qu’une valeur peut être attribuée à chaque variable d’entrée de la transition). Tirer une transition revient à consommer les jetons indiqués sur les arcs d’entrée de la transition et à appliquer les fonctions étiquetant les arcs de sortie de la transition, c’est-à-dire à ajouter la valeur de ces fonctions au contenu des places liées par des arcs de sortie à la transition. Ainsi, au fur et à mesure que les transitions sont tirées, le marquage des places évolue. Remarque 16 (multiples tirages successifs possibles de transitions). On remarque donc que, pour un marquage donné d’un réseau, il peut y avoir plusieurs séquences possibles de transitions tirables. Ce comportement non déterministe – dans le sens où il ne mène pas nécessairement toujours au même marquage – peut être souhaité pour exprimer différentes possibilités d’évolution d’un système. Cependant, notre travail sur les réseaux de Petri nécessite d’obtenir systématiquement le même marquage à l’issue du traitement occasionné par un évènement du flux. En effet, on doit pouvoir lire dans celui-ci l’ensemble des reconnaissances de la chronique. Notons que cela n’implique pas qu’il n’existe qu’une seule stratégie de tirage de transitions adaptée, comme on le verra dans la Section 4.2. Il nous faudra donc soit accompagner chaque réseau d’une stratégie de tirage, soit montrer que toute séquence possible de transitions donne le même marquage final du réseau. Définissons maintenant plus formellement un réseau de Petri coloré, d’après la définition de [JK09]. Définition 22 (réseau de Petri coloré). Un réseau de Petri coloré N est un 9-uplet (P, T, A, B, V, C, G, EX, I) où : 1. P est un ensemble fini de places ; 2. T est un ensemble fini de transitions disjoint de P ; 3. A ⊆ P × T ] T × P est un ensemble d’arcs orientés ; 4. B est un ensemble fini de types ; 5. V est un ensemble fini de variables typées par une fonction σ telle que ∀v∈V σ(v)∈B ; 6. C : P → B est une fonction de coloriage qui à chaque place associe un type ; 7. G : T → ExprΣ(V ) est une fonction de garde qui à chaque transition associe une garde de type booléenne et dont les variables sont des éléments de V ; 72CHAPITRE 3. UN MODÈLE DE RECONNAISSANCE EN RÉSEAUX DE PETRI COLORÉS DIT « À UN SEUL JETON » 8. EX : A → ExprΣ(V ) est une fonction d’expression d’arcs qui à chaque arc associe une expression telle que, pour tout arc a ∈ A, EX(a) est du même type que la place à laquelle est relié l’arc a ; 9. 
I : P → ExprΣ(V ) est une fonction d’initialisation qui à chaque place associe un marquage initial tel que, pour toute place p ∈ P, σ(I(p)) = C(p).
Pour modéliser et exécuter les réseaux de Petri colorés, nous utilisons le logiciel CPN Tools (1) [RWL+03]. La Figure 3.1 présente l’apparence d’un réseau dans l’interface de CPN Tools. Pour simplifier la lecture et la conception de réseaux, CPN Tools dispose d’un système d’annotation de fusion (« fusion tags ») : deux places possédant la même annotation de fusion (2) doivent être assimilées à une unique place, et possèdent par conséquent le même marquage. Ce système sert notamment à la composition de réseaux, ce que nous exploitons pour la construction inductive de nos réseaux. Il offre également la possibilité d’effectuer des simulations et des analyses d’espace d’états.
Figure 3.1 – Un réseau sur CPN Tools (place typée INT, transition avec garde, marquage initial et marquage courant – ici 1 jeton de valeur 3 et 2 jetons de valeur 4 –, annotations de fusion)
(1) http://cpntools.org/
(2) Selon le formalisme défini par la suite en 3.1.3, ces deux places appartiennent donc à un même ensemble de fusion de places.
3.1.3 La fusion de places
Nous modélisons le processus de reconnaissance de chroniques en élaborant des réseaux de Petri colorés associés aux chroniques, et ce de façon modulaire : cette construction se fait par induction sur la structure de la chronique, à l’aide de fusion de places. Il s’agit maintenant de définir formellement ce mécanisme de fusion. Il existe deux types de fusion pour les réseaux de Petri colorés : la fusion de places, et la fusion de transitions. Cette dernière consiste en l’unification de plusieurs transitions en une unique transition regroupant les arcs d’entrée et de sortie de ces transitions, et dont la garde est la conjonction de l’ensemble des gardes des transitions fusionnées. La transition résultante n’est donc tirable que lorsque l’ensemble des transitions fusionnées le sont. Cette caractéristique n’est pas désirable pour notre modèle et nous n’utilisons donc pas cette fonctionnalité. La fusion de places se comprend premièrement intuitivement comme une facilité d’écriture : les places fusionnées partagent systématiquement le même marquage initial, le même type, et le même marquage courant, et il s’agit donc du dédoublement d’une unique place facilitant la représentation des arcs d’entrée et de sortie, évitant ainsi l’éventuel « plat de spaghetti ». Cependant, nous verrons que la fusion de places peut également poser des problèmes de sémantique.
Une définition formelle de la fusion est proposée dans [CP92] (Definition 4.3) où sont définis des réseaux de Petri modulaires – Modular Coloured Petri Nets (MCPNs). Ce sont des triplets (S, P F, T F) où : (i) S est un ensemble fini de modules qui sont des réseaux de Petri disjoints ; (ii) P F est un ensemble fini d’ensembles de fusion de places vérifiant certaines conditions ; (iii) T F est un ensemble fini d’ensembles de fusion de transitions. Nous nous inspirons largement de cette définition, mais nous avons besoin d’y apporter quelques modifications. D’une part, dans cette définition, un réseau de Petri modulaire est construit à partir d’un ensemble de réseaux de Petri non modulaires S dont on souhaite fusionner certaines places et certaines transitions. Cette définition ne convient pas directement à notre construction.
En effet, elle est définie par induction, et des fusions successives sont donc effectuées. La fusion doit donc se faire à partir d’un ensemble de réseaux de Petri eux-même modulaires. Dans [CP92] il est ensuite montré (Théorème 4.9) que, pour tout MCPN, on peut aisément construire un réseau de Petri coloré non modulaire équivalent, ce qui résout notre problème. Cependant, afin de nous affranchir de conversions incessantes, nous établissons dans cette section une définition directe de la fusion de places à partir de MCPN plutôt qu’à partir de réseaux de Petri colorés simples. D’autre part, un pré-requis sur l’ensemble P F est que toutes les places d’un de ses ensembles de fusions de places pf (c’est-à-dire toutes les places à fusionner ensemble) doivent avoir le même type et le même marquage initial. C’est une condition que l’on retrouve également dans un article de K. Jensen [HJS91]. Cependant, dans le logiciel que nous utilisons, CPN Tools, il est possible de fusionner deux places ayant des types et marquages initiaux différents : c’est la première place sélectionnée qui définit les caractéristiques de la seconde. Nous n’autoriserons pas la fusion de places de types différents car cela pourrait entraîner la construction d’un réseau mal formé, même si les réseaux de départ son bien formés. En revanche, la fusion de places de marquages initiaux différents offre des possibilités intéressantes en permettant d’adapter le marquage initial d’une place selon qu’elle soit fusionnée ou non. Nous aurons besoin d’exploiter cette possibilité, donc nous l’intégrons dans notre définition de la fusion. Enfin, comme nous ne mettons pas en œuvre de fusion de transitions, nous ne faisons pas apparaître l’ensemble T F. On obtient ainsi les définitions suivantes, en commençant par poser la notion de MCPN puis en formalisant la fusion. Définition 23 (réseau de Petri coloré modulaire). Un réseau de Petri coloré modulaire (MCPN) est un couple (S, P F) vérifiant les propriétés suivantes : (i) S est un ensemble fini de modules tel que : — Chaque module s ∈ S est un réseau de Petri coloré (Ps, Ts, As, Bs, Vs, Cs, Gs, EXs, Is). 74CHAPITRE 3. UN MODÈLE DE RECONNAISSANCE EN RÉSEAUX DE PETRI COLORÉS DIT « À UN SEUL JETON » — Les ensembles des éléments des réseaux sont deux-à-deux disjoints : ∀s1 ∈ S ∀s2 ∈ S [s1 6= s2 ⇒ (Ps1 ∪Ts1 ) ∩ (Ps2 ∪Ts2 ) = ∅] Notons que l’on peut renommer les places et les transitions de manière unique en les préfixant par un identifiant de réseau. On pourra donc supposer que les réseaux ont des places et des transitions distinctes sans mettre en œuvre cette préfixation qui conduit à des notations excessivement lourdes. (ii) P F ⊆ P × P(P) est un ensemble fini de couples de fusion de places, dont on appellera place d’initialisation de la fusion le premier membre, ainsi qu’ensemble de fusion de places le second, où P = S s∈S Ps et P(P) désigne l’ensemble des parties de P, et tel que : — Toute place d’initialisation appartient à l’ensemble de fusion de places associé. ∀(p0, E) ∈ P F p0 ∈ E — Toutes les places d’un ensemble de fusion de places ont même type. ∀(p0, E) ∈ P F ∀p1 ∈ E ∀p2 ∈ E C(p1) = C(p2) — L’ensemble des ensembles de fusion de places forme une partition de P 3 . [ ∀p ∈ P ∃p0 ∈ P ∃E ∈ P(P) ((p0, E) ∈ P F ∧ p ∈ E) ] ∧ [ ∀p01 ∈ P ∀p02 ∈ P ∀E1 ⊆ P ∀E2 ⊆ P ( (p01 , E1)∈P F ∧(p02 , E2)∈P F ∧E16=E2 ) ⇒ E1∩E2 = ∅ ] Remarque 17. Notons qu’à tout réseau de Petri coloré N correspond trivialement un MCPN équivalent : il suffit de poser M = {S, P F} où S = {N} et P F = {(p, {p}) : p ∈ PN }. 
On définit maintenant une fonction de fusion entre réseaux de Petri modulaires. C’est celle-ci que nous appliquons lors de l’induction pour construire nos réseaux de Petri associés aux chroniques. Définition 24 (fusion de places). Soit M = {(S1, P F1), . . . ,(Sn, P Fn)} un ensemble fini de MCPN et soit P F0 un ensemble fini de couples de fusion de places tels que : — Les places d’initialisation de fusion des couples de P F0 sont des places d’initialisation de couples des P F1, . . . , P Fn : ∀p0∈P ∀E0⊆P [ (p0, E0)∈P F0 ⇒ ( ∃i∈J1, nK ∃Ei⊆P (p0, Ei)∈P Fi ) ] — Les ensembles de fusion de places des couples de P F0 sont formés de places des MCPN de M : ∀p0∈P ∀E0⊆P [ (p0, E0)∈P F0 ⇒ ( ∀p∈E0 ∃i∈J1, nK ∃Ei⊆P ((p0, Ei)∈P Fi ∧ p ∈ Ei) ) ] Alors on peut définir la fusion de ces MCPN relativement à l’ensemble P F0 : F usion(M, P F0) = (S, P F) où : — S = S 1≤i≤n Si — P F = {(p0, S p∈E E(p)) : (p0, E) ∈ P F0} où E(p) est l’unique Ei tel que (p, Ei) ∈ P Fi . 3. Notons que la contrainte que les ensembles de fusion de places soient deux à deux disjoints n’était pas requise dans [CP92] mais découle de notre construction qui autorise la fusion de places à marquages initiaux différents. En effet, si une place appartenait à deux ensembles de fusion de places distincts, deux marquages initiaux différents pourraient lui être associés, ce qui est absurde. 75Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements On peut montrer, à l’aide d’une démonstration analogue à celle exposée dans [CP92], que le réseau de Petri suivant ainsi que le MCPN à partir duquel il est construit sont équivalents du point de vue de leur comportement (évolution du marquage, transitions tirables, places atteignables. . . ). Définition 25 (réseau de Petri coloré équivalent). Pour toute place d’initialisation p0, E(p0) désigne l’ensemble de fusion de places associé, et pour toute place p et toute transition t, s(p) et s(t) désignent respectivement le réseau de Petri de S tel que p ∈ Ps(p) et celui tel que t ∈ Ts(t) . Soit M = (S, P F) un réseau de Petri coloré modulaire. On définit le réseau de Petri coloré équivalent par M∗ = (P ∗ , T ∗ , A∗ , B ∗ , V ∗ , C∗ , G∗ , EX∗ , I∗ ) où : (i) P ∗ = {p0 ∈ P : ∃E0 ∈ P(P) (p0, E0) ∈ P F} (on rappelle que P = S s∈S Ps) (ii) T ∗ = S s∈S Ts (iii) A∗ = {(p0, t) ∈ P ∗ × T ∗ : ∃p ∈ E(p0) (p, t) ∈ As(p)} ∪ {(t, p0) ∈ T ∗ × P ∗ : ∃p ∈ E(p0) (t, p) ∈ As(p)} (iv) B ∗ = S s∈S Bs (v) V ∗ = S s∈S Vs (vi) C ∗ : P ∗ → B∗ , p0 7→ Cs(p0)(p0) (vii) G∗ : T ∗ → ExprΣ(V ∗) , t 7→ Gs(t)(t) (viii) EX∗ : A∗ → ExprΣ(V ∗) est défini par : — pour tous (p0, t) tels que ∃p ∈ E(p0) (p, t) ∈ As(p) , (p0, t) 7→ EXs(p)((p, t)), — pour tous (t, p0) tels que ∃p ∈ E(p0) (t, p) ∈ As(p) , (t, p0) 7→ EXs(p)((t, p)). (ix) I ∗ : P ∗ → ExprΣ(V ∗) , p0 7→ Is(p0)(p0) 3.1.4 Arcs inhibiteurs Dans le Chapitre 4, nous construisons des réseaux de Petri plus complexes où il faut pouvoir spécifier qu’une transition n’est tirable que si une place donnée est vide. Cette contrainte ne peut pas s’exprimer par une simple garde sur la transition. On introduit un nouveau type d’arc, l’arc inhibiteur, qui permet l’implémentation d’algorithmes fondés sur l’absence ou non de jetons dans une place. Des réseaux équivalents peuvent être construits sans arcs inhibiteurs à l’aide de listes mais cela conduit à des réseaux trop complexes. On complète donc la Définition 22 des réseaux de Petri colorés comme suit : Définition 26 (arcs inhibiteurs). 
Un réseau de Petri coloré muni d’arcs inhibiteurs est un 10-uplet (P, T, A, B, V, C, G, EX, I, B) où (P, T, A, B, V, C, G, EX, I) est un réseau de Petri coloré et :
10. B ⊆ P × T est un ensemble d’arcs inhibiteurs orientés.
Un arc inhibiteur n’a d’incidence que sur le tirage des transitions. Notons Inh(t) = {p ∈ P : (p, t) ∈ B} l’ensemble des places reliées par un arc inhibiteur à une transition t ∈ T. La contrainte suivante s’ajoute pour qu’une transition t ∈ T soit tirable :
(iii) aucune place inhibitrice p ∈ Inh(t) ne contient de jeton.
Dans le logiciel CPN Tools, les arcs inhibiteurs sont représentés comme sur la Figure 3.2. Lorsque la place est vide, la transition est tirable, ce qui est indiqué par le liseré vert.
Figure 3.2 – Arcs inhibiteurs sur CPN Tools
3.2 Construction formelle des réseaux dits « à un seul jeton »
À l’aide du formalisme des réseaux de Petri colorés munis du mécanisme de fusion décrit dans 3.1, nous construisons par induction un modèle du processus de reconnaissance. Pour chaque chronique C, nous définissons un réseau de Petri coloré associé dont le marquage évolue en fonction du flux d’évènements. Une place du réseau contient l’ensemble des reconnaissances correspondant à la chronique étudiée. Nous commençons par établir dans la Section 3.2.1 les types des données et fonctions utilisés. Nous donnons ensuite un aperçu de la structure générale des réseaux que nous allons construire (Section 3.2.2). Nous posons dans la Section 3.2.3 un ensemble élémentaire de réseaux de Petri colorés, les briques de base, que nous utilisons pour construire nos réseaux lors d’une induction sur la structure de la chronique dans la Section 3.2.4.
3.2.1 Types et expressions utilisés dans le modèle
Posons maintenant les types et fonctions que nous utilisons dans nos réseaux de Petri en définissant Σ = (B, F, σ) (Définition 20). Posons B = {INT, Event, NCList}, et définissons le type NEvent = Event × INT. Le type INT correspond à un entier, Event, à un évènement, et NCList, à une liste de listes de couples NEvent. Un jeton de type NCList correspond à un ensemble de reconnaissances. Chaque élément de la liste NCList est une liste de NEvent, c’est-à-dire une liste d’évènements datés qui représente une reconnaissance. Posons F = {+, max, ANR, CPR, rem_obs, mixAnd, last_ini_recog} qui correspond à l’ensemble des fonctions employées dans nos réseaux. Le Tableau 3.3 décrit brièvement le fonctionnement général de ces fonctions. Une explication axée sur notre utilisation de ces fonctions sera donnée lors de la description de l’exécution des réseaux dans la Section 3.3. À partir des fonctions de F, nous pouvons alors définir une dernière fonction qui permet de compléter des reconnaissances dans une liste avec une instance d’un évènement :
complete : ((E(a), cpt), curr, inst) ↦ ANR(inst, CPR([[(E(a), cpt + 1)]], curr))
Tableau 3.3 – Fonctions utilisées dans nos réseaux (pour chaque fonction : son type, c’est-à-dire son image par σ, puis sa description)
— + : INT × INT → INT. Addition usuelle.
— max : INT × INT → INT. Entier maximum entre deux entiers.
— ANR (Add New Recognition) : NCList × NCList → NCList. Ajoute le contenu de la seconde liste à la première liste, s’il n’y est pas encore.
— CPR (Complete Partial Recognition) : NEvent × NCList → NCList. Renvoie la liste de listes de couples dont chaque liste de couples a été complétée par le couple supplémentaire (E(a), cpt + 1).
— rem_obs : INT × NCList → NCList. Renvoie la liste de laquelle on a d’abord supprimé toutes les listes de couples dont un des couples (E(a), cpt) a un indice cpt inférieur à l’entier n qui est en argument.
— mergeAndNew : NCList × NCList × NCList × NCList → NCList. Renvoie la dernière liste à laquelle a été ajoutée une combinaison explicitée par la suite des trois premières listes.
— startsub : NCList → NCList. Renvoie [ [ ] ] si l’argument est une liste non vide, [ ] sinon.
— chg_win : INT × NCList → INT. Renvoie le maximum entre l’entier en argument et le plus grand instant de reconnaissance des reconnaissances de la liste.
— concatabs : NCList × NCList × NCList × INT → NCList. Effectue les combinaisons séquentielles possibles entre les deux premières listes, puis les ajoute à la troisième liste si elles sont nouvelles (le critère de nouveauté étant déterminé par une comparaison avec l’entier).
3.2.2 Structure générale des réseaux « à un seul jeton »
Nous construisons donc inductivement sur la structure du langage un réseau de Petri coloré pour chaque chronique. Il permet de calculer, en fonction d’un flux d’évènements, l’ensemble de reconnaissances associé. Pour pouvoir réaliser cette construction inductive, il faut que les réseaux soient modulaires. Cette caractéristique se traduit par le fait que tous les réseaux associés aux chroniques ont une structure générale identique permettant ainsi de les combiner. Cette structure est présentée dans la Figure 3.4.
Figure 3.4 – Structure des réseaux
Chaque réseau possède un compteur d’évènements (place Present et transition End) ainsi que quatre places principales :
— la place Present, de type INT, est fusionnée avec le compteur d’évènements et contient un entier correspondant au nombre d’évènements déjà traités ;
— la place Start est de type NCList, elle contient une liste de reconnaissances qui seront complétées par le réseau ;
— la place Success est également de type NCList, elle contient les reconnaissances de la place Start complétées par le réseau ;
— la place Wini, de type INT, contient un entier qui sert de repère dans le cas d’une absence, permettant de déterminer les reconnaissances qu’il faut alors supprimer de la place Start.
Chaque place contient exactement un seul jeton, d’où le nom de modèle « à un seul jeton ». Les places Start et Success sont marquées d’un jeton contenant une liste des reconnaissances. Les reconnaissances vont circuler dans les réseaux au fur et à mesure de l’évolution du flux d’évènements. Pour une chronique complexe, une reconnaissance est d’abord vide [ ] dans la place Start du réseau général, puis elle va petit à petit être complétée par les évènements pertinents du flux. Au fur et à mesure qu’elle est complétée, elle va transiter de la place Start à la place Success des sous-réseaux, passant de sous-réseau en sous-réseau pour arriver, lorsque la reconnaissance est complète, à la place Success du réseau général. Les réseaux « à un seul jeton » se lisent de gauche à droite, ce qui correspond au circuit des reconnaissances.
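Les jetons qui circulent ainsi sont des valeurs de type NCList, manipulées par les fonctions du Tableau 3.3. À titre purement illustratif, l’esquisse C++ ci-dessous propose une lecture possible de NCList, de CPR, de ANR et de complete ; la représentation des évènements (un simple nom sous forme de chaîne) et les noms Recognition et NEvent sont des hypothèses de l’exemple, et le premier argument de CPR suit ici le type NEvent × NCList → NCList annoncé dans le tableau. Il ne s’agit ni du code CPN Tools ni de celui de la bibliothèque CRL.

#include <algorithm>
#include <string>
#include <vector>

// Hypothèse de représentation : un évènement est réduit à son nom (chaîne),
// et un NEvent est un couple (nom d'évènement, indice dans le flux).
struct NEvent {
    std::string name;
    int index;
    bool operator==(const NEvent& o) const { return name == o.name && index == o.index; }
};
using Recognition = std::vector<NEvent>;  // une liste de couples = une reconnaissance
using NCList = std::vector<Recognition>;  // une liste de reconnaissances

// CPR (Complete Partial Recognition) : complète chaque reconnaissance
// partielle de curr par le couple supplémentaire e.
NCList CPR(const NEvent& e, const NCList& curr) {
    NCList out;
    for (Recognition r : curr) {
        r.push_back(e);
        out.push_back(std::move(r));
    }
    return out;
}

// ANR (Add New Recognition) : ajoute à inst le contenu de nouvelles,
// s'il n'y figure pas déjà.
NCList ANR(NCList inst, const NCList& nouvelles) {
    for (const Recognition& r : nouvelles)
        if (std::find(inst.begin(), inst.end(), r) == inst.end())
            inst.push_back(r);
    return inst;
}

// complete : complète les reconnaissances de curr avec le couple (E(a), cpt + 1)
// puis verse le résultat, sans doublon, dans inst.
NCList complete(const std::string& a, int cpt, const NCList& curr, const NCList& inst) {
    return ANR(inst, CPR(NEvent{a, cpt + 1}, curr));
}

Avec cette lecture, complete((E(a), cpt), curr, inst) complète chaque reconnaissance partielle de curr par le couple (E(a), cpt + 1) puis ajoute le résultat, sans doublon, au contenu de inst, ce qui correspond au rôle annoncé de cette fonction dans les réseaux.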
Ce sont ces quatre places principales que l’on fusionne tour à tour entre plusieurs réseaux et avec les briques élémentaires que nous allons définir dans la Section 3.2.3. Pour la gestion correcte d’absences dans une séquence (ce qui est détaillé dans la Section 3.3.5), les places Wini sont en fait divisées en deux catégories : les places WiniIn et les places WiniOut, qui permettent de fusionner correctement les places pour construire nos réseaux. Lors de la construction des réseaux, nous définissons donc cinq fonctions qui à chaque chronique C associent les cinq types de place principaux Present(C), Start(C), Success(C), WiniIn(C) et WiniOut(C) du réseau correspondant à la chronique C.
3.2.3 Briques de base
Définissons maintenant les quelques réseaux élémentaires que nous utilisons comme briques de base dans la construction inductive de notre modèle de reconnaissance.
Figure 3.5 – Compteur d’évènements
Compteur. Ce réseau élémentaire, noté CPT et présenté dans la Figure 3.5, fait office de compteur d’évènements. Il est composé d’une place Present dans laquelle est stockée la valeur de ce compteur, et d’une transition End qui incrémente le compteur à chaque fois qu’elle est tirée.
Opérateur AND. Ce réseau élémentaire, noté OPAND et présenté dans la Figure 3.6, calcule l’ensemble des reconnaissances de la conjonction de deux chroniques C1 et C2. On donne ici une description très succincte du fonctionnement de cet opérateur. La Section 3.3.4 détaille davantage le déroulement d’une reconnaissance de conjonction. Le réseau est composé de six places. Dans deux places, Operand1 et Operand2, sont stockées les reconnaissances respectives de C1 et de C2. Lorsque la transition AND est tirée, les reconnaissances de C1 et celles de C2 sont combinées de façon à récupérer dans la place Success les reconnaissances de C1&C2. Si la conjonction forme la seconde partie d’une séquence, c’est-à-dire si la chronique étudiée est de la forme · · ·(C3 C1&C2)· · ·, nous ne souhaitons pas que le réseau commence à reconnaître C1 et C2 tant que la première partie de la séquence (à savoir C3) n’a pas été reconnue.
Figure 3.6 – Opérateur AND
La transition Sub sert à initialiser les places Start de C1 et de C2 afin de contrôler la mise en route du mécanisme de reconnaissance de C1 et de C2. La transition Forget sert dans le cas de l’absence. La place Success de ce réseau joue un rôle central car elle stocke les reconnaissances de C1&C2. Nous aurons donc besoin de nous y référer pour effectuer des fusions. Pour cela, nous utiliserons Success(AND) pour Success, de même que Start(AND) pour Start et Wini(AND) pour Wini.
Figure 3.7 – Opérateur ABS
Opérateur ABS. Ce réseau élémentaire, noté OPABS et présenté dans la Figure 3.7, sert à composer deux réseaux de Petri correspondant aux chroniques C1 et C2 pour obtenir les reconnaissances de (C1) − [C2[. Le rôle de ce réseau est double :
— La partie de gauche du réseau est chargée d’assurer la gestion de l’entier repère stocké dans les places Wini du reste du réseau. Rappelons que cet entier permet d’établir si certaines reconnaissances doivent être supprimées car invalidées par une absence. La place WiniBe stocke l’indice suivant celui de la dernière reconnaissance de C2 et est mise à jour par le tirage de la transition Update. Cet indice sert à supprimer les reconnaissances partielles de C1 qui ne doivent pas être complétées car C2 a été reconnue. La place WiniAf sert dans le cas où (C1) − [C2[ est elle-même imbriquée dans une autre absence : elle propage grâce à la transition Down la valeur du Wini de la seconde absence, mais la valeur de Wini de la première absence (celle dans WiniBe) n’est en revanche pas propagée à l’extérieur de celle-ci.
— La partie de droite doit son origine au caractère modulaire de nos réseaux. Elle est chargée de recombiner les reconnaissances de l’absence (qui sont stockées dans la place Oper) avec les reconnaissances pouvant les précéder (qui sont dans la place Start). Ce genre de combinaison est nécessaire lorsque la chronique étudiée contient une absence mais à un niveau de profondeur non nul, par exemple lorsqu’une absence est composée avec une séquence (comme D ((A B) − [C[)). Il s’agit d’une combinaison analogue à celle effectuée par la fonction mergeAndNew pour l’opérateur OPAND mais avec une contrainte temporelle supplémentaire : la combinaison doit être séquentielle. Il n’est pas possible de faire transiter les reconnaissances précédant l’absence à travers le réseau d’absence car la portée de l’absence doit être délimitée. La brique ABS permet donc de marquer les bornes de l’absence et d’isoler les reconnaissances jusqu’à ce qu’elles soient prêtes à être combinées. Comme dans l’opérateur AND, la place StartSub sert à activer le réseau de l’absence. En effet, lorsque l’on cherche à reconnaître D ((A B) − [C[) par exemple, on ne souhaite pas commencer à reconnaître (A B) − [C[ tant qu’il n’y a pas de reconnaissance de D à compléter. La transition Sub permet donc à la fois de mettre à jour la liste des reconnaissances globales de la place Success, et d’activer le réseau de l’absence si nécessaire. Une description plus élaborée du mécanisme de l’absence et des nombreuses problématiques qui lui sont associées est donnée dans la Section 3.3.5.
3.2.4 Construction par induction
Avec les types et expressions définis dans la Section 3.2.1 et les réseaux élémentaires de la Section 3.2.3, nous pouvons maintenant construire notre modèle en réseaux de Petri colorés du processus de reconnaissance. Pour chaque chronique C, nous définissons par induction un réseau de Petri coloré N(C) qui calcule les reconnaissances de C. Chaque réseau résulte de la fusion d’un compteur d’évènements et de sous-réseaux.
La construction par induction se fait donc en deux étapes. Pour une chronique C, nous définissons d’abord une réseau N0 (C) qui correspond au mécanisme global de reconnaissance sans le compteur, puis nous réalisons une fusion de N0 (C) avec le compteur pour définir N(C). Ce sont les réseaux N0 (C) qui sont utilisés comme sous-réseaux dans l’induction comme nous le verrons par la suite. Comme évoqué dans la Section 3.2.2, dans la construction par induction, certaines des places Present, Start, Success et Wini jouent un rôle dans la composition des réseaux. En effet, quelle que soit la chronique C, chaque réseau N(C) a la même structure globale que nous avons présentée dans la Figure 3.4. Formellement, nous définissons en parallèle de N(C) les places Present(C), Start(C), Success(C), WiniIn(C) et WiniOut(C) qui délimitent cette structure et qui sont donc 82CHAPITRE 3. UN MODÈLE DE RECONNAISSANCE EN RÉSEAUX DE PETRI COLORÉS DIT « À UN SEUL JETON » utilisées si l’on compose le réseau avec un autre réseau. Dans cette section, nous présentons la construction formelle de nos réseaux de reconnaissance de chroniques en expliquant les fusions effectuées et le mécanisme global de chaque réseau. Dans les Sections 3.3.1 à 3.3.5, nous donnons ensuite une explication plus détaillée du mécanisme de chaque réseau. Si C = A ∈ N Dans le cas d’un évènement simple, ce réseau élémentaire utilisé pour sa reconnaissance correspond exactement à N0 (C). cpt End Forget A Present 1_Num 0 INT Wini ~1 Success [] NCList Present 1_Num INT Start 1_Num curr NCList rem_obs init curr curr [[]] init INT complete (E(a),cpt) curr inst 0 1_Num cpt+1 cpt inst Figure 3.8 – Réseau correspondant à la chronique A La Figure 3.8 représente le réseau N(A) relatif à l’évènement simple A. Comme évoqué dans la description de la structure générale des réseaux (Section 3.2.2), la place Start contient les reconnaissances devant être complétées par le réseau. Dans l’exemple présenté ici, il s’agit uniquement de reconnaître A. Le marquage de la place Start est donc [ [ ] ] : la reconnaissance à compléter est la reconnaissance vide [ ] qui pourra évoluer en une reconnaissance de A en transitant dans le réseau. La place Wini et la transition Forget sont utilisées si le réseau est fusionné pour construire une chronique plus complexe incluant une absence. On définit la structure du réseau : Present(C) = Present Start(C) = Start Success(C) = Success WiniOut(C) = Wini WiniIn(C) = ∅ 4 83Reconnaissance de comportements complexes par traitement en ligne de flux d’évènements Puis on pose 5 : N(C) = Fusion({N 0 (C), CPT}, {(Present(C), {Present(C), Present(CPT)})}) (3.1) Si C = C1 C2 rem_obs init curr init End A Forget B Forget Present Fusion 84 Present Fusion 84 INT Success Fusion 85 Start Wini Present Fusion 84 Success Start Fusion 85 Wini Fusion 85 Fusion 84 INT cpt cpt+1 0 Fusion 84 0 cpt complete (E(a),cpt) curr inst inst NCList[] curr curr NCList [[]] rem_obs init curr init INT Fusion 84 INT 0 complete (E(b),cpt) curr inst NCList [] inst cpt curr curr [] NCList INT ~1 ~1 Figure 3.9 – Réseau correspondant à la chronique A B Afin de modéliser une séquence C1 C2 (comme la séquence A B dont le réseau est représenté Figure 3.9), nous fusionnons la place Success du réseau N(C1) avec la place Start du réseau N(C2). Le marquage initial de la place Start(C2) change donc : il prend le marquage initial de la place Success(C1), c’est-à-dire [ ]. Ainsi, le réseau ne commence pas à reconnaître C2 tant que ce n’est pas pour compléter une reconnaissance de C1. 
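Pour illustrer cette circulation des reconnaissances dans le cas de la séquence, l’esquisse C++ suivante (purement fonctionnelle, réutilisant le type NCList et la fonction complete esquissés après le Tableau 3.3) simule le comportement attendu de N(A B) sur un petit flux : la liste successA joue le rôle de la place Success(A) fusionnée avec Start(B), initialement vide ([ ]), de sorte que le mécanisme de B ne produit rien tant qu’aucune reconnaissance de A n’est disponible. Les noms startA, successA, successB et flux sont des variables hypothétiques de l’exemple ; ce code n’est pas le réseau de Petri lui-même.

#include <iostream>
#include <string>
#include <vector>

// Réutilise NCList et complete de l'esquisse donnée après le Tableau 3.3.
int main() {
    NCList startA = {{}};   // Start(A) : [[ ]], le sous-réseau de A est activé
    NCList successA;        // Success(A), fusionnée avec Start(B) : [ ] au départ
    NCList successB;        // Success(A B) : [ ]
    int cpt = 0;            // place Present (compteur d'évènements)
    std::vector<std::string> flux = {"a", "b", "a", "b"};  // flux ((a,1),(b,2),(a,3),(b,4))
    for (const std::string& e : flux) {
        if (e == "a") successA = complete("a", cpt, startA, successA);    // transition A
        if (e == "b") successB = complete("b", cpt, successA, successB);  // transition B
        ++cpt;                                                            // transition End
    }
    // successB contient alors {(a,1),(b,2)}, {(a,1),(b,4)} et {(a,3),(b,4)} :
    // B ne complète que des reconnaissances de A déjà présentes, ce qui reflète
    // la contrainte Tmax(r1) < Tmin(r2) de la séquence.
    std::cout << successB.size() << " reconnaissances de A B\n";
    return 0;
}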
Remarque 18 (marquages initiaux des places Start). Dans nos réseaux, il y a deux marquages initiaux possibles pour une place Start : — la liste qui contient la liste vide, [ [ ] ], indique que le réseau peut activer son mécanisme et compléter la reconnaissance partielle vide [ ] – on dit alors que le réseau est activé ; — la liste vide [ ] indique qu’il n’y a encore aucune reconnaissance partielle à compléter, et tant que le marquage n’est pas modifié, le mécanisme du réseau ne peut pas opérer et on dit qu’il n’est pas activé. Les marquages initiaux des places Start permettent donc de contrôler précisément l’activation des différentes parties d’un réseau, pour ne pas activer la reconnaissance d’un évènement tant que 4. Ceci signifie qu’il n’y a pas de place WiniIn dans les réseaux d’évènement simple. 5. Par la suite, nous effectuons des fusions de réseaux de Petri colorés et de MCPN. Du fait de la Remarque 17, nous ne prenons pas la peine de redéfinir un MCPN équivalent pour les quelques réseaux de Petri non modulaires mis en jeu. 84CHAPITRE 3. UN MODÈLE DE RECONNAISSANCE EN RÉSEAUX DE PETRI COLORÉS DIT « À UN SEUL JETON » ce n’est pas nécessaire. Par exemple, dans le cas de la séquence C1 C2, nous avons vu qu’il n’est pas nécessaire de commencer à reconnaître les évènements relatifs à C2 tant qu’une reconnaissance complète de C1 n’est pas disponible pour être complétée. Les différents marquages initiaux sont rendus possibles par la fonctionnalité de fusion évoquée dans la Section 3.1.3 qui détermine le marquage initial des places fusionnées. On pose : N0 (C) = Fusion({N0 (C1), N0 (C2)}, { (Present(C1), {Present(C1), Present(C2)}), (Success(C1), {Success(C1), Start(C2)}), (WiniIn(C1), {WiniIn(C1), WiniIn(C2), WiniOut(C2)}) }) On définit la structure du réseau : Present(C) = Present(C1) Start(C) = Start(C1) Success(C) = Success(C2) WiniOut(C) = WiniOut(C1) WiniIn(C) = WiniOut(C2) Puis nous fusionnons avec le compteur d’évènements comme dans l’équation (3.1). Sur la Figure 3.9, on remarque que les places Present(A), Present(B), et Present(CPT) ont la même annotation de fusion, à savoir Fusion_84. De même pour Success(A) et Start(B) qui appartiennent à un même ensemble de fusion annoté Fusion_85. Si C = C1 || C2 Afin de modéliser la disjonction (comme A || B dont le réseau est représenté Figure 3.10), les deux réseaux N0 (C1) et N0 (C2) fonctionnent en parallèle. Nous fusionnons donc les places Start des deux réseaux et les places Success des deux réseaux. 
Si C = C1 || C2

Afin de modéliser la disjonction (comme A || B dont le réseau est représenté Figure 3.10), les deux réseaux N0(C1) et N0(C2) fonctionnent en parallèle. Nous fusionnons donc les places Start des deux réseaux et les places Success des deux réseaux.

Figure 3.10 – Réseau correspondant à la chronique A || B

On pose :

N0(C) = Fusion({N0(C1), N0(C2)}, {
  (Start(C1), {Start(C1), Start(C2)}),
  (Present(C1), {Present(C1), Present(C2)}),
  (Success(C1), {Success(C1), Success(C2)}),
  (WiniOut(C1), {WiniOut(C1), WiniOut(C2)}),
  (WiniIn(C1), {WiniIn(C1), WiniIn(C2)})
})

On définit la structure du réseau :
Present(C) = Present(C1)
Start(C) = Start(C1)
Success(C) = Success(C1)
WiniOut(C) = WiniOut(C1)
WiniIn(C) = WiniIn(C1) si WiniIn(C1) ≠ ∅, WiniIn(C2) sinon

Puis nous fusionnons avec le compteur d’évènements comme dans l’équation (3.1).

Si C = C1&C2

Pour modéliser une conjonction C1&C2 (comme A&B dont le réseau est représenté Figure 3.11), les réseaux N0(C1) et N0(C2) fonctionnent aussi en parallèle, donc nous fusionnons les places Start des deux réseaux. Nous fusionnons les places Success des deux réseaux avec des places de l’opérateur OPAND afin de construire les reconnaissances de C1&C2.

Figure 3.11 – Réseau correspondant à la chronique A&B

On pose :

N0(C) = Fusion({N0(C1), N0(C2), OPAND}, {
  (Start(C1), {Start(C1), Start(C2)}),
  (Present(C1), {Present(C1), Present(C2)}),
  (Success(C1), {Success(C1), Operand1}),
  (Operand2, {Operand2, Success(C2)}),
  (WiniOut(C1), {WiniOut(C1), WiniOut(C2), Wini(AND)}),
  (WiniIn(C1), {WiniIn(C1), WiniIn(C2)})
})

On définit la structure du réseau :
Present(C) = Present(C1)
Start(C) = Start(C1)
Success(C) = Success(AND)
WiniOut(C) = WiniOut(C1)
WiniIn(C) = WiniIn(C1) si WiniIn(C1) ≠ ∅, WiniIn(C2) sinon

Puis nous fusionnons avec le compteur d’évènements comme dans l’équation (3.1).
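Sans entrer dans le détail de l’opérateur OPAND, dont le mécanisme est décrit plus loin, l’idée est qu’une reconnaissance de C1&C2 combine une reconnaissance de C1 et une reconnaissance de C2, quel que soit leur ordre d’arrivée. L’esquisse suivante (hypothétique ; les noms de fonctions ne sont pas ceux du manuscrit) illustre cette combinaison sur des listes de reconnaissances :

# Esquisse : combinaison des reconnaissances des deux opérandes d'une conjonction C1 & C2.
# Hypothèse simplificatrice : une reconnaissance est une liste d'évènements, et une
# reconnaissance de C1&C2 réunit une reconnaissance de C1 et une reconnaissance de C2.
def combiner_conjonction(reconnaissances_c1, reconnaissances_c2):
    """Retourne toutes les combinaisons (produit cartésien) des reconnaissances reçues."""
    return [r1 + r2 for r1 in reconnaissances_c1 for r2 in reconnaissances_c2]

# Avec la chronique A & B et, par exemple, le flux (a, 1), (b, 2), (a, 3) :
succes_A = [["Ea_1"], ["Ea_3"]]     # deux reconnaissances de A
succes_B = [["Eb_2"]]               # une reconnaissance de B
print(combiner_conjonction(succes_A, succes_B))
# [['Ea_1', 'Eb_2'], ['Ea_3', 'Eb_2']] : deux reconnaissances de A & B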
Si C = (C1) − [C2[

Afin de modéliser l’absence (C1) − [C2[ (comme (A B) − [C[ dont le réseau est représenté Figure 3.12), nous fusionnons les places Wini du réseau N(C1) et Success du réseau N(C2) avec des places de l’opérateur OPABS afin de rendre la chronique C2 « interdite ».

Figure 3.12 – Réseau correspondant à la chronique (A B) − [C[

On pose :

N0(C) = Fusion({N0(C1), N0(C2), OPABS}, {
  (Start(C1), {Start(C1), Start(C2), StartSub}),
  (Present(C1), {Present(C1), Present(C2), Present(ABS)}),
  (Success(C1), {Success(C1), Oper}),
  (Success(C2), {Success(C2), Abs}),
  (WiniIn(C1), {WiniIn(C1), WiniBe})
})

On définit la structure du réseau :
Present(C) = Present(C1)
Start(C) = Start(ABS)
Success(C) = Success(ABS)
WiniOut(C) = WiniAf
WiniIn(C) = ∅

Puis nous fusionnons avec le compteur d’évènements comme dans l’équation (3.1), ce qui complète la formalisation de la construction de nos réseaux.

3.3 Formalisation et description de l’exécution des réseaux

Dans cette section, nous présentons le fonctionnement des réseaux que nous venons de définir en prenant appui sur des exemples d’exécution. Nous définissons ensuite une stratégie formelle pour le tirage des transitions car toutes les transitions de nos réseaux sont en permanence tirables, mais toute séquence de transitions tirée ne mène pas au marquage recherché, à savoir celui où l’on peut correctement lire les ensembles de reconnaissance.

Décrivons maintenant le fonctionnement de ces réseaux. Pour chacune des constructions précédentes, nous présentons l’ensemble de ses places, puis les effets de ses transitions. Nous expliquons ensuite la stratégie de tirage à adopter pour obtenir les ensembles de reconnaissance corrects. Cette stratégie de tirage est formalisée dans la Section 3.3.6.

3.3.1 Reconnaissance d’un évènement simple

Étudions le comportement d’un réseau reconnaissant un évènement simple sur l’exemple de la Figure 3.13 qui correspond à la chronique A ∈ N. Il est composé de deux sous-réseaux : le réseau de compteur d’évènements de la Figure 3.5, et le réseau relatif à la transition A. Notons que l’annotation de fusion 1_Num indique que les deux places Present sont fusionnées.

Places

Les places Present des deux sous-réseaux sont fusionnées, et ont donc le même marquage. Elles sont de type INT et contiennent un entier qui correspond à la valeur du compteur d’évènements.
Figure 3.13 – Réseau correspondant à la chronique A

Les places Start et Success sont de type NCList, c’est-à-dire qu’elles contiennent une liste d’instances de chroniques. La place Start a un rôle dans la composition des réseaux pour des chroniques complexes. Ici, son marquage est constant, égal à son marquage initial [ [ ] ]. La place Success contient la liste des reconnaissances de la chronique A. La partie du réseau composée de la place Wini et de la transition Forget est utilisée dans le cas de l’absence. Son fonctionnement est donc explicité dans la section de la reconnaissance de l’absence (3.3.5).

Transitions

Lorsque la transition End est tirée, l’entier de la place Present est incrémenté de 1. La valeur du compteur est apposée aux évènements pour les distinguer entre eux, comme il est détaillé par la suite. Il faut donc augmenter le compteur à chaque évènement du flux. Au tirage de la transition End, le marquage du réseau évolue comme suit :

$$\begin{array}{l}\text{Start}\\ \text{Success}\\ \text{Present}\end{array}\begin{pmatrix} curr \\ inst \\ cpt \end{pmatrix}\xrightarrow{\ End\ }\begin{pmatrix} curr \\ inst \\ cpt+1 \end{pmatrix}$$

Lorsque la transition A est tirée, le contenu de la place Success est modifié : la liste des reconnaissances déjà présente dans la place Success est complétée par une nouvelle reconnaissance, notée (E(a), cpt + 1) (que l’on notera aussi E_a^{cpt+1}) où cpt est la valeur du compteur d’évènements. Ci-dessous, nous avons simplifié la définition de la fonction complete en intégrant le fait que, ici, curr = [ [ ] ]. La fonction ANR (Add New Recognition) ajoute la nouvelle reconnaissance [E_a^{cpt+1}] à la liste inst.

$$\begin{array}{l}\text{Start}\\ \text{Success}\\ \text{Present}\end{array}\begin{pmatrix} curr \\ inst \\ cpt \end{pmatrix}\xrightarrow{\ A\ }\begin{pmatrix} curr \\ \mathrm{ANR}(inst,\,[\,[E_a^{cpt+1}]\,]) \\ cpt \end{pmatrix}$$

Stratégie de tirage

Considérons un flux ϕ. Ce sont les évènements du flux ϕ qui déterminent la suite de transitions à tirer. Si un évènement de nom différent de A a lieu, seule la transition End est tirée : seul le compteur d’évènements est incrémenté. Si un évènement de nom A a lieu, la transition A est tirée, de façon à ajouter l’évènement à la liste des reconnaissances, puis la transition End est tirée pour incrémenter le compteur.

Exemple 8. Soit ϕ = ((b, 1), (a, 2), (d, 3), (a, 4)) avec a, b, d ∈ N où nous souhaitons reconnaître la chronique A. La liste des transitions à tirer correspondant au flux ϕ est [End, A, End, End, A, End]. Le marquage des places du réseau évolue comme suit :

$$\begin{array}{l}\text{Start}\\ \text{Success}\\ \text{Present}\end{array}\begin{pmatrix} [\,[\,]\,] \\ [\,] \\ 0 \end{pmatrix}\xrightarrow{End}\begin{pmatrix} [\,[\,]\,] \\ [\,] \\ 1 \end{pmatrix}\xrightarrow{A}\begin{pmatrix} [\,[\,]\,] \\ [\,[E_a^2]\,] \\ 1 \end{pmatrix}\xrightarrow{End}\begin{pmatrix} [\,[\,]\,] \\ [\,[E_a^2]\,] \\ 2 \end{pmatrix}\xrightarrow{End}\begin{pmatrix} [\,[\,]\,] \\ [\,[E_a^2]\,] \\ 3 \end{pmatrix}\xrightarrow{A}\begin{pmatrix} [\,[\,]\,] \\ [\,[E_a^2],[E_a^4]\,] \\ 3 \end{pmatrix}\xrightarrow{End}\begin{pmatrix} [\,[\,]\,] \\ [\,[E_a^2],[E_a^4]\,] \\ 4 \end{pmatrix}$$

On obtient bien deux reconnaissances, [E_a^2] et [E_a^4], dues à (a, 2) et (a, 4).
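À titre d’illustration, l’Exemple 8 peut être rejoué en dehors du formalisme des réseaux de Petri par l’esquisse Python suivante ; la fonction anr et la boucle de tirage sont des transcriptions hypothétiques des inscriptions du réseau, et non le code du manuscrit.

# Esquisse : simulation « hors réseau de Petri » du mécanisme de l'Exemple 8.
def anr(inst, nouvelles):
    """ANR (Add New Recognition) : ajoute les nouvelles reconnaissances à la liste inst."""
    return inst + nouvelles

def reconnaitre_evenement_simple(nom, flux):
    success, cpt = [], 0                      # marquages initiaux de Success et Present
    for (nom_evt, _date) in flux:
        if nom_evt == nom:                    # tirage de la transition A
            success = anr(success, [[f"E{nom_evt}_{cpt + 1}"]])
        cpt += 1                              # tirage de la transition End
    return success, cpt

flux = [("b", 1), ("a", 2), ("d", 3), ("a", 4)]
print(reconnaitre_evenement_simple("a", flux))
# ([['Ea_2'], ['Ea_4']], 4) : deux reconnaissances, dues à (a, 2) et (a, 4)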
3.3.2 Reconnaissance d’une séquence

Nous allons étudier le réseau de Petri de la Figure 3.14 qui correspond à la chronique A B, où A, B ∈ N, pour examiner le déroulement d’une séquence. Il est composé de trois sous-réseaux : le réseau de compteur d’évènements, le réseau relatif à la transition A et le réseau relatif à la transition B.

Places

Comme précédemment, les places Present des trois sous-réseaux sont fusionnées, de type INT, et contiennent un entier correspondant au compteur d’évènements. La place Start du réseau A a un marquage constant, égal à son marquage initial [ [ ] ]. La place Success du réseau A est fusionnée avec la place Start du réseau B. Les deux sous-réseaux A et B fonctionnent donc en série. La place Start du réseau B n’a donc plus un marquage constant [ [ ] ]. Son marquage initial est la liste vide [ ] qui est le marquage initial de Success(A). Comme évoqué dans la Remarque 18, ceci implique que le mécanisme du réseau relatif à B ne peut pas être effectif tant que le marquage initial n’a pas été modifié car il n’y a pas de reconnaissance partielle à compléter. Au fur et à mesure de l’exécution du réseau, Start(B) pourra contenir une liste contenant une liste non vide : il s’agira des reconnaissances partielles de A B, c’est-à-dire des reconnaissances de A, qu’il faut compléter par des reconnaissances de B.

Figure 3.14 – Réseau correspondant à la chronique A B

La place Success du réseau B contient les reconnaissances de la chronique A B. Comme précédemment, on ignore pour le moment les parties des réseaux A et B composées de la place Wini et de la transition Forget et relatives à l’absence.

Transitions

Lorsque la transition End est tirée, le compteur d’évènements est incrémenté de 1. Lorsque la transition A est tirée, une nouvelle reconnaissance de A est ajoutée à la liste contenue dans la place Success, comme dans le réseau précédent. Lorsque la transition B est tirée, la reconnaissance de B complète les reconnaissances de A qui se trouvent dans la place Start(B) (à l’aide de la fonction CPR – Complete Partial Recognition) pour former des reconnaissances de A B qui sont ajoutées à la liste de la place Success(B). En effet, la fonction complete prend en argument la variable currB qui correspond au contenu de la place Start et complète les reconnaissances partielles qui s’y trouvent. C’est pour cela que le marquage initial de Success(A) et Start(B) est [ ] et non [ [ ] ] : la fonction complete va compléter la liste vide, car il n’y a pas encore de reconnaissance de A.

$$\begin{array}{l}\text{Start(A)}\\ \text{Start(B)}\\ \text{Success(B)}\\ \text{Present}\end{array}\begin{pmatrix} curr_A \\ curr_B \\ inst_B \\ cpt \end{pmatrix}\xrightarrow{\ B\ }\begin{pmatrix} curr_A \\ curr_B \\ \mathrm{ANR}(inst_B,\,\mathrm{CPR}([\,[E_b^{cpt+1}]\,],\,curr_B)) \\ cpt \end{pmatrix}$$

Stratégie de tirage

Si un évènement de nom différent de A et de B a lieu, seule la transition End est tirée, et donc seul le compteur d’évènements est incrémenté. Si un évènement de nom A a lieu, la transition A est tirée de façon à ajouter l’évènement à la liste des reconnaissances partielles, puis la transition End est tirée. De même, si un évènement de nom B a lieu, la transition B est tirée de façon à compléter les reconnaissances partielles et obtenir des reconnaissances de A B, puis la transition End est tirée.
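Cette stratégie peut se transcrire en une esquisse Python (hypothétique : les noms cpr, date et reconnaitre_sequence sont des choix d’illustration, non issus du manuscrit). La fonction cpr joue ici le rôle de complete/CPR et ne complète une reconnaissance partielle que si son instant est inférieur ou égal à la valeur courante du compteur, contrainte détaillée plus bas pour le cas particulier A A. L’Exemple 9 qui suit déroule le même calcul directement sur les marquages du réseau.

# Esquisse : simulation de la reconnaissance d'une séquence A B (code hypothétique).
def date(reconnaissance):
    """Instant d'une reconnaissance : celui de son dernier évènement (0 pour la reconnaissance vide)."""
    return int(reconnaissance[-1].split("_")[1]) if reconnaissance else 0

def cpr(partielles, evt, cpt):
    """CPR : complète chaque reconnaissance partielle d'instant <= cpt par l'évènement evt."""
    return [p + [evt] for p in partielles if date(p) <= cpt]

def reconnaitre_sequence(nom_a, nom_b, flux):
    start_a  = [[]]           # Start(A), marquage constant [ [ ] ]
    succes_a = []             # Success(A) = Start(B), marquage initial [ ]
    succes_b = []             # Success(B) : reconnaissances de A B
    cpt = 0                   # Present : compteur d'évènements
    for (nom_evt, _date_flux) in flux:
        evt = f"E{nom_evt}_{cpt + 1}"
        if nom_evt == nom_a:                      # transition A (premier sous-réseau)
            succes_a = succes_a + cpr(start_a, evt, cpt)
        if nom_evt == nom_b:                      # transition B (second sous-réseau)
            succes_b = succes_b + cpr(succes_a, evt, cpt)
        cpt += 1                                  # transition End
    return succes_b

flux = [("b", 1), ("a", 2), ("d", 3), ("a", 4), ("b", 5)]
print(reconnaitre_sequence("a", "b", flux))
# [['Ea_2', 'Eb_5'], ['Ea_4', 'Eb_5']] : les deux reconnaissances de A B attendues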
Exemple 9. Soit ϕ = ((b, 1), (a, 2), (d, 3), (a, 4), (b, 5)) avec a, b, d ∈ N où l’on souhaite reconnaître la chronique A B. La liste des transitions à tirer correspondant au flux ϕ est [B, End, A, End, End, A, End, B, End]. Le marquage des places du réseau évolue comme suit (on rappelle que les places Success(A) et Start(B) sont fusionnées et ont donc le même marquage) :

$$\begin{array}{l}\text{Start(A)}\\ \text{Success(A)}\\ \text{Success(B)}\\ \text{Present}\end{array}\begin{pmatrix} [\,[\,]\,] \\ [\,] \\ [\,] \\ 0 \end{pmatrix}\xrightarrow{B}\begin{pmatrix} [\,[\,]\,] \\ [\,] \\ [\,] \\ 0 \end{pmatrix}\xrightarrow{End}\begin{pmatrix} [\,[\,]\,] \\ [\,] \\ [\,] \\ 1 \end{pmatrix}\xrightarrow{A}\begin{pmatrix} [\,[\,]\,] \\ [\,[E_a^2]\,] \\ [\,] \\ 1 \end{pmatrix}\xrightarrow{End}\begin{pmatrix} [\,[\,]\,] \\ [\,[E_a^2]\,] \\ [\,] \\ 2 \end{pmatrix}\xrightarrow{End}\begin{pmatrix} [\,[\,]\,] \\ [\,[E_a^2]\,] \\ [\,] \\ 3 \end{pmatrix}$$

$$\xrightarrow{A}\begin{pmatrix} [\,[\,]\,] \\ [\,[E_a^2],[E_a^4]\,] \\ [\,] \\ 3 \end{pmatrix}\xrightarrow{End}\begin{pmatrix} [\,[\,]\,] \\ [\,[E_a^2],[E_a^4]\,] \\ [\,] \\ 4 \end{pmatrix}\xrightarrow{B}\begin{pmatrix} [\,[\,]\,] \\ [\,[E_a^2],[E_a^4]\,] \\ [\,[E_a^2, E_b^5],[E_a^4, E_b^5]\,] \\ 4 \end{pmatrix}\xrightarrow{End}\begin{pmatrix} [\,[\,]\,] \\ [\,[E_a^2],[E_a^4]\,] \\ [\,[E_a^2, E_b^5],[E_a^4, E_b^5]\,] \\ 5 \end{pmatrix}$$

Remarquons que le premier tirage de la transition B ne modifie pas le marquage du réseau. Ceci est dû au fait que le marquage de la place Start(B) est [ ] et qu’il n’y a donc aucune liste, même vide, à compléter. Comme il s’agit d’une séquence, on ne commence pas à reconnaître B tant que l’on n’a pas reconnu A.

Le cas particulier A A

Il est important de vérifier que le cas particulier A A ne pose pas de problème dans la gestion des différentes combinaisons pour les reconnaissances et qu’il faut bien deux occurrences distinctes de A pour reconnaître la séquence. Ceci est garanti par la fonction complete qui ne complète que les reconnaissances datant d’un instant inférieur ou égal à l’instant courant cpt du compteur d’évènements. Étudions le mécanisme sur le flux ((a, 1), (a, 2)). On dénomme A1 et A2 les deux réseaux fusionnés pour former N(A A). À la suite du tirage de la transition A1 pour le traitement de l’évènement (a, 1), le marquage de Success(A1) (qui est aussi celui de Start(A2)) est [ [E_a^1] ]. Lorsque l’on tire A2, la fonction complete examine l’instant de chacune des reconnaissances à compléter. Il n’y a pour le moment que [E_a^1] à compléter et son instant de reconnaissance est 1. Le compteur d’évènements est encore à 0 donc la contrainte n’est pas vérifiée (¬(0 ≥ 1)) et [E_a^1] ne peut être complétée par (a, 1). On tire alors la transition End et le compteur passe à 1. [E_a^1] peut donc maintenant être complétée. Pour traiter (a, 2), on tire A1, ce qui modifie le marquage de Success(A1) en [ [E_a^1], [E_a^2] ], et on tire A2, ce qui ne peut compléter que [E_a^1]. On obtient alors [ [E_a^1, E_a^2] ] comme marquage de Success(A2) ; on a bien une unique reconnaissance et le cas particulier A A est correctement traité.
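En réutilisant l’esquisse Python reconnaitre_sequence donnée plus haut pour la séquence (code hypothétique, non issu du manuscrit), on retrouve ce comportement :

# Réutilisation de l'esquisse précédente sur le cas particulier A A :
print(reconnaitre_sequence("a", "a", [("a", 1), ("a", 2)]))
# [['Ea_1', 'Ea_2']] : une seule reconnaissance de A A, formée de (a, 1) puis (a, 2)

La contrainte d’instant portée par cpr y joue exactement le rôle décrit ci-dessus : l’évènement (a, 1) ne peut pas compléter la reconnaissance partielle qu’il vient lui-même de créer.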
3.3.3 Reconnaissance d’une disjonction

Figure 3.15 – Réseau correspondant à la chronique A || B

Étudions le comportement d’une disjonction à travers le réseau de Petri de la Figure 3.15 qui correspond à la chronique A || B où A, B ∈ N. Il est composé de trois sous-réseaux : le réseau de compteur d’évènements, le réseau relatif à la transition A et le réseau relatif à la transition B.

Places

Comme précédemment, les places Present des trois sous-réseaux sont fusionnées, de type INT, et contiennent un entier correspondant au compteur d’évènements. Les places Start des réseaux A et B sont fusionnées et ont ici un marquage constant, égal à [ [ ] ]. Les deux sous-réseaux fonctionnent donc en parallèle. Les places Success des réseaux A et B sont aussi fusionnées. Elles contiennent la liste des reconnaissances de A || B. Comme précédemment, on ignore pour le moment les parties des réseaux A et B composées de la place Wini et de la transition Forget.

Transitions

Lorsque la transition End est tirée, le compteur d’évènements est incrémenté de 1. Lorsque la transition A est tirée, une nouvelle reconnaissance de A est ajoutée à la liste contenue dans la place Success. Une reconnaissance de A est une reconnaissance de A || B. De même, lorsque la transition B est tirée, une nouvelle reconnaissance de B est ajoutée à la liste contenue dans la place Success. Ainsi, la place Success contient toutes les reconnaissances de A et toutes celles de B, ce qui correspond à toutes les reconnaissances de A || B.

Stratégie de tirage

Si un évènement de nom différent de A et de B a lieu, seule la transition End est tirée, et donc seul le compteur d’évènements est incrémenté. Si un évènement de nom A (respectivement B) a lieu, la transition A (respectivement B) est tirée de façon à ajouter l’évènement à la liste des reconnaissances de la place Success, puis la transition End est tirée.

Exemple 10. Soit ϕ = ((b, 1), (a, 2), (d, 3), (a, 4)) avec a, b, d ∈ N où nous souhaitons reconnaître A || B. La liste des transitions à tirer correspondant au flux ϕ est [B, End, A, End, End, A, End]. Le marquage des places du réseau évolue comme suit : Start Success