Slide 8/35
Simulation
Different types of simulations:
Fluid flow,
Structural,
Thermal, …
Simulations can be very complex
Execution can vary greatly (milliseconds to weeks)
Making virtual, real
Ref: Autodesk Fusion 360 simulation options
Ref: Fire simulation in Autodesk Maya
Slide 10/35
Challenges
Offer a rich set of multi-physics solvers
Fast ones for an interactive experience, accurate ones to refine selected designs
Assembling existing solvers and developing new ones
Aggregate the necessary data consumed by these solvers
materials, machines, processes, ...
Composed a schema-based data unification service
Guide the user to compose the initial problem and identify the desired design
Composed a validation and recommendation service
Structure content for optimal storage and performance
Total/partial reuse with history branching
Working on application specific data & metadata template system
Create a compatible cloud platform
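The schema-based data unification mentioned above can be sketched minimally: per-source content maps rename source-specific fields onto one unified schema. Everything here (field names, source names, the maps themselves) is invented for illustration, not the actual service's schema.

```python
# Minimal sketch of schema-based unification: each source gets a content
# map that renames its fields to the unified schema. Illustrative only.

CONTENT_MAPS = {
    "desktop_app": {"matName": "name", "E_mod": "youngs_modulus"},
    "cloud_app":   {"material": "name", "stiffness": "youngs_modulus"},
}

def unify(record: dict, source: str) -> dict:
    """Map a source record onto the unified schema, dropping unknown fields."""
    field_map = CONTENT_MAPS[source]
    return {field_map[k]: v for k, v in record.items() if k in field_map}

unified = unify({"matName": "AlSi10Mg", "E_mod": 71e9}, "desktop_app")
```

Consumers (solvers, recommendation services) then read only the unified field names, regardless of which application produced the data.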
Slide 18/35
White paper
“Unified Access to Heterogeneous Data Sources Using an Ontology”
Semantic Technology: 8th Joint International Conference,
November 2018, Awaji, Japan
DOI: 10.1007/978-3-030-04284-4_8
Slide 19/35
Considerations
On-demand vs Permanent
Dispersed content vs Aggregated content
Lossless vs Lossy
Time consuming vs Fast
Next …
Word embeddings targeting specific engineering domains
to automate/simplify schema matching
Over service usage
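A toy illustration of how word embeddings could drive schema matching: candidate field names map to vectors, and the closest candidate wins by cosine similarity. The three-dimensional vectors below are invented for the example; a real system would use embeddings trained on engineering corpora.

```python
import math

# Invented toy vectors; real embeddings would come from a trained model.
VECTORS = {
    "youngs_modulus": (0.9, 0.1, 0.0),
    "stiffness":      (0.8, 0.2, 0.1),
    "density":        (0.0, 0.1, 0.9),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def best_match(field, candidates):
    """Pick the unified-schema field closest to the source field."""
    return max(candidates, key=lambda c: cosine(VECTORS[field], VECTORS[c]))
```

Here `best_match("stiffness", ...)` would pair `stiffness` with `youngs_modulus`, automating the mapping step that is otherwise authored by hand.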
Slide 21/35
Assist during content creation
With recommendations
[Diagram: the creation process emits JSON (or chunks of JSON) to the Validator, which generates a validation result; the service reports back with recommendations until validation is successful.]
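A minimal sketch of that loop, assuming a tiny required-field schema (the field names are illustrative): the validator checks a JSON chunk, reports what is missing, and recommends fixes.

```python
# Hypothetical required-field schema; the real service's schemas are richer.
SCHEMA = {"required": ["name", "material", "load_case"]}

def validate(chunk: dict) -> dict:
    """Report missing fields and recommend corrections for a JSON chunk."""
    missing = [f for f in SCHEMA["required"] if f not in chunk]
    return {
        "valid": not missing,
        "report": [f"missing field: {f}" for f in missing],
        "recommendations": [f"add '{f}' before submitting" for f in missing],
    }
```

During creation the client would loop: submit a chunk, apply the recommendations, and resubmit until `valid` is true.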
Slide 22/35
Data transfer
[Diagram: secure transfer over the Internet to a Cloud server, backed by file storage and a database, feeding a compute workflow.]
Secure communication
Secure message content
Assess data syntax (schema)
Validate data
Consolidate data
Propagate data
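One way to read the server-side steps above is as a linear pipeline. This sketch chains assumed stage functions over a payload dict; the stage bodies are stand-ins, and the payload shape is invented for illustration.

```python
def assess_syntax(payload):
    # Schema-level check: the data section must be a JSON object.
    if not isinstance(payload.get("data"), dict):
        raise ValueError("payload['data'] must be an object")
    return payload

def validate(payload):
    payload["validated"] = True          # stand-in for real validation
    return payload

def consolidate(payload):
    # Merge defaults with submitted data (submitted values win).
    payload["data"] = {**payload.get("defaults", {}), **payload["data"]}
    return payload

def propagate(payload):
    payload["stored"] = True             # stand-in for file storage / DB write
    return payload

def run_pipeline(payload):
    for stage in (assess_syntax, validate, consolidate, propagate):
        payload = stage(payload)
    return payload
```

Failing early in `assess_syntax` is what prevents lengthy downstream computations on malformed input.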
Slide 24/35
Knowledge structure

Domain ontologies:
• Domain dependent
• Portable, reusable
• Object oriented (versioning)
e.g. geometry (mesh, polygon, vertices), materials (mechanical, thermal properties)

Application model:
• Application specific
• Map to external resources
• Versioned
e.g. Autodesk AutoCAD, Moldflow, Revit, Maya, …

Domain ontology classes are used to compose the application model.
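In object-oriented terms, the composition might look like this sketch, where portable domain classes are assembled into an application-specific, versioned model. All class and field names are invented for illustration, not Autodesk's actual model.

```python
class Mesh:
    """Domain ontology: geometry (vertices only, for brevity)."""
    def __init__(self, vertices):
        self.vertices = vertices

class Material:
    """Domain ontology: materials (one mechanical property, for brevity)."""
    def __init__(self, youngs_modulus):
        self.youngs_modulus = youngs_modulus

class SimulationStudy:
    """Application model: composes reusable domain classes, versioned."""
    def __init__(self, mesh, material, version=1):
        self.mesh = mesh
        self.material = material
        self.version = version

study = SimulationStudy(Mesh([(0, 0, 0), (1, 0, 0), (0, 1, 0)]),
                        Material(71e9))
```

The domain classes stay reusable across applications; only the composing class is application specific.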
Slide 25/35
White paper
“Validation and Recommendation Engine from Service Architecture and Ontology”
11th International Joint Conference on Knowledge Discovery, Knowledge
Engineering and Knowledge Management
September 2019, Vienna, Austria
DOI: 10.5220/0008070602660273
Slide 26/35
Next …
Develop higher intelligence with big data & ML
[Diagram: users connect over the Internet to validator servers in the Cloud; a subject-matter expert initializes and updates a knowledge repository, while big data & ML components monitor and collect from the validators.]
Slide 29/35
Data transformation

Desktop (all in one/several files):
holds all data
potentially heavy
favors redundancies
versioning not included
user organized
portable
protected by file copies

Cloud (DB / object storage, structured metadata pointing to data):
lightweight
minimizes redundant data
versioning integrated
automatically organized
centralized content
global availability
protected by block replicas
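The cloud column above boils down to: light metadata records in a database point at heavy payloads in object storage. A sketch with in-memory stand-ins; all keys and fields are illustrative.

```python
OBJECT_STORE = {}   # stand-in for blob/object storage
METADATA_DB = []    # stand-in for the structured-metadata database

def put_study(study_id, payload, version):
    """Store the heavy payload, record light metadata pointing to it."""
    key = f"studies/{study_id}/v{version}"
    OBJECT_STORE[key] = payload
    METADATA_DB.append({
        "id": study_id,
        "version": version,      # versioning integrated
        "object_key": key,       # metadata points to data
        "size": len(payload),
    })
    return key
```

Queries (listing, history, branching) then run over the small metadata records; the heavy objects are fetched only when actually needed.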
Slide 30/35
Supporting data content & lifecycle
Thousands of studies per project (up to TB of data)
Large histories and extended branching
Linked information to define & monitor an application data ecosystem
Listing & location of data
Embedded & associated metadata
Definition of this data lifecycle
For Generative Design
Slide 32/35
Challenges
Using Semantic Web technologies:
Composed a schema-based data unification service
Composed a validation and recommendation service
Working on enhancing the management of data and metadata
For Generative Design
Slide 33/35
Knowledge graphs
Source of data cohesion
Map complex concepts
Useful for existing or new applications
Natural bridge between Human and Machine
Integration, application specific
User Interface (UI) / API, critical to abstract inherent complexity
To support Cloud operations
Slide 34/35
Semantic Technologies
Good first layer of intelligence using Description Logic (DL) and reasoners
Excellent complement to Machine Learning (ML)
As piece of AI
What do we care about?
Dealing with the complexity.
A means to validate the data, in all possible aspects.
Prevent the launch of lengthy computations when the data is incomplete or incorrect.
A simple way to create the knowledge needed to validate the data, in a modular form so that it can be reused and recomposed for different applications.
Report any missing or invalid content, with recommendations for quick identification and correction.
Provide justification as to why validation fails.
Why did we do this work, and how does it differ from what exists?
Rule checking is best known from firewalls securing communications.
Traditional rule systems are usually fairly linear (one line per rule), follow a specific style (XML), have limited validation capabilities, and produce lengthy, hard-to-read rule listings.
We tried to build a capable yet easy-to-use experience, both for the users and for the subject-matter experts that create and maintain the validation knowledge base.
Initial problem targeted: lightweighting.
But the system can also achieve a practical, performant, and good-looking geometry.
The system does so through intelligent interactions, a back and forth with the user, to identify the right design for the desired purpose.
Lightweight data unification service
Convert data from multiple source types
Schema based
Works using content maps connected by domain ontologies
Easily deployable with dedicated user interface for mapping creation
Customers are attached to local desktop deployments on their own machines.
Migration happens in stages:
Migrate data to facilitate live and regular updates
Create bridges to Cloud compute capabilities through web-based equivalents
Making the applications talk the same language would require massive refactoring and a complete rewrite.
Hence the idea of introducing an ontology service as a middleman, beneficial to the experience and advantageous.
Unification is really plan B, as it is time consuming; plan A is to have all related data in the same media if possible.
This does not mean all data sits in one uber place, but rather an intelligent domain separation with an effort to avoid duplications that would require unification.
Lightweight service for validation and recommendation
JSON based
Rich set of validation techniques
with Description Logic and Code Logic
Well-structured and easy to use validation knowledge base
by combining Domain ontologies and Application models
Easily deployable and scalable
Stream reasoning strategies
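The mix of Description Logic and code logic can be pictured as declarative constraints combined with arbitrary predicate checks. This sketch is illustrative only; the real engine reasons over ontologies, not Python dicts.

```python
# Declarative, DL-style constraint: required properties per class.
DECLARATIVE = {"Material": {"must_have": ["density", "youngs_modulus"]}}

# Code logic: arbitrary checks that return True or an error message.
CODE_LOGic_RULES = None  # placeholder removed below; see CODE_LOGIC
CODE_LOGIC = {
    "Material": [lambda m: m.get("density", 0) > 0 or "density must be positive"],
}

def check(kind, instance):
    """Combine declarative and code-logic checks into one error report."""
    errors = [f"missing {f}" for f in DECLARATIVE[kind]["must_have"]
              if f not in instance]
    for rule in CODE_LOGIC.get(kind, []):
        result = rule(instance)
        if result is not True:
            errors.append(result)
    return errors
```

Returning every error at once, rather than failing on the first, is what enables the "report & recommend" experience described earlier.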
Roughly
Whether it is for supporting existing applications or creating new intelligent ones, Semantic Web has an incredibly important place, directly inside core applications or within derivative services, to structure, coordinate, and reason over data. In the future, it will certainly be an essential component to build intelligence, along with algorithms in Machine Learning and technologies such as Quantum computing.
Connected to other digital storage -> Capture human knowledge and expertise
As we have neurons in our brains, the Cloud has service architectures. The answer is a mix, but both machines and humans have limits.
Personal opinion, mostly based on how manageable the data is (structure, lifecycle, and monitoring) and who is supposed to manage it (human, machine).
Machine = allows larger blobs, but beware: without control they can become overly time consuming.
Human = allows only moderate blobs, because humans are error prone, but they can be protected by validation rules and human reviews.
Ultimately, data, like code, beyond a certain size and complexity needs restructuring, whether managed by machines or humans.