Geobyte was AAPG's computer magazine (1986-1992). "Operation Database" (vol. 5 #1, 1990; https://bit.ly/2ruDJyP) inspired this manuscript, which was never published. It was followed 14 years later by another manuscript submitted to GSA Memoir (Topical Session 85) in 2004 (https://bit.ly/1qKYeBC)
Zolnai geobyte manuscript
NEW WAVE OR NEW AGE? THE QUEST FOR TRUE 3D DATABASES
Andrew Zolnai, 424 Memorial Dr NW, Calgary AB, T2N 3C3
Tel: (403) 283 6193, Fax: (403) 270 2158
(Submitted to Fred Wagner, Editor, GEOBYTE, Dec 3/90)
VISUALISATION SOFTWARE AND GIS ABOUND IN PETROLEUM
APPLICATIONS, BUT TRUE 3D DATABASE ENGINES DO NOT. WHILE
POINT, LINE AND SURFACE DATA CAN NOW BE TREATED IN MANY WAYS,
DATABASES HAVE TO BE RESTRUCTURED TO ALLOW MEANINGFUL 3D
QUERIES. SPATIALLY COHERENT SUBSETS OF DATA TABLES HAVE TO BE
SEPARATED, QUERIED AND REJOINED INTO NEW SPATIALLY COHERENT
SETS FOR DISPLAY IN CLASSICAL VISUALISATION TOOLS.
Introduction
Geological computer applications span from spreadsheet
utilities on PCs to 3D rendering on supercomputers: "Operation
Database" (Vol 4 # 2-4) and in-depth focuses (Vol 5 # 1, 5)
addressed the state of the art. The Symposium on 3D Computer
Graphics in Modelling Geologic Structures and Simulation of
Geologic Processes held in Freiburg, Germany last October (Vol 5 #
6) brought together the tenors in the field with a healthy legion
of computer geologists (150 from N America and Europe, up from a
planned 50 Europeans): They asked burning questions about "how
these expensive toys are going to promote geologic investigation",
to quote John Tipper. The symposium was a showcase of the best
modelling and rendering tools, some of which do not know or care
where the data came from or what they are supposed to signify! Data
format standards have been addressed by a number of consortia (Vol 5
# 2,5) to try and streamline the access to information. Caught in
the middle between input and rendering are true geologic database
engines.
The Present
While the geographic information system (GIS) issues showcase
an array of excellent tools, none appear to be true 3D databases:
This sampling of petroleum-oriented GIS offerings originated in
other fields, such as remote sensing and facilities management,
excepting perhaps one vendor's geologic information system. The very
nature of, say, Landsat imagery, pipelines and land ownership lent
itself to 2D treatment, in fact 2½D if one considers the stacking
of layers. Contour mapping, including 3D visualisations with 2D
data draped over, offers excellent renderings of spatial data,
layer-by-layer. With the advent of exclusive rights (ownership over
only a certain stratigraphic interval), horizontal drilling, better
cross-sectioning tools and the merger of geological, geophysical
and engineering software, true 3D databases become vital. Packages
abound in their ability to display ribbon-sections among deviated
wells and solids rendering. Beyond that, true 3D database engines
will allow data to be queried volumetrically, not just in horizontal
or vertical slices: The availability of hardware to handle massive
data manipulations and visualisation dwarfs that of software to
match. This stems in part from the difficulty in harnessing
structured query languages (SQL) to geographically constrained
data.
One implementation is to tie attributes (pieces of data) to
map data points. Attributes are stored in relational database
management system (RDBMS) tables that can be queried and sorted;
points, lines or polygons satisfying given conditions can then be
brought out on a map in a graphic or computer-aided design (CAD)
system. New 3D incarnations of CAD enable point and line attributes
to be queried in 3D. Polygon processing likewise allows area
attributes to be queried in 2D (2½D if one considers stacking
layers). Surface modelling finally allows queries of complex 3D
entities, even solids modelling; only one attribute is usually
attached, however: Horizons can be picked, volumes delineated with
faults or property boundaries, and parameters assigned such as
porosity or permeability; it is vastly more difficult, however, to
interactively query a database and automatically display the net
pay distribution of a target horizon with a minimum percentage
interest by two joint operators. So if one attribute can be had in
3D, or multiple attributes in less than 3D, WHERE IS THE HANG-UP?
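As a minimal sketch of the attribute-query implementation above, consider a toy relational schema in SQLite; the table layout, well names, horizon and operators here are all hypothetical. The WHERE and HAVING clauses express "net pay over a cutoff, held jointly by two named operators with a minimum combined interest", and the XY coordinates travel with each row, ready to hand to a mapping package:

```python
import sqlite3

# Hypothetical schema: map points (wells), horizon picks, ownership interests.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE wells (well_id TEXT PRIMARY KEY, x REAL, y REAL);
CREATE TABLE picks (well_id TEXT, horizon TEXT, net_pay REAL);
CREATE TABLE interests (well_id TEXT, operator TEXT, pct REAL);
""")
con.executemany("INSERT INTO wells VALUES (?,?,?)",
                [("W1", 100.0, 200.0), ("W2", 150.0, 250.0), ("W3", 300.0, 120.0)])
con.executemany("INSERT INTO picks VALUES (?,?,?)",
                [("W1", "Viking", 12.5), ("W2", "Viking", 3.0), ("W3", "Viking", 9.0)])
con.executemany("INSERT INTO interests VALUES (?,?,?)",
                [("W1", "OpA", 40.0), ("W1", "OpB", 60.0),
                 ("W2", "OpA", 100.0),
                 ("W3", "OpA", 30.0), ("W3", "OpB", 20.0), ("W3", "OpC", 50.0)])

# Wells on the target horizon with net pay over 5, where both named
# operators hold interest and their combined share is at least 50%.
rows = con.execute("""
SELECT w.well_id, w.x, w.y, p.net_pay
FROM wells w
JOIN picks p ON p.well_id = w.well_id
JOIN interests i ON i.well_id = w.well_id
WHERE p.horizon = 'Viking' AND p.net_pay > 5.0
  AND i.operator IN ('OpA', 'OpB')
GROUP BY w.well_id
HAVING COUNT(DISTINCT i.operator) = 2 AND SUM(i.pct) >= 50.0
""").fetchall()
```

Only the joint-interest wells with sufficient pay come back, each with its map location attached.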
Let us digress for a moment into parallel-processing (eg: the
Connection Machine, a cubic array of 65 536 chips or less, June
1987 Scientific American). Simple processors (PC-on-a-chip) can be
arrayed in 3D configurations for complex real-time 3D modelling;
each chip (cell) processes data as representative of an area in
space, working simultaneously (in parallel) with others. In
parallel-processing, linear programming no longer works, as each
cell evolves as a function of neighbouring cells, not just in a
sequential fashion. From "add, then multiply the result, then
subtract it from ... ", one has to "add, wait for the (possibly
many) neighbours' results, multiply and feed that result to
neighbours, etc.": 3D processing rapidly evolves into a
data-communications concern, and network switching devices are
heavily used. Operating systems (OS) have been written for the sole
purpose of "traffic control"; as such traffic depends heavily on
hardware configurations, such an OS is site-specific. UNIX was
written to handle multiple devices, hardware networks and file
transfer protocols, but parallel processing consists rather of
"software networks"!
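The "wait for the neighbours' results" pattern can be sketched in a few lines: every cell updates simultaneously from its neighbours' previous values, so the whole array must be double-buffered rather than updated in place. This is a toy 1D smoothing step, not the Connection Machine's actual programming model:

```python
# Each cell's new value depends on its neighbours' OLD values, so the
# result is written to a fresh array (double-buffering), never in place.
def step(cells):
    n = len(cells)
    new = [0.0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else cells[i]
        right = cells[i + 1] if i < n - 1 else cells[i]
        new[i] = (left + cells[i] + right) / 3.0
    return new  # returned only after ALL neighbours have been read

state = [0.0, 0.0, 9.0, 0.0, 0.0]
for _ in range(2):
    state = step(state)
```

A single spike diffuses outward one cell per step; on real parallel hardware the "read the neighbours" line becomes the data-communications traffic the text describes.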
The Future
Multi-processing has been used in on-line queries (real-time
changes in networked, or distributed, relational databases) to
off-load tasks on separate processors as SQL binds these queries
together. Splitting up tasks works with homogeneous (dimensionless)
data that have no spatial constraints (just register addresses in
virtual space). Financials and statistics for example have only
virtual locations, ie. tables can be broken up, manipulated and
rejoined; queries can furthermore be optimised without affecting -
indeed ensuring and improving - the integrity of the database.
This is much harder to do with geographically (spatially in real-
world coordinates) constrained data, as splitting up and rejoining
volumetric data affects the integrity of the database ... Complex
rules govern the coexistence of data in real space, and subsurface
data certainly does not behave mathematically: If there is some
quantifiable logic, it has yet to be found! One item (eg: pressure,
temperature, composition, or some mathematical combination thereof)
can be modelled in massive 3D arrays, or complex tables (eg: well
headers, tops, DSTs and IPs) can be manipulated in 2D, but not in
3D. Consider an RDBMS-per-chip in parallel processors, and imagine
the spaghetti-links among tables: Assuming flawless data traffic
control (who has seen flawless database table joins anyway?), the
delays in processing and sorting tables might just bog any system
down. Conversely, if tables with an XYZ component are queried,
ensuring the continuity and preserving the contiguity of data (via
software, rather than hardware as above) is not obvious.
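One software route to preserving contiguity is to key each row by the spatial block it falls in, split the table along those keys (each partition could then go to its own processor), query the partitions independently, and rejoin the hits. The block size and column names below are hypothetical:

```python
BLOCK = 100.0  # hypothetical block size in map units

# A toy XYZ-keyed attribute table.
table = [
    {"x": 10.0,  "y": 20.0, "z": 1500.0, "porosity": 0.08},
    {"x": 15.0,  "y": 25.0, "z": 1502.0, "porosity": 0.14},
    {"x": 210.0, "y": 20.0, "z": 1490.0, "porosity": 0.12},
]

# Split: each row lands in the partition for its spatial block, so rows
# that are neighbours on the map stay neighbours in storage.
partitions = {}
for row in table:
    key = (int(row["x"] // BLOCK), int(row["y"] // BLOCK))
    partitions.setdefault(key, []).append(row)

# Query each partition independently (a porosity cutoff), then rejoin.
hits = []
for key in sorted(partitions):
    hits += [r for r in partitions[key] if r["porosity"] > 0.10]
```

Because the block key is derived from the coordinates, rejoining the per-partition results cannot scramble their spatial relationships.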
Raster-imaging techniques may be applied to spatial table
queries: Quad-treeing is, for example, a technique to split up a
planar array of points into its smallest collection of coherent
areas (likewise in 3D arrays, at non-negligible computational
cost). If a 3D database were subdivided into a collection of
spatially coherent tables, queries could be performed within
predefined spatial boundaries without violating spatial continuity
criteria. Tables can, moreover, be split into subsets that maximise
their coherence over a larger spatial area, similarly to
quad-treeing. The difficulty still lies in defining those areas,
ie: having enough room in the tables to store strings of XYZ
coordinates for points, polylines or polygons; considerable
forethought has to be put into it, and there are software and
hardware limitations in table widths and string lengths, as well as
ways to circumvent these.
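Quad-treeing itself is easy to sketch: recursively split a square grid until every quadrant holds a single value, i.e. is spatially coherent. The 4x4 grid below is a toy example; attribute tables could then be partitioned along the same leaves:

```python
# Recursively subdivide a square grid into uniform quadrants, returning
# (x, y, size, value) leaves - the "smallest collection of coherent areas".
def quadtree(grid, x=0, y=0, size=None):
    if size is None:
        size = len(grid)
    vals = {grid[y + j][x + i] for j in range(size) for i in range(size)}
    if len(vals) == 1 or size == 1:
        return [(x, y, size, vals.pop())]  # coherent: stop splitting
    h = size // 2
    leaves = []
    for dy in (0, h):
        for dx in (0, h):
            leaves += quadtree(grid, x + dx, y + dy, h)
    return leaves

grid = [[1, 1, 2, 2],
        [1, 1, 2, 2],
        [1, 1, 3, 3],
        [1, 1, 3, 3]]
leaves = quadtree(grid)
```

The same recursion in 3D (an octree) carries the non-negligible computational cost the text mentions, since every level inspects each cell in its cube.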
Once the spatially coherent subsets are defined, queries will
likely result in further subsets, which may furthermore be coherent
with other new subsets in contiguous, originally non-coherent
areas: Network traffic control (cf: parallel processing) will have
to be treated in software as well as in hardware; multi-processing
can be brought to bear on matching subsets among incoherent areas
into coherent subsets, provided data strings are long enough to
maintain positive identification! Once the new tables have been
created, they can be passed on to a visualisation device, using
XYZ keys to locate the data points and attached attributes.
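Rejoining such subsets can be sketched by carrying the XYZ key with every row: subsets from contiguous areas simply merge into one table keyed by position, which is what a visualisation device needs. The coordinates and porosity values below are made up:

```python
# Two spatially coherent query-result subsets, keyed by (x, y, z).
subset_a = {(0.0, 0.0, 1500.0): {"porosity": 0.12},
            (0.0, 1.0, 1510.0): {"porosity": 0.15}}
subset_b = {(1.0, 0.0, 1498.0): {"porosity": 0.10}}

# Rejoin: the XYZ key locates each data point, so merging subsets from
# contiguous areas cannot duplicate or displace a point.
merged = {}
for subset in (subset_a, subset_b):
    for xyz, attrs in subset.items():
        merged.setdefault(xyz, {}).update(attrs)

# Flatten for display: one row per point, coordinates first.
rows = sorted((x, y, z, a["porosity"]) for (x, y, z), a in merged.items())
```

The flattened rows are exactly the form a classical visualisation tool expects: locate by XYZ, colour or size by the attached attribute.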
Conclusion
Interactive processing may prove difficult, as the very
structure of the database, not just points, has to be "sliced and
diced" to produce the queries. Hardware is constantly getting
faster and cheaper, but software often lags behind as RDBMSs have
to be rethought, much like the OS in parallel-processing. This quick
tour has delved into some issues and exposed some new
directions. When we ask programmers to deliver dream machines
according to our specifications, we can perhaps better grasp what
is involved: Managing expectations is key in any new technology.
The payoff, true 3D database engines, will enhance the already
impressive array of tools to access and massage data more
efficiently and better manage our resources.