JSON is the new XML! It’s everywhere, from NoSQL databases to REST APIs. Let me share with you how Oracle’s Autonomous JSON Database (AJD) makes short work of handling JSON-resident information, especially when paired with robust functions and features of Oracle 19c and 21c.
JSON, A Splash of SODA, and a SQL Chaser: Real-World Use Cases for Autonomous JSON Database (AJD)
1. JSON, A Splash of SODA, and a SQL Chaser: Real-World Use Cases for Autonomous JSON Database (AJD)
Jim Czuprynski
@JimTheWhyGuy
Zero Defect Computing, Inc.
2. My Credentials
• 40 years of database-centric IT experience
• Oracle DBA since 2001
• Oracle 9i, 10g, 11g, 12c OCP and ADWC
• Oracle ACE Director since 2014
• ODTUG Database Committee Lead
• Editor of ODTUG TechCeleration
• Oracle-centric blog (Generally, It Depends)
• Regular speaker at Oracle OpenWorld, COLLABORATE,
KSCOPE, and international and regional OUGs
E-mail me at jczuprynski@zerodefectcomputing.com
Follow me on Twitter (@JimTheWhyGuy)
Connect with me on LinkedIn (Jim Czuprynski)
4. The Future of IT: Big Data and IoT
A sea change is well underway in many IT organizations:
1. Big Data is simply ubiquitous
2. The Internet of Things (IoT) is already here
3. Handling semi-structured and unstructured data (documents, images, and “metered” data) will vex DBAs for the foreseeable future
5. The World’s Gone JSON. Get Used To It
• Data storage format of choice for MongoDB, AWS, and other NoSQL databases
• Standard for interchange of data for APIs and microservices
• Used extensively for geolocation applications, e.g. GeoJSON for mapping, routing, and directions
• Preferred method for IoT-based streaming services like Oracle Streaming Service (OSS)
• Best of all, no need to procure a DBA’s permission and blessing to add and/or remove columns, carefully plan out a detailed data model, or worry about those pesky foreign keys
7. A Splash of SODA: Simple Oracle Document Access
We’ll use SQL Developer Web to create a new collection, show all collections, and then insert a new JSON document into that collection. Now let’s add some new documents to the collection. Note that the filter specification is itself a JSON document!
8. A Splash of SODA: Simple Oracle Document Access
We can retrieve information from within the collection using SODA GET methods … or we can just use good old SQL to report on the collection’s contents.
For details and examples of using SODA via Node.js, Python, Java, and other environments, see Chapter 5 of the Using Oracle Autonomous JSON Database documentation.
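The point that a SODA filter specification (a query-by-example, or QBE) is itself a JSON document can be illustrated without a database. This is a hypothetical sketch of QBE-style matching in plain Python, supporting only equality and the `$gt`/`$lt` operators; it is not the SODA API itself.

```python
import json

# A tiny matcher mimicking how a QBE filter document such as
# {"LastName": "Bakko", "ID": {"$gt": 1000}} selects documents.
def matches(doc, qbe):
    for field, cond in qbe.items():
        value = doc.get(field)
        if isinstance(cond, dict):        # operator clause, e.g. {"$gt": 1000}
            for op, operand in cond.items():
                if op == "$gt" and not (value is not None and value > operand):
                    return False
                if op == "$lt" and not (value is not None and value < operand):
                    return False
        elif value != cond:               # simple equality match
            return False
    return True

docs = [{"ID": 500001, "LastName": "Bakko"}, {"ID": 42, "LastName": "Uson"}]
qbe = json.loads('{"LastName": "Bakko", "ID": {"$gt": 1000}}')
print([d["ID"] for d in docs if matches(d, qbe)])  # [500001]
```

The filter is parsed with `json.loads` precisely to underline that it travels as an ordinary JSON document, just as it does through SODA GET methods.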
9. SODA Collections vs. RDBMS Tables
CREATE TABLE t_travelerinfo (
id NUMBER(10,0) NOT NULL
,first_name VARCHAR2(40) NOT NULL
,last_name VARCHAR2(40) NOT NULL
,nationality VARCHAR2(20) NOT NULL
,dob DATE NOT NULL
,passport# NUMBER(10,0) NOT NULL
,noflyflag CHAR(01) NOT NULL);
CREATE TABLE t_travelerbio (
id NUMBER(10,0) NOT NULL
,blood_type VARCHAR2(05) NOT NULL
,gender CHAR(01) NOT NULL
,preferred_pronouns VARCHAR2(20) NOT NULL);
CREATE TABLE t_flighthistory (
id NUMBER(10,0) NOT NULL
,traveltime TIMESTAMP NOT NULL
,airport_iata CHAR(03) NOT NULL);
Here’s a typical traditional RDBMS implementation for three aspects of traveler information, including their personal biographical fields as well as their flight history.
CREATE TABLE travelerinfo (
id NUMBER(10,0) NOT NULL
,created_on TIMESTAMP(6) NOT NULL
,last_modified TIMESTAMP(6) NOT NULL
,version VARCHAR2(255) NOT NULL
,json_document BLOB)
CHECK (json_document IS JSON
FORMAT OSON . . .)
LOB ("JSON_DOCUMENT")
STORE AS SECUREFILE (
TABLESPACE "DATA" . . .);
And here’s a SODA implementation: a single collection represented as a table with a single BLOB column of type OSON. The document’s OSON format enables several unique searching, filtering, and maintenance features as well.
Note the implicit versioning capabilities and unique UUID for the document itself – but that’s not the primary key for the data stored within the OSON document!
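The contrast above can be made concrete in a few lines. This sketch (field names mirror the deck's examples; it is an illustration, not Oracle code) shows the same traveler modeled as three normalized relational rows versus one self-contained JSON document, the shape a SODA collection stores in its single OSON column.

```python
import json

# The relational model splits one traveler across three tables, joined by id
relational_rows = {
    "t_travelerinfo":  {"id": 1001, "first_name": "Rolf", "last_name": "Bakko"},
    "t_travelerbio":   {"id": 1001, "blood_type": "O-", "gender": "M"},
    "t_flighthistory": [{"id": 1001, "airport_iata": "ZZV"}],
}

# The document model carries its sub-collections with it: no joins, no FKs
traveler_doc = {
    "ID": 1001, "FirstName": "Rolf", "LastName": "Bakko",
    "BioData": [{"BloodType": "O-", "Gender": "M"}],
    "FlightHistory": [{"Airport": "ZZV"}],
}

print(json.dumps(traveler_doc))
```

The trade-off is the one the slide implies: the document is self-describing and schema-flexible, while the relational version enforces structure up front.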
10. Building Sample JSON Documents via DataGenerator
1. Build a JSON document schema
2. Specify file generation parameters (to a local file system)
3. 500K rows generated with random values in < 2s
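For readers without the DataGenerator tool, here is a minimal stand-in sketch (names and value lists are invented for illustration) that produces random TRAVELERINFO-shaped documents as newline-delimited JSON, the layout the later COPY_COLLECTION step consumes.

```python
import json
import random

random.seed(7)  # deterministic output for demonstration purposes
FIRST = ["Rolf", "Janna", "Steven"]
LAST = ["Bakko", "Skillett", "Berliner"]
IATA = ["ORD", "LHR", "FRA", "SYD"]

def random_traveler(doc_id):
    """Build one sample document matching the deck's TRAVELERINFO schema."""
    return {
        "ID": doc_id,
        "FirstName": random.choice(FIRST),
        "LastName": random.choice(LAST),
        "FlightHistory": [{"Airport": random.choice(IATA)} for _ in range(2)],
    }

# One JSON document per line: newline-delimited JSON, ready for object storage
lines = "\n".join(json.dumps(random_traveler(i)) for i in range(1, 4))
print(lines)
```

Scaling the `range` up to 500,000 and writing `lines` to a file would approximate the generated load file described above.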
11. Loading OSON Documents Using DBMS_CLOUD.COPY_COLLECTION
SET SERVEROUTPUT ON
BEGIN
DBMS_CLOUD.COPY_COLLECTION(
collection_name => 'TRAVELERINFO'
,credential_name => 'DEF_CRED'
,file_uri_list =>
'https://objectstorage.us-ashburn-1.oraclecloud.com/n/zdcaudb/b/XTFILES/o/TRAVELERINFO.json'
,format => JSON_OBJECT('recorddelimiter' value '''\n''') );
END;
/
PL/SQL procedure successfully completed.
Note: The elapsed time to load 500K rows of this size was < 45s!
You’ll need to set up proper OCI Cloud credentials and then use them during this invocation.
SQL> SELECT * FROM COPY$13_LOG;
LOG file opened at 03/03/21 17:15:39
Bad File: COPY$14_212608.bad
Field Definitions for table COPY$H2E8ETI043DRWCTJRNJ0
Record format DELIMITED, delimited by
Data in file has same endianness as the platform
Rows with all null fields are accepted
Fields in Data Source:
JSON_DOCUMENT CHAR (1000000)
Terminated by "
LOG file opened at 03/03/21 17:15:39
Total Number of Files=1
Data File:
https://objectstorage.us-ashburn-1.oraclecloud.com/n/zdcaudb/b/XTFILES/o/TRAVELERINFO.json
Log File: COPY$14_8933.log
Every execution of DBMS_CLOUD.COPY_COLLECTION generates a separate COPY$nn_LOG and COPY$nn_BAD table that you can query to verify the results of loading.
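The load logic COPY_COLLECTION applies can be mimicked locally: split the input on the record delimiter, parse each chunk as a document, and divert unparseable chunks the way bad records land in the COPY$nn_BAD table. This is a hedged sketch of that split-and-validate pass, not the DBMS_CLOUD implementation.

```python
import json

# Newline-delimited input with one deliberately malformed record
raw = '{"ID": 1}\n{"ID": 2}\nnot-json\n{"ID": 3}'

loaded, bad = [], []
for record in raw.split("\n"):        # split on the record delimiter
    try:
        loaded.append(json.loads(record))
    except json.JSONDecodeError:
        bad.append(record)            # analogous to a row in COPY$nn_BAD
print(len(loaded), bad)  # 3 ['not-json']
```

Querying the COPY$nn_LOG and COPY$nn_BAD tables after a real load gives you exactly this kind of loaded-versus-rejected breakdown.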
12. Accessing JSON Document Data Within AJD Via SQL
SELECT JSON_SERIALIZE(TI.json_document)
FROM travelerinfo TI;
. . .
{"ID":56407,"FirstName":"Man","LastName":"Laroux","Nationality":"Hungary","DOB":"16-Apr-1955 00:00:00", ...
{"ID":56408,"FirstName":"Shasta","LastName":"Deyak","Nationality":"South Dakota","DOB":"27-May-1961 00: ...
{"ID":56409,"FirstName":"Steven","LastName":"Berliner","Nationality":"Belgium","DOB":"05-Dec-1948 00:00 ...
{"ID":56410,"FirstName":"Rosalyn","LastName":"Bhardwaj","Nationality":"Italy","DOB":"07-Nov-1966 00:00: ...
{"ID":56411,"FirstName":"Janna","LastName":"Skillett","Nationality":"Illinois","DOB":"27-Feb-1960 00:00 ...
{"ID":56412,"FirstName":"Freddie","LastName":"Drach","Nationality":"Oklahoma","DOB":"04-Mar-1956 00:00: ...
{"ID":56413,"FirstName":"Gertha","LastName":"Boelk","Nationality":"Brazil","DOB":"02-Mar-1949 00:00:00" ...
{"ID":56414,"FirstName":"Eloisa","LastName":"Risso","Nationality":"France","DOB":"20-Dec-1968 00:00:00" ...
{"ID":56415,"FirstName":"Cheyenne","LastName":"Pizzini","Nationality":"California (Northern)","DOB":"20 ...
{"ID":56416,"FirstName":"Maira","LastName":"Kaizer","Nationality":"Denmark","DOB":"06-Jun-1948 00:00:00 ...
. . .
JSON_SERIALIZE shows the entire contents of the document as stored within its container.
SELECT
TI.json_document.ID
,TI.json_document.LastName
,TI.json_document.BioData[*].BloodType AS BloodType
,TI.json_document.BioData[*].Gender AS Gender
,TI.json_document.BioData[*].PreferredPronouns AS Pronouns
FROM travelerinfo TI
WHERE TI.json_document.ID BETWEEN 1000 AND 1010;
Blood
ID Name Type Gender Pronouns
-------- -------------------- ----- ------- -----------
1000 Mehlig B+ F She/Her
1001 Andren B+ F She/Her
1002 Moeckel A- F She/Her
1003 Layne A- F She/Her
1004 Gaut A- U She/Her
1005 Else O+ F She/Her
1006 Hillenbrand A- F She/Her
1007 Martucci B+ F She/Her
1008 Tibwell A- F He/Him
1009 Luton A- F She/Her
1010 Arrieta O+ U She/Her
Delving into the BioData sub-collection
COL id FORMAT A08 HEADING "ID"
COL last FORMAT A15 HEADING "Name"
COL Airports FORMAT A15 HEADING "Airports"
COL FlightDate_0 FORMAT A20 HEADING "Flight Date #1"
COL FlightDate_1 FORMAT A20 HEADING "Flight Date #2"
COL FlightRecord_2 FORMAT A20 HEADING "Flight Record #3"
SELECT
TI.json_document.ID AS id
,TI.json_document.LastName AS last
,TI.json_document.FlightHistory.Airport AS Airports
,TI.json_document.FlightHistory[0].FlightDate AS FlightDate_0
,TI.json_document.FlightHistory[1].FlightDate AS FlightDate_1
,TI.json_document.FlightHistory[2] AS FlightRecord_2
FROM travelerinfo TI
WHERE TI.json_document.ID BETWEEN 3000 and 3010;
Looking through the contents of the FlightHistory sub-collection
ID Name Airports Flight Date #1 Flight Date #2 Flight Record #3
-------- --------------- --------------- -------------------- -------------------- --------------------
3000 Uson ["MSP","LHR"] 31-Mar-2019 17:00:11 01-Apr-2019 18:14:44
3001 Turchi ["IAH","FRA"] 30-Mar-2019 17:26:25 02-Apr-2019 00:00:29
3002 Behizadeh ["MCO","DXB"] 30-Mar-2019 15:41:25 01-Apr-2019 11:13:50
3003 Esmond ["LGR","SVO"] 30-Mar-2019 18:52:53 01-Apr-2019 14:01:49
3004 Mominee ["FLL","CGK"] 31-Mar-2019 13:19:15 02-Apr-2019 09:46:44
3005 Stipp ["SEA","LHR"] 31-Mar-2019 00:17:36 02-Apr-2019 23:52:27
3006 Stolley ["DTW","FCO"] 30-Mar-2019 14:20:55 02-Apr-2019 17:19:20
3007 Nobriga ["DTW","CIN"] 31-Mar-2019 00:23:22 02-Apr-2019 00:45:30
3008 Letson ["ORD","SYD"] 31-Mar-2019 20:34:53 01-Apr-2019 19:16:16
3009 Marthe ["PHX","AMS"] 31-Mar-2019 00:06:52 01-Apr-2019 21:11:55
3010 Dimitt ["BWI","PVG"] 30-Mar-2019 11:23:24 02-Apr-2019 11:15:05
Note that FlightHistory[2] is treated as a NULL result!
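That NULL-on-missing behavior is worth pausing on: referencing an array element that doesn't exist yields NULL rather than an error. A minimal Python analogue (a local illustration, not Oracle's dot-notation engine) makes the semantics explicit:

```python
# Safe array-index access: out-of-range returns None (NULL), never raises
def json_path_index(doc, field, i):
    arr = doc.get(field, [])
    return arr[i] if 0 <= i < len(arr) else None

doc = {"FlightHistory": [{"Airport": "MSP"}, {"Airport": "LHR"}]}
print(json_path_index(doc, "FlightHistory", 2))  # None
print(json_path_index(doc, "FlightHistory", 1))  # {'Airport': 'LHR'}
```

This is why the Flight Record #3 column above is simply empty for travelers with only two flights in their history.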
13. DBMS_SODA: Managing OSON Containers and Documents
-----
-- Remove an existing collection and its corresponding
-- table (if it exists)
-----
DECLARE
socodoc_sts NUMBER (1,0);
BEGIN
-----
-- Drop the collection (if it exists)
-----
socodoc_sts :=
DBMS_SODA.DROP_COLLECTION(
collection_name => 'TRAVELERINFO'
);
END;
/
DROP TABLE json.travelerinfo PURGE;
Here’s how to remove an OSON container in a schema (as well as its underlying table).
-----
-- Create a collection named TRAVELERINFO, which will create a
-- normal DBMS table named the same as the collection that +houses+
-- that collection, plus any corresponding metadata
-----
DECLARE
soco_coll SODA_COLLECTION_T;
BEGIN
soco_coll :=
DBMS_SODA.CREATE_COLLECTION(
collection_name => 'TRAVELERINFO'
,metadata => NULL
,create_mode => DBMS_SODA.CREATE_MODE_DDL
);
END;
/
And here’s how to create a new container using PL/SQL.
14. Using Document Methods to Load New Data
DECLARE
soco_coll SODA_COLLECTION_T;
soco_doc SODA_DOCUMENT_T;
rtr_socodoc SODA_DOCUMENT_T;
upd_socodoc SODA_DOCUMENT_T;
socodoc_key VARCHAR2(100);
socodoc_sts NUMBER(1,0);
SQLERRNUM INTEGER := 0;
SQLERRMSG VARCHAR2(255);
BEGIN
-- Open an +existing+ collection
soco_coll :=
DBMS_SODA.OPEN_COLLECTION(
collection_name => 'TRAVELERINFO'
);
. . .
Re-open the OSON collection just created.
. . .
-----
-- Create and insert a new JSON document into the collection
-----
soco_doc :=
SODA_DOCUMENT_T(
b_content =>
UTL_RAW.CAST_TO_RAW(
'{"ID" : 500001, "FirstName" : "Rolf", "LastName" : "Bakko", . . .
"BioData" : [{"BloodType" : "O-", "Gender" : "M",
"PreferredPronouns" : "He/Him"}],
"FlightHistory" : [
{"FlightDate" : "31-Mar-2019 21:30:00","Airport" : "ZZV"},
{"FlightDate" : "05-Apr-2019 14:28:13","Airport" : "ZAM"}
]}'));
socodoc_sts := soco_coll.INSERT_ONE(document => soco_doc);
-- Commit these INSERTs
COMMIT;
. . .
Insert a document into the collection via the INSERT_ONE method.
. . .
-----
-- Insert the document, but then retain it within a JSON document datatype
-- for additional processing via INSERT_ONE_AND_GET
-----
soco_doc :=
SODA_DOCUMENT_T(
b_content =>
UTL_RAW.CAST_TO_RAW('{"ID" : 500002, . . .}]}'));
upd_socodoc :=
soco_coll.INSERT_ONE_AND_GET(
document => soco_doc
);
-----
-- Using the inserted JSON document, find its auto-generated key via GET_KEY
-----
socodoc_key := upd_socodoc.GET_KEY;
DBMS_OUTPUT.PUT_LINE('Auto-generated key is ' || socodoc_key);
-- Commit the insert
COMMIT;
Insert another document, but this time, retain its data and metadata in a different document.
. . .
-----
-- Using the key we just found for the JSON document, retrieve it from its
-- collection via the FIND_ONE function, and then display several of its
-- automatically-generated attributes
-----
rtr_socodoc := soco_coll.FIND_ONE(socodoc_key);
DBMS_OUTPUT.PUT_LINE('Document components:');
DBMS_OUTPUT.PUT_LINE(' Key: ' || rtr_socodoc.GET_KEY);
DBMS_OUTPUT.PUT_LINE(' Content: ' || UTL_RAW.CAST_TO_VARCHAR2(rtr_socodoc.GET_BLOB));
DBMS_OUTPUT.PUT_LINE(' Creation timestamp: ' || rtr_socodoc.GET_CREATED_ON);
DBMS_OUTPUT.PUT_LINE(' Last-modified timestamp: ' || rtr_socodoc.GET_LAST_MODIFIED);
DBMS_OUTPUT.PUT_LINE(' Version: ' || rtr_socodoc.GET_VERSION);
EXCEPTION
WHEN OTHERS THEN
SQLERRNUM := SQLCODE;
SQLERRMSG := SQLERRM;
DBMS_OUTPUT.PUT_LINE('Unexpected error : ' || SQLERRNUM || ' - ' || SQLERRMSG);
END;
/
Information about the newly-inserted document is accessible through calls to the document’s inherent methods.
Document components:
Key: 4CB36DBB505A4FE7BFD32FA80F2EA899
Content: {"ID":500001,"FirstName":"Rolf","LastName":"Bakko", . . .
Creation timestamp: 2021-03-04T20:08:05.723802Z
Last-modified timestamp: 2021-03-04T20:08:05.723802Z
Version: 28665AFC00224F15BF5B1C541569788A
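The components SODA maintains per document (auto-generated key, version, creation and last-modified timestamps) can be modeled in a few lines. This is a hypothetical sketch of that shape, not `SODA_DOCUMENT_T` itself; the content-hash versioning scheme here is an assumption chosen for illustration.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

class Document:
    """Toy model of the metadata SODA generates alongside document content."""
    def __init__(self, content):
        self.key = uuid.uuid4().hex.upper()              # auto-generated key
        self.content = content
        now = datetime.now(timezone.utc).isoformat()
        self.created_on = now
        self.last_modified = now                         # same as created_on at insert
        # a content-derived version string, similar in spirit to SODA's versioning
        self.version = hashlib.md5(
            json.dumps(content, sort_keys=True).encode()).hexdigest().upper()

doc = Document({"ID": 500002, "LastName": "Bakko"})
print(doc.key, doc.version, doc.created_on)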
15. Adding a Data Guide To Improve Full-Text Searches
-- Add a Search index to an existing collection
DROP INDEX travelerinfo_sdx FORCE;
CREATE SEARCH INDEX travelerinfo_sdx
ON travelerinfo (json_document)
FOR JSON;
Creating a special search index for the collection.
-- Now that a Search Index has been created for the existing collection,
-- generate a Data Guide (DG) for that collection
DECLARE
soco_coll SODA_COLLECTION_T;
dataguide CLOB;
BEGIN
soco_coll :=
DBMS_SODA.OPEN_COLLECTION(
collection_name => 'TRAVELERINFO'
);
-- Get the data guide for the collection.
dataguide := soco_coll.GET_DATA_GUIDE;
DBMS_OUTPUT.PUT_LINE(JSON_QUERY(dataguide, '$' pretty));
-- Important: Free the temporary LOB.
IF DBMS_LOB.ISTEMPORARY(dataguide) = 1
THEN
DBMS_LOB.FREETEMPORARY(dataguide);
END IF;
END;
/
Creating a Data Guide based on the Search Index.
{
"type" : "object",
"o:length" : 1,
"properties" :
{
"ID" :
{
"type" : "number",
"o:length" : 8,
"o:preferred_column_name" : "JSON_DOCUMENT$ID"
},
"DOB" :
{
"type" : "string",
"o:length" : 32,
"o:preferred_column_name" : "JSON_DOCUMENT$DOB"
},
. . .
}
The resulting Data Guide makes it simpler for Oracle to determine how best to access the collection’s data during queries, especially during full-text searches.
16. Combining JSON and non-JSON information via Oracle SQL
CREATE TABLE t_airports(
ap_id CHAR(03) NOT NULL
,ap_name VARCHAR2(40) NOT NULL
,ap_city VARCHAR2(40) NOT NULL
,ap_country_abbr VARCHAR2(03) NOT NULL
,ap_lat NUMBER(10,6) NOT NULL
,ap_lng NUMBER(10,6) NOT NULL
,ap_geolocation SDO_GEOMETRY
);
ALTER TABLE t_airports
ADD CONSTRAINT airports_pk
PRIMARY KEY (ap_id)
USING INDEX (
CREATE UNIQUE INDEX airports_pk_idx
ON t_airports (ap_id)
TABLESPACE data
);
{ Load sample data via SQL Developer import utility }
. . .
Let’s create a normal RDBMS table containing standard IATA airport codes
. . .
DELETE FROM user_sdo_geom_metadata
WHERE table_name = 'T_AIRPORTS';
INSERT INTO user_sdo_geom_metadata
VALUES (
'T_AIRPORTS'
,'AP_GEOLOCATION'
,SDO_DIM_ARRAY(
SDO_DIM_ELEMENT('Longitude', -180, 180, 10)
,SDO_DIM_ELEMENT('Latitude', -90, 90, 10)
)
,8307);
COMMIT;
. . .
Once it’s populated, we’ll prepare to translate its Longitude and Latitude values into Oracle GIS coordinates
. . .
DROP INDEX airports_spidx FORCE;
UPDATE t_airports
SET ap_geolocation =
SDO_GEOMETRY(
2001
,8307
,SDO_POINT_TYPE(ap_lng, ap_lat, NULL)
,NULL
,NULL
);
COMMIT;
CREATE INDEX airports_spidx
ON t_airports(ap_geolocation)
INDEXTYPE IS MDSYS.SPATIAL_INDEX_V2;
Finally, we’ll build the corresponding GIS coordinates and add a spatial index for that column’s values.
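Under the hood, the distance between two point geometries on the SRID 8307 (WGS 84) ellipsoid is essentially a great-circle computation. This spherical haversine sketch approximates what SDO_GEOM.SDO_DISTANCE returns for airport pairs; the coordinates below are approximate airport locations, included for illustration.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_miles(lat1, lng1, lat2, lng2):
    """Great-circle distance in miles between two lat/lng points."""
    R = 3958.8  # mean Earth radius in miles
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lng2 - lng1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))

ORD = (41.9742, -87.9073)   # Chicago O'Hare
SYD = (-33.9399, 151.1753)  # Sydney
print(round(haversine_miles(*ORD, *SYD), 1))  # roughly 9,200+ miles
```

SDO_DISTANCE uses the full ellipsoidal model (hence its tolerance parameter), so its results differ slightly from this spherical approximation, but the two agree to within a fraction of a percent.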
17. Putting It All Together: Calculating Flight Times and Distances
SELECT
TI.json_document.FlightHistory[0].Airport AS Depart_IATA
,AP1.ap_name AS Departure_Airport
,TI.json_document.FlightHistory[0].FlightDate AS Depart_Time
,TI.json_document.FlightHistory[1].Airport AS Arrive_IATA
,AP2.ap_name AS Arrival_Airport
,TI.json_document.FlightHistory[1].FlightDate AS Arrive_Time
,ROUND(((TO_DATE(TI.json_document.FlightHistory[1].FlightDate,'dd-mon-yyyy hh24:mi:ss') -
TO_DATE(TI.json_document.FlightHistory[0].FlightDate,'dd-mon-yyyy hh24:mi:ss')) * 24),1)
AS Flight_Hours
,ROUND(SDO_GEOM.SDO_DISTANCE(AP1.ap_geolocation, AP2.ap_geolocation, 100, 'unit=MILE'),1)
AS Miles_Flown
FROM
travelerinfo TI
,t_airports AP1
,t_airports AP2
WHERE TI.json_document.FlightHistory[0].Airport = AP1.ap_id
AND TI.json_document.FlightHistory[1].Airport = AP2.ap_id
ORDER BY
TI.json_document.FlightHistory[0].Airport
,TI.json_document.FlightHistory[1].Airport
;
This uses the SDO_DISTANCE function to calculate the relative distance between the departure and arrival airports.
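The Flight_Hours expression in that query is a date subtraction scaled to hours. The same arithmetic can be checked locally: parse the document's FlightDate strings (format 'dd-mon-yyyy hh24:mi:ss', which maps to `%d-%b-%Y %H:%M:%S` in Python) and take the difference. A small sketch, using a flight from the sample output above:

```python
from datetime import datetime

FMT = "%d-%b-%Y %H:%M:%S"  # matches the document's 'dd-mon-yyyy hh24:mi:ss'

def flight_hours(depart, arrive):
    """Elapsed hours between two FlightDate strings, rounded to one decimal."""
    delta = datetime.strptime(arrive, FMT) - datetime.strptime(depart, FMT)
    return round(delta.total_seconds() / 3600, 1)

# Traveler 3000 (Uson), MSP -> LHR from the result set above
print(flight_hours("31-Mar-2019 17:00:11", "01-Apr-2019 18:14:44"))  # 25.2
```

In the SQL version, subtracting two DATE values yields days, so the query multiplies by 24 to land on the same hours figure.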
18. Search Indexes: Improving Performance Against OSON Documents
CREATE OR REPLACE VIEW v_FlightTimesAndDistances AS
SELECT
TI.json_document.LastName AS PassengerName
,TI.json_document.FlightHistory[0].Airport AS Depart_IATA
,AP1.ap_name AS Departure_Airport
,TI.json_document.FlightHistory[0].FlightDate AS Depart_Time
,TI.json_document.FlightHistory[1].Airport AS Arrive_IATA
,AP2.ap_name AS Arrival_Airport
,TI.json_document.FlightHistory[1].FlightDate AS Arrive_Time
,ROUND(((TO_DATE(TI.json_document.FlightHistory[1].FlightDate,'dd-mon-yyyy hh24:mi:ss') -
TO_DATE(TI.json_document.FlightHistory[0].FlightDate,'dd-mon-yyyy hh24:mi:ss')) * 24),1)
AS Flight_Hours
,ROUND(SDO_GEOM.SDO_DISTANCE(AP1.ap_geolocation, AP2.ap_geolocation, 100, 'unit=MILE'),1)
AS Miles_Flown
FROM
travelerinfo TI
,t_airports AP1
,t_airports AP2
WHERE TI.json_document.FlightHistory[0].Airport = AP1.ap_id
AND TI.json_document.FlightHistory[1].Airport = AP2.ap_id
ORDER BY
TI.json_document.FlightHistory[0].Airport
,TI.json_document.FlightHistory[1].Airport
;
Let’s convert that same query into a VIEW for easier reporting. Note the addition of the passenger’s last name.
SELECT *
FROM v_FlightTimesAndDistances
WHERE Depart_IATA = 'EWR'
AND Arrive_IATA IN ('FRA', 'CDG')
AND PassengerName = 'Horger';
Here’s a pretty typical query: find all flights that a passenger with a last name of Horger was on, leaving from Newark and landing in either Frankfurt or Paris.
19. AJD vs. ATP: So What’s the Difference?
Features / Commands                                                      ATP         AJD
Maximum storage space for JSON data                                      Unlimited1  Unlimited1
Maximum storage space for non-JSON data                                  Unlimited1  20GB
Uses Simple Oracle Document Access (SODA) to access JSON data
  and provide REST APIs                                                  Yes         Yes
SODA collections permitted to contain non-JSON data                      Yes         No2
Storage format for JSON data in SODA collections                         JSON        OSON
For complete details of all restricted commands and database features, see Appendix A: Autonomous JSON Database for Experienced Oracle Database Users in the Using Oracle Autonomous JSON Database documentation.
1 Subject to limits of allocated storage for the ADB instance
2 See this blog post for a great description of how the new OSON format for JSON improves query performance
20. Promoting an AJD instance to ATP (1)
1. Select a target AJD instance
2. Perform a Full Clone as a backup
3. Cloning is initiated and completes in a few minutes
21. Promoting an AJD instance to ATP (2)
1. Initiate conversion of the newly-cloned instance to ATP
2. Conversion begins …
3. … and completes in just a few minutes
22. AJD vs. MongoDB: A Comparison
Feature                             Autonomous JSON Database          MongoDB Atlas
Maximum document size               32 MB                             16 MB
Maximum nested depth of documents   1024 levels                       100 levels
Indexes per collection              Unlimited                         64
Compound index fields               Unlimited                         32
Full document index                 JSON Search Index                 Not available
Server-side functions               Functions, procedures, triggers   Not recommended*
Multi-document transactions         Always ACID                      ACID only upon request via explicit API calls
Transaction duration                Unlimited                         60-second default
Transaction size                    Unlimited                         Maximum of 1000 documents*
Aggregation data size               Unlimited                         100 MB RAM + explicit allowDiskUse param
Serverless auto-scaling             Yes                               No
SQL access over JSON documents      Yes                               No
Comprehensive security              Yes                               No
Price                               $2.74 / hour                      $3.95 / hour
Source: See Beda Hammerschmidt’s blog post for complete details and explanation
23. AJD: References and Further Reading
Getting Started With Autonomous JSON Database:
https://www.oracle.com/autonomous-database/autonomous-json-database/get-started/
Autonomous JSON Database for Experienced Oracle Database Users:
https://docs.oracle.com/en/cloud/paas/autonomous-json-database/ajdug/experienced-database-users.html
Beda Hammerschmidt’s Blog Entry Announcing AJD:
https://blogs.oracle.com/jsondb/autonomous-json-database
Autonomous JSON Database under the covers - OSON format:
https://blogs.oracle.com/jsondb/osonformat
http://www.vldb.org/pvldb/vol13/p3059-liu.pdf
24. SODA: Documentation and Examples
Using SODA:
https://docs.oracle.com/en/database/oracle/simple-oracle-document-access/adsdi/overview-soda.html
Oracle Database Actions with SQL Over SODA:
https://docs.oracle.com/en/cloud/paas/autonomous-json-database/ajdug/use-oracle-database-actions-sql-json-collections.html
SODA For PL/SQL Developer’s Guide:
https://docs.oracle.com/en/database/oracle/simple-oracle-document-access/plsql/19/adsdp/index.html
SODA Data Types and Methods:
https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/SODA_TYPES.html
SODA Index Specifications and Advantages:
https://docs.oracle.com/en/database/oracle/simple-oracle-document-access/adsdi/soda-index-specifications-reference.html
Editor’s Notes
A document collection contains documents. Collections are persisted in an Oracle Database schema (also known as a database user). In some SODA implementations a database schema is referred to as a SODA database.
SODA is designed primarily for working with JSON documents, but a document can be of any Multipurpose Internet Mail Extensions (MIME) type.
In addition to its content, a document has other document components, including a unique identifier, called its key, a version, a media type (type of content), and the date and time that it was created and last modified. The key is typically assigned by SODA when a document is created, but client-assigned keys can also be used. Besides the content and key (if client-assigned), you can set the media type of a document. The other components are generated and maintained by SODA. All components other than content and key are optional.