
Azure Saturday 2019: Time Series Analytics with Azure ADX



  1. 1. #azuresatpn Azure Saturday 2019 Azure ADX Time Series Analytics with Azure ADX
  2. 2. #azuresatpn Questions What about TIME SERIES DATABASES? When should I use one? Which are the possible market choices? OpenTSDB? Kairos over Scylla/Cassandra? Influx? Why do I have to learn yet another DB!? Why not SQL? Why not COSMOS?
  3. 3. #azuresatpn 1. Intro 2. Service 3. Trust 4. Basics 5. Techniques 6. Dive into Scalar Functions 7. Real Use Cases (in IIoT) Multi-temperature data processing paths
  4. 4. #azuresatpn <INTRO/>
  5. 5. #azuresatpn Multi-temperature data processing paths: Hot (in-mem cube, stream analytics, …): seconds freshness, days retention, in-mem aggregated data, pre-defined standing queries, split-second query performance, data viewing. Warm (column store, indexing, …): minutes freshness, months retention, raw data, ad-hoc queries, seconds-to-minutes query performance, data exploration. Cold (distributed file system, map reduce, …): hours freshness, years retention, raw data, programmatic batch processing, minutes-to-hours query performance, data manipulation.
  6. 6. #azuresatpn What is Azure Data Explorer Any append-only stream of records Relational query model: filter, aggregate, join, calculated columns, … Fully managed Rapid iterations to explore the data High volume High velocity High variance (structured, semi-structured, free-text) PaaS, Vanilla, Database Purposely built
  7. 7. #azuresatpn Fully managed big data analytics service • Fully managed for efficiency Focus on insights, not the infrastructure, for fast time to value • No infrastructure to manage; provision the service, choose the SKU for your workload, and create a database. • Optimized for streaming data Get near-instant insights from fast-flowing data • Scale linearly up to 200 MB per second per node with highly performant, low-latency ingestion. • Designed for data exploration • Run ad-hoc queries using the intuitive query language • Returns results from 1 billion records in under 1 second without modifying the data or metadata
  8. 8. #azuresatpn <SERVICE/>
  9. 9. #azuresatpn When is it useful? 1. Analyze telemetry data 2. Retrieve trends/series from clustered data 3. Run regressions over Big Data 4. Summarize and export ordered streams LAB 01 From IoT Hub to ADX https://docs.microsoft.com/it-it/azure/data-explorer/ingest-data-iot-hub
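     A minimal sketch of what LAB 01 sets up on the ADX side: the target table plus a JSON ingestion mapping that the IoT Hub data connection can reference (table, column and mapping names here are illustrative, not taken from the lab):
     .create table TelemetryRaw (Timestamp: datetime, DeviceId: string, Temperature: real, Humidity: real)
     .create table TelemetryRaw ingestion json mapping 'TelemetryMapping' '[{"column":"Timestamp","path":"$.timestamp","datatype":"datetime"},{"column":"DeviceId","path":"$.deviceId","datatype":"string"},{"column":"Temperature","path":"$.temperature","datatype":"real"},{"column":"Humidity","path":"$.humidity","datatype":"real"}]'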
  10. 10. #azuresatpn Azure Data Explorer Architecture SPARK ADF Apps => API Logstash plg Kafka sync IotHub EventHub EventGrid Data Management Engine SSD Blob / ADLS STREAM BATCH Ingested Data ODBC PowerBI ADX UI MS Flow Logic Apps Notebooks Grafana Spark
  11. 11. #azuresatpn How about the ADX Story? Telemetry Analytics for internal Analytics Data Platform for products AI OMS ASC Defender IOT Interactive Analytics Big Data Platform 2015 - 2016 Starting with 1st party validation Building modern analytics Vision of analytics platform for MSFT 2019 Analytics engine for 3rd party offers Unified platform across OMS/AI Expanded scenarios for IOT timeseries Bridged across client/server security 2017 GA - February 2019
  12. 12. #azuresatpn Available SKUs (attribute: D SKU vs L SKU) Small SKUs: minimal size is D11 with two cores vs minimal size L4 with four cores. Availability: available in all regions (the DS+PS version has more limited availability) vs available in a few regions. Cost per GB cache per core: high with the D SKU, low with the DS+PS version vs lowest with the Pay-As-You-Go option. Reserved Instances (RI) pricing: high discount (over 55 percent for a three-year commitment) vs lower discount (20 percent for a three-year commitment). • D v2: the D SKU is compute-optimized (optional Premium Storage disk) • LS: the L SKU is storage-optimized (greater SSD size than the D SKU equivalent) D1-5 v2 instances are based on either the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor or the 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell) processor and can achieve 3.1 GHz with Intel Turbo Boost Technology 2.0. Ls-series instances are storage-optimized virtual machines for low-latency workloads such as NoSQL databases (e.g. Cassandra, MongoDB and Redis).
  13. 13. #azuresatpn Azure Data Explorer SLA • SLA: at least 99.9% availability (last updated: Feb 2019) Maximum Available Minutes: the total number of minutes for a given Cluster deployed by Customer in a Microsoft Azure subscription during a billing month. Downtime: the total number of minutes within Maximum Available Minutes during which a Cluster is unavailable. Monthly Uptime Percentage for Azure Data Explorer is calculated as Maximum Available Minutes less Downtime, divided by Maximum Available Minutes: Monthly Uptime % = (Maximum Available Minutes - Downtime) / Maximum Available Minutes x 100. Allowed downtime at 99.9%: Daily 01m 26.4s, Weekly 10m 04.8s, Monthly 43m 49.7s
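     The allowed-downtime figures follow directly from the formula; a quick sanity check in KQL (assuming an average month of 30.44 days):
     print MaxAvailableMinutes = 30.44 * 24 * 60          // ≈ 43,834 minutes in an average month
     | extend AllowedDowntimeMinutes = MaxAvailableMinutes * (1 - 0.999)   // ≈ 43.8 minutes ≈ 43m 50s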
  14. 14. #azuresatpn Pricing • Based on VM size, storage, and network • Not based on database counts https://dataexplorer.azure.com/AzureDataExplorerCostEstimator.html Think of ADX as an ANALYSIS TOOL in a multi-tenant environment if you want to pay little money; think of ADX as an INGESTION + RESILIENCY TOOL, to break through your traditional «Live DWH», if you can afford a space shuttle
  15. 15. #azuresatpn <TRUST/>
  16. 16. #azuresatpn First questions about ADX • Are we sure it is a mature service? • Which are the correct use cases where it is really useful? • Which are the OSS alternatives I should compare it with?
  17. 17. #azuresatpn Typical use cases 1. You need a Telemetry Analytics Platform, in order to retrieve aggregations or statistical calculations on historical series («As an IT Manager I want a platform to load logs from various file types, in order to analyze them and graphically pinpoint the problem over time») 2. You want to offer multi-tenant SaaS solutions («As a Product Lead Engineer I want to manage the backend of my multi-tenant SaaS solution using a unique, fat, huge backend service») 3. You need, within an Industrial IoT solution development, a common backend to handle process variables and run correlation analysis using continuous stream queries («As a Quality Manager I need a prebuilt backend solution to dynamically configure time-based queries on data in order to find correlations between process variables»)
  18. 18. #azuresatpn Why ADX is Unique Simplified costs • VM costs • ADX service add-on cost Many prebuilt inputs • ADF • Spark • Logstash • Kafka • IotHub • EventHub Many prebuilt outputs • TDS/SQL • Power BI • ODBC Connector • Spark • Jupyter • Grafana
  19. 19. #azuresatpn Azure services with ADX usage Azure Monitor • Log Analytics • Application Insights Security Products • Windows Defender • Azure Security Center • Azure Sentinel IoT • Time Series Insights • Azure IoT Central
  20. 20. #azuresatpn ADX vs Elasticsearch From db-engines.com: Azure Data Explorer, a fully managed big data interactive analytics platform; Elasticsearch, a distributed, RESTful modern search and analytics engine
  21. 21. #azuresatpn <BASICS/>
  22. 22. #azuresatpn Create a Cluster ADX follows the standard creation process • Azure CLI • PowerShell • C# • Python • ARM Login: az login Select subscription: az account set --subscription MyAzureSub Cluster creation: az kusto cluster create --name azureclitest --sku D11_v2 --resource-group testrg Database creation: az kusto database create --cluster-name azureclitest --name clidatabase --resource-group testrg --soft-delete-period P365D --hot-cache-period P31D HOT-CACHE-PERIOD: amount of time that data should be kept in cache. Duration in ISO 8601 format (for example, 100 days would be P100D). SOFT-DELETE-PERIOD: amount of time that data should be kept so it is available to query. Duration in ISO 8601 format (for example, 100 days would be P100D)
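     Once the cluster and database exist, you can sanity-check them from the Web UI or Kusto.Explorer; a small sketch, reusing the database name from the CLI example above:
     .show databases
     .show database clidatabase policy retention
     .show database clidatabase policy caching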
  23. 23. #azuresatpn How to set and use ADX? Create a database Use Database to link Ingestion Sources [Optional] Choose a DataConnection EventHub | Blob Storage | IotHub
  24. 24. #azuresatpn How to script with Visual Studio Code • Use Log Analytics or KUSTO/KQL extensions ( .csl | .kusto | .kql) • Open VSC, create a file, save it and then edit • [Optional] Build a web application using MONACO IDE, then share kusto code with friends https://microsoft.github.io/monaco-editor/index.html
  25. 25. #azuresatpn How about the Tools? 3.VISUALIZE • Azure Notebooks (preview) • Power BI • Grafana 2.QUERY • Kusto.Explorer • Web UI 4.ORCHESTRATE • Microsoft Flow • Microsoft Logic App 1.LOAD • LightIngest • Azure Data Factory Load Query Visualize Orchestrate BI People IT People ML People
  26. 26. #azuresatpn What is LightIngest • command-line utility for ad-hoc data ingestion into Kusto • pull source data from a local folder • pull source data from an Azure Blob Storage container • useful to ingest quickly and play with ADX [Ingest JSON data from blobs] LightIngest "https://adxclu001.kusto.windows.net;Federated=true" -database:db001 -table:LAB -sourcePath:"https://ACCOUNT_NAME.blob.core.windows.net/CONTAINER_NAME?SAS_TOKEN" -prefix:MyDir1/MySubDir2 -format:json -mappingRef:DefaultJsonMapping -pattern:*.json -limit:100 [Ingest CSV data with headers from local files] LightIngest "https://adxclu001.kusto.windows.net;Federated=true" -database:MyDb -table:MyTable -sourcePath:"D:\MyFolder\Data" -format:csv -ignoreFirstRecord:true -mappingPath:"D:\MyFolder\CsvMapping.txt" -pattern:*.csv.gz -limit:100 LAB 0X LightIngest https://docs.microsoft.com/en-us/azure/kusto/tools/lightingest
  27. 27. #azuresatpn <TECHNIQUES/>
  28. 28. #azuresatpn Ingestion capabilities Event Grid (using Blob as trigger) Ingest Azure Blobs into Azure Data Explorer Event Hub pipeline Ingest data from Event Hub into Azure Data Explorer Logstash plugin Ingest data from Logstash to Azure Data Explorer Kafka connector Ingest data from Kafka into Azure Data Explorer Azure Data Factory (ADF) Copy data from Azure Data Factory to Azure Data Explorer Kusto offers client SDKs that can be used to ingest and query data with: • Python SDK • .NET SDK • Java SDK • Node SDK • REST API • Not only Azure endpoints • As an ELK replacement, offers a Logstash plugin • As an OSS LAMBDA replacement, offers a Kafka connector
  29. 29. #azuresatpn Ingestion Techniques LAB 02 Queued Ingestion https://docs.microsoft.com/en-us/azure/kusto/api/netfx/kusto-ingest-queued-ingest-sample For high-volume, reliable, and cheap data ingestion: batch ingestion (provided by the SDK); the client uploads the data to Azure Blob storage (designated by the Azure Data Explorer data management service) and posts a notification to an Azure Queue. Batch ingestion is the recommended technique. Most appropriate for exploration and prototyping: inline ingestion (provided by query tools). Inline ingestion: a control command (.ingest inline) containing in-band data, intended for ad hoc testing purposes. Ingest from query: control commands (.set, .set-or-append, .set-or-replace) that point to query results, used for generating reports or small temporary tables. Ingest from storage: a control command (.ingest into) with data stored externally (for example, Azure Blob Storage), which allows efficient bulk ingestion of data. LAB 03 Inline Ingestion https://docs.microsoft.com/it-it/azure/kusto/management/data-ingestion/ingest-inline
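     A minimal sketch of the inline and ingest-from-query variants mentioned above (the Purchases table and its columns are hypothetical); inline ingestion is for ad hoc tests only:
     // ad hoc test: push three CSV rows straight into an existing Purchases table (data rows start at column 0)
     .ingest inline into table Purchases <|
Shoes,1000
Wide Shoes,50
"Coats, black",20
     // ingest from query: materialize a query result into another table (created if it does not exist)
     .set-or-append PurchasesByItem <| Purchases | summarize TotalQuantity = sum(Quantity) by Item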
  30. 30. #azuresatpn Supported data formats For all ingestion methods other than ingest from query, format the data so that Azure Data Explorer can parse it. The supported data formats are: • CSV, TSV, TSVE, PSV, SCSV, SOH • JSON (line-separated, multi-line), Avro • ZIP and GZIP Schema mapping helps bind source data fields to destination table columns. • CSV mapping (optional) works with all ordinal-based formats. It can be passed as an ingest command parameter or pre-created on the table and referenced from the ingest command parameter. • JSON mapping (mandatory) and Avro mapping (mandatory) can be passed as an ingest command parameter. They can also be pre-created on the table and referenced from the ingest command parameter. LAB 04 Mapping example https://docs.microsoft.com/it-it/azure/kusto/management/data-ingestion/ingest-inline
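     For example, a CSV mapping pre-created on the table and then referenced at ingest time might look like this (table, mapping, account and SAS token names are placeholders; on newer clusters the property is called ingestionMappingReference):
     .create table Trips ingestion csv mapping 'TripsCsvMapping' '[{"Name":"VendorId","DataType":"string","Ordinal":0},{"Name":"PickupTime","DataType":"datetime","Ordinal":1},{"Name":"FareAmount","DataType":"real","Ordinal":2}]'
     .ingest into table Trips (h'https://ACCOUNT_NAME.blob.core.windows.net/CONTAINER_NAME/trips.csv?SAS_TOKEN') with (format='csv', ignoreFirstRecord=true, csvMappingReference='TripsCsvMapping')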
  31. 31. #azuresatpn Use ADX as an ODBC data source 1. Download the SQL Server 2017 ODBC driver: https://www.microsoft.com/en-us/download/details.aspx?id=56567 2. Configure the ODBC source (as a normal SQL Server ODBC DSN) Then you can use your preferred tool: Power BI Desktop, Qlik Sense Desktop, Sisense, etc.
  32. 32. #azuresatpn Notebooks + ADX = KQL Magic KQL magic: https://github.com/microsoft/jupyter-Kqlmagic • extends the capabilities of the Python kernel in Jupyter • can run Kusto language queries natively • combine Python and Kusto query language LAB 05 Notebook example https://notebooks.azure.com/riccardo-zamana/projects/azuresaturday2019
  33. 33. #azuresatpn <DIVE-INTO-ADX/>
  34. 34. #azuresatpn Kusto for SQL Users • Perform SQL SELECT (no DDL, only SELECT) • Use KQL (Kusto Query Language) • Supports translating T-SQL queries to Kusto query language -- explain select top(10) * from StormEvents order by DamageProperty desc StormEvents | sort by DamageProperty desc nulls first | take 10 LAB 05 SQL to KQL example https://docs.microsoft.com/en-us/azure/kusto/query/sqlcheatsheet
  35. 35. #azuresatpn ADX Functions Functions are reusable queries or query parts. Kusto supports several kinds of functions: • Stored functions: user-defined functions that are stored and managed as one kind of the database's schema entities. See Stored functions. • Query-defined functions: user-defined functions that are defined and used within the scope of a single query. Such functions are defined through a let statement. See User-defined functions. • Built-in functions: hard-coded functions (defined by Kusto and not modifiable by users). LAB 06 Function example https://docs.microsoft.com/en-us/azure/kusto/query/functions/user-defined-functions
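     A minimal sketch of the first two kinds over the StormEvents sample table (function names and the threshold parameter are illustrative):
     // query-defined function via let, usable only inside this query
     let bigDamage = (threshold: long) { StormEvents | where DamageProperty > threshold };
     bigDamage(1000000) | count
     // stored function, persisted as a schema entity of the database
     .create-or-alter function with (folder = "Demo", docstring = "Events above a damage threshold") BigDamage(threshold: long) { StormEvents | where DamageProperty > threshold }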
  36. 36. #azuresatpn Language examples Alias: alias database["wiki"] = cluster("https://somecluster.kusto.windows.net:443").database("somedatabase"); database("wiki").PageViews | count Let: let start = ago(5h); let period = 2h; T | where Time > start and Time < start + period | ... Bin: T | summarize Hits=count() by bin(Duration, 1s) Batch: let m = materialize(StormEvents | summarize n=count() by State); m | where n > 2000; m | where n < 10 Tabular expression: Logs | where Timestamp > ago(1d) | join ( Events | where continent == 'Europe' ) on RequestId
  37. 37. #azuresatpn Time Series Analysis – Bin Operator bin operator: rounds values down to an integer multiple of a given bin size. If you have a scattered set of values, they will be grouped into a smaller set of specific values. [Rule] bin(value, roundTo) [Example] T | summarize Hits=count() by bin(Duration, 1s) USE CASE
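     A concrete use case over the StormEvents sample (column names from the sample database): bucket events into one-week bins and chart the result.
     StormEvents
     | summarize EventCount = count() by bin(StartTime, 7d)
     | render timechart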
  38. 38. #azuresatpn Time Series Analysis – Make Series Operator make-series operator [Rule] T | make-series [MakeSeriesParameters] [Column =] Aggregation [default = DefaultValue] [, ...] on AxisColumn from start to end step step [by [Column =] GroupExpression [, ...]] [Example] T | make-series sum(amount) default=0, avg(price) default=0 on timestamp from datetime(2016-01-01) to datetime(2016-01-10) step 1d by supplier USE CASE
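     A concrete sketch over the StormEvents sample: build a weekly series per state and chart it (the date range matches the 2007 sample data).
     StormEvents
     | make-series EventCount = count() default = 0 on StartTime from datetime(2007-01-01) to datetime(2008-01-01) step 7d by State
     | render timechart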
  39. 39. #azuresatpn Time Series Analysis – Basket Operator basket operator: finds all frequent patterns of discrete attributes (dimensions) in the data and returns all frequent patterns that passed the frequency threshold in the original query. [Rule] T | evaluate basket([Threshold, WeightColumn, MaxDimensions, CustomWildcard, CustomWildcard, ...]) [Example] StormEvents | where monthofyear(StartTime) == 5 | extend Damage = iff(DamageCrops + DamageProperty > 0 , "YES" , "NO") | project State, EventType, Damage, DamageCrops | evaluate basket(0.2) USE CASE
  40. 40. #azuresatpn Time Series Analysis – Autocluster Operator autocluster operator: AutoCluster finds common patterns of discrete attributes (dimensions) in the data and reduces the results of the original query (whether it's 100 or 100k rows) to a small number of patterns. [Rule] T | evaluate autocluster([SizeWeight, WeightColumn, NumSeeds, CustomWildcard, CustomWildcard, ...]) [Example] StormEvents | where monthofyear(StartTime) == 5 | extend Damage = iff(DamageCrops + DamageProperty > 0 , "YES" , "NO") | project State , EventType , Damage | evaluate autocluster(0.6) [Example with custom wildcards] StormEvents | where monthofyear(StartTime) == 5 | extend Damage = iff(DamageCrops + DamageProperty > 0 , "YES" , "NO") | project State , EventType , Damage | evaluate autocluster(0.2, '~', '~', '*') USE CASE
  41. 41. #azuresatpn Export To Storage: .export async compressed to csv ( h@"https://storage1.blob.core.windows.net/containerName;secretKey", h@"https://storage1.blob.core.windows.net/containerName2;secretKey" ) with ( sizeLimit=100000, namePrefix=export, includeHeaders=all, encoding=UTF8NoBOM ) <| myLogs | where id == "moshe" | limit 10000 To SQL: .export async to sql ['dbo.MySqlTable'] h@"Server=tcp:myserver.database.windows.net,1433;Database=MyDatabase;Authentication=Active Directory Integrated;Connection Timeout=30;" with (createifnotexists="true", primarykey="Id") <| print Message = "Hello World!", Timestamp = now(), Id=12345678 1. DEFINE COMMAND Define the ADX command and try your recurrent export strategy 2. TRY IN EDITOR Use an editor to try the command, verifying connection strings and parametrizing them 3. BUILD A JOB Build a Notebook or a C# job using the command as a SQL query in your code LAB 08 Export example https://docs.microsoft.com/en-us/azure/kusto/query/functions/user-defined-functions
  42. 42. #azuresatpn External tables & Continuous Export It's an external endpoint: • Azure Storage • Azure Data Lake Store • SQL Server You need to define: • Destination • Continuous-export strategy EXT TABLE CREATION .create external table ExternalAdlsGen2 (Timestamp:datetime, x:long, s:string) kind=adl partition by bin(Timestamp, 1d) dataformat=csv ( h@'abfss://filesystem@storageaccount.dfs.core.windows.net/path;secretKey' ) with ( docstring = "Docs", folder = "ExternalTables", namePrefix="Prefix" ) EXPORT to EXT TABLE .create-or-alter continuous-export MyExport over (T) to table ExternalAdlsGen2 with (intervalBetweenRuns=1h, forcedLatency=10m, sizeLimit=104857600) <| T
  43. 43. #azuresatpn Policy • Cache policy • Ingestion Batching policy • IngestionTime policy • Merge policy • Retention policy • Restricted view access policy • Row order policy • Streaming ingestion policy • Sharding policy • Update policy
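     As an illustration of one item from the list above, an update policy that transforms rows from a raw table into an enriched table at ingestion time; source/target table names and the transforming query are hypothetical, and the query's output schema must match the target table:
     .alter table TelemetryEnriched policy update @'[{"IsEnabled": true, "Source": "TelemetryRaw", "Query": "TelemetryRaw | extend TempF = Temperature * 1.8 + 32", "IsTransactional": false}]'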
  44. 44. #azuresatpn Cache policy FACTS: A) Kusto stores its ingested data in reliable storage (most commonly Azure Blob Storage). B) To speed up queries on that data, Kusto caches this data (or parts of it) on its processing nodes. The Kusto cache provides a granular cache policy that customers can use to differentiate between two data cache policies: hot data cache and cold data cache. YOU CAN SPECIFY WHICH LOCATION MUST BE USED: set query_datascope="hotcache"; T | union U | join (T datascope=all | where Timestamp < ago(365d)) on X Cache policy is independent of retention policy!
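     Setting the hot-cache window itself is also a one-liner in KQL (database and table names are placeholders):
     .alter database MyDatabase policy caching hot = 31d
     .alter table MyTable policy caching hot = 7d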
  45. 45. #azuresatpn Retention policy 2 parameters, applicable to a DB or a table: • Soft Delete Period (number): data is available for query; the timestamp used is the ADX ingestion date; default is set to 100 YEARS • Recoverability (enabled/disabled): default is set to ENABLED; data is recoverable for 14 days after deletion Use KUSTO to set KUSTO: .alter database DatabaseName policy retention "{}" .alter table TableName policy retention "{}" EXAMPLE: { "SoftDeletePeriod": "36500.00:00:00", "Recoverability":"Enabled" } .delete database DatabaseName policy retention .delete table TableName policy retention .alter-merge table MyTable1 policy retention softdelete = 7d
  46. 46. #azuresatpn Data Purge The purge process is final and irreversible PURGE PROCESS: 1. It requires database admin permissions 2. Prior to purging you have to be ENABLED, by opening a SUPPORT TICKET 3. Run the purge QUERY to identify SIZE and EXEC.TIME and obtain a VerificationToken 4. Run the purge QUERY again, passing the Verification Token 2-STEP PROCESS: .purge table MyTable records in database MyDatabase <| where CustomerId in ('X', 'Y') returns, for example: NumRecordsToPurge = 1,596; EstimatedPurgeExecutionTime = 00:00:02; VerificationToken = e43c7184ed22f4f23c7a9d7b124d196be2e570096987e5baadf65057fa65736b Then: .purge table MyTable records in database MyDatabase with (verificationtoken='e43c7184ed22f4f23c7a9d7b124d196be2e570096987e5baadf65057fa65736b') <| where CustomerId in ('X', 'Y') 1-STEP PROCESS (with no regrets!): .purge table MyTable records in database MyDatabase with (noregrets='true')
  47. 47. #azuresatpn KUSTO: Do and Don't • DO analytics over Big Data. • DO support entities such as databases, tables, and columns. • DO support complex analytics query operators (calculated columns, filtering, group by, joins). • DO NOT perform in-place updates.
  48. 48. #azuresatpn Virtual Network (preview) BENEFITS • Use NSG rules to limit traffic. • Connect your on-premises network to the Azure Data Explorer cluster's subnet. • Secure your data connection sources (Event Hub and Event Grid) with service endpoints. VNET gives you TWO independent IPs: • Private IP: access the cluster inside the VNet. • Public IP: access the cluster from outside the VNet (management and monitoring) and as a source address for outbound connections initiated from the cluster.
  49. 49. #azuresatpn ADLS & AzureDataExplorer
  50. 50. #azuresatpn <REAL-USE-CASES/>
  51. 51. #azuresatpn Event Correlation Get sessions from start and stop events Let's suppose we have a log of events, in which some events mark the start or end of an extended activity or session. Every event has a SessionId, so the problem is to match up the start and stop events with the same id. Example input (Name, City, SessionId, Timestamp): Start, London, 2817330, 2015-12-09T10:12:02.32; Game, London, 2817330, 2015-12-09T10:12:52.45; Start, Manchester, 4267667, 2015-12-09T10:14:02.23; Stop, London, 2817330, 2015-12-09T10:23:43.18; Cancel, Manchester, 4267667, 2015-12-09T10:27:26.29; Stop, Manchester, 4267667, 2015-12-09T10:28:31.72. Desired output (City, SessionId, StartTime, StopTime, Duration): London, 2817330, 2015-12-09T10:12:02.32, 2015-12-09T10:23:43.18, 00:11:40.46; Manchester, 4267667, 2015-12-09T10:14:02.23, 2015-12-09T10:28:31.72, 00:14:29.49. Kusto let Events = MyLogTable | where ... ; Events | where Name == "Start" | project Name, City, SessionId, StartTime=timestamp | join (Events | where Name == "Stop" | project StopTime=timestamp, SessionId) on SessionId | project City, SessionId, StartTime, StopTime, Duration = StopTime - StartTime Use let to name a projection of the table that is pared down as far as possible before going into the join. project is used to change the names of the timestamps so that both the start and stop times can appear in the result; it also selects the other columns we want to see in the result. join matches up the start and stop entries for the same activity, creating a row for each activity. Finally, project again adds a column to show the duration of the activity.
  52. 52. #azuresatpn In Place Enrichment Creating and using query-time dimension tables In many cases one wants to join the results of a query with some ad-hoc dimension table that is not stored in the database. It is possible to define an expression whose result is a table scoped to a single query by doing something like this: Kusto // Create a query-time dimension table using datatable let DimTable = datatable(EventType:string, Code:string) [ "Heavy Rain", "HR", "Tornado", "T" ] ; DimTable | join StormEvents on EventType | summarize count() by Code
  53. 53. #azuresatpn <TOOLS/>
  54. 54. #azuresatpn Azure Data Explorer Easy to ingest the data and easy to query the data Blob & Azure Queue Python SDK IoT Hub .NET SDK Azure Data Explorer REST API Event Hub .NET SDK Python SDK Web UI Desktop App Jupyter Magic APIs UX Power BI Direct Query Microsoft Flow Azure App Logic Connectors Grafana ADF MS-TDS Java SDK Java Script Monaco IDE Azure Notebooks Protocols Streaming Bulk APIs Blob & Event Grid Queued Ingestion Direct Java SDK
  55. 55. #azuresatpn • Web GUI • https://dataexplorer.azure.com • KUSTO Explorer • https://docs.microsoft.com/it-it/azure/kusto/tools/kusto-explorer • Visual Studio Code KQL plugin • KusKus Tools
  56. 56. #azuresatpn Brief Summary
  57. 57. #azuresatpn THANK YOU
  58. 58. #azuresatpn <OFF.LINE.DEMO/>
  59. 59. #azuresatpn KQL MAGIC
  60. 60. #azuresatpn Let’s do an Example
  61. 61. #azuresatpn
  62. 62. #azuresatpn Create table, load data … and play! Create Table .create table TestTable (TimeStamp: datetime, Name: string, Metric: int, Source:string) Ingest sample data .ingest into table StormEvents h'https://kustosamplefiles.blob.core.windows.net/samplefiles/StormEvents.csv?st=2018-08-31T22%3A02%3A25Z&se=2020-09-01T22%3A02%3A00Z&sp=r&sv=2018-03-28&sr=b&sig=LQIbomcKI8Ooz425hWtjeq6d61uEaq21UVX7YrM61N4%3D' with (ignoreFirstRecord=true)
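     Once the StormEvents sample is loaded, a quick query confirms the ingestion and gives you something to play with:
     StormEvents
     | summarize EventCount = count() by State
     | top 10 by EventCount
     | render barchart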
  63. 63. #azuresatpn Example • .create table tbl001_AABA (Date: datetime, Open: int, High: int, Low: int, Close: int, Volume: int) • .drop tables (tbl001_AABA) ifexists

Editor's notes

  • 1. TODAY'S GOAL
    2. The slides are intentionally self-explanatory!
  • - Today's intent is
    - what we will NOT say
    - why the examples are from MS
  • IF YOU ARE EARLY, TRY IT OUT!!!!
  • TRY IT IN VS Code
    CTRL+P => kuskus
    Then:

     cluster('adxclu001').database('db001').table('TBL_LAB01')
    | count
  • Show the NOTEBOOKS
  • DO SOME TESTS
    Do a distinct to introduce summarize
  • .create-or-alter function with (folder = "AzureSaturday2019", docstring = "Func1", skipvalidation = "true") MyFunction1(i:long) {TBL_LAB0X | limit 100 | where minimum_nights > i}
    MyFunction1(80);


    explain SELECT name, minimum_nights from TBL_LAB0X

    .create-or-alter function with (folder = "AzureSaturday2019", docstring = "Func1", skipvalidation = "true") MyFunction1(i:long) {TBL_LAB0X | project name, minimum_nights | limit 100 | where minimum_nights > i | render columnchart} MyFunction1(80);
  • T | summarize Hits=count() by bin(Duration, 1s)