Oracle Stream Analytics allows users to process and analyze large-scale real-time information using sophisticated correlation patterns, enrichment, and machine learning. It offers real-time, actionable business insight on streaming data and automates actions to drive today’s agile businesses.
Hands on!
Create an OCI Streaming topic in Oracle Cloud Shell; it will be created in the “DefaultPool” stream pool:
comp=ocid1.compartment.oc1..aaaaaaaa44....w62q
oci streaming admin stream create --name creditcards --partitions 1 --compartment-id $comp
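To double-check that the stream was created, something like this should list it from the same Cloud Shell session:

# List the streams in the compartment and filter by name
oci streaming admin stream list --compartment-id $comp --name creditcards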
Go to the created stream topic, click on the stream pool, and get the information provided in the [Kafka Connection Settings] tab: Bootstrap Servers and Connection Strings (the username).


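If you prefer the CLI, the same Kafka settings can be read from the stream pool; the OCIDs below are placeholders:

# Get the stream to find out which stream pool it belongs to
oci streaming admin stream get --stream-id <stream-ocid>
# The stream pool details include the Kafka connection settings (bootstrap servers, etc.)
oci streaming admin stream-pool get --stream-pool-id <stream-pool-ocid>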
Go to your user and create an Auth Token; grab the generated token for later:


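The same can be done from the CLI if you know your user OCID (placeholder below); note that the token value is only displayed once:

# Create an auth token to use as the Kafka password later on
oci iam auth-token create --user-id <user-ocid> --description "osa-kafka"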
Provision GoldenGate Stream Analytics (GGOSA) from the OCI Marketplace:





Log in to GoldenGate Stream Analytics; the credentials can be obtained by SSH-ing into the OSA machine as explained here.

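For reference, the connection looks roughly like this (OCI Marketplace images use the opc user; the key and IP are of course your own):

# SSH into the OSA compute instance with the key provided at provisioning time
ssh -i ~/.ssh/osa_key opc@<osa-public-ip>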
Create a new connection item in the catalog:


Provide the bootstrap server, user, and auth token created previously:

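For reference, the values end up looking roughly like the following; the region, tenancy, user and stream pool OCID are made up, and OCI Streaming’s Kafka endpoint expects SASL_SSL with the PLAIN mechanism and the auth token as the password. Saving them in a properties file is also handy for the command-line tests further down:

# Illustrative Kafka client settings for OCI Streaming (replace every value with your own)
cat > osa-kafka.properties <<'EOF'
bootstrap.servers=cell-1.streaming.eu-frankfurt-1.oci.oraclecloud.com:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# username format: <tenancy-name>/<user-name>/<stream-pool-ocid>, password: the auth token
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="mytenancy/me@example.com/ocid1.streampool.oc1..aaaa..." password="<auth-token>";
EOF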
Clone this repo to create a random credit card movements generator, put your own values in the myppk and ccg.sh files, and execute ./ccg.sh; the output should look something like this:

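If you just want to see the shape of the data without cloning the repo, a minimal stand-in generator could look like the sketch below; the field names, the osa-kafka.properties file from above and the Kafka console scripts are assumptions, not the repo’s actual code:

# Push random credit card movements as JSON into the creditcards topic
while true; do
  card_type=$(shuf -e VISA AMEX MASTERCARD -n 1)
  card_number=$(( 4000000000000000 + RANDOM % 10000 ))
  printf '{"cardnumber":"%s","cardtype":"%s","amount":%s,"ts":%s}\n' \
      "$card_number" "$card_type" "$(( RANDOM % 500 ))" "$(date +%s)"
  sleep 1
done | kafka-console-producer.sh \
    --bootstrap-server cell-1.streaming.eu-frankfurt-1.oci.oraclecloud.com:9092 \
    --topic creditcards \
    --producer.config osa-kafka.properties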
Now, we’re gonna create a stream in OSA:





At the end, we’ll have our stream created:
Let’s continue by creating a pipeline in OSA:


Start the ccg utility to generate the movements:

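To double-check that the messages are actually landing in the topic, you can also tail it with the standard Kafka console consumer, reusing the osa-kafka.properties file sketched above:

# Consume the creditcards topic from the beginning to verify the generator output
kafka-console-consumer.sh \
    --bootstrap-server cell-1.streaming.eu-frankfurt-1.oci.oraclecloud.com:9092 \
    --topic creditcards \
    --consumer.config osa-kafka.properties \
    --from-beginning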
Now the movements should appear in the live output:

Let’s create query groups to detect Amex and Visa cards:



For the Visa branch, we’re gonna detect movements on the same card number within the last 24 hours:


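Just to make that logic concrete, this is roughly what the query group computes, shown offline against a file of captured movements; the movements.json file and its cardnumber/ts fields are assumptions matching the generator sketch above:

# Count movements per card number over the last 24 hours (86400 seconds)
jq -s '[ .[] | select(.ts > (now - 86400)) ]
       | group_by(.cardnumber)
       | map({cardnumber: .[0].cardnumber, movements: length})' movements.json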
And a few more steps to build a pipeline that performs several tasks and, depending on the branch, routes the Visa card records to an Object Storage bucket in Parquet format, a Kafka topic, and an OCI Notifications topic, as examples of the targets supported by the tool. This is how it looks once published:

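The target resources themselves can be created beforehand from the CLI; the names and the email address below are just examples:

# Object Storage bucket for the Parquet files
oci os bucket create --name visa-movements --compartment-id $comp
# OCI Streaming topic used as the Kafka target
oci streaming admin stream create --name visa-out --partitions 1 --compartment-id $comp
# Notification topic plus an email subscription
topic_id=$(oci ons topic create --name visa-alerts --compartment-id $comp --query 'data."topic-id"' --raw-output)
oci ons subscription create --compartment-id $comp --topic-id $topic_id --protocol EMAIL --subscription-endpoint you@example.com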
The filtered Visa records stored in the Object Storage bucket in Parquet format:

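You can also confirm the files from the CLI (bucket name as per the example above):

# List the Parquet objects written by the pipeline
oci os object list --bucket-name visa-movements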
A notification sent to the email recipient:

Here is the Spark console:


In the next post, we’ll dig deeper into other capabilities of the pipelines.
That’s all for now, hope it helps! 🙂