Creating a realtime streaming pipeline with Oracle Stream Analytics that gets data from an OCI Streaming topic containing credit card movements | Part one


Oracle Stream Analytics allows users to process and analyze large scale real-time information by using sophisticated correlation patterns, enrichment, and machine learning. It offers real-time actionable business insight on streaming data and automates action to drive today’s agile businesses.

Hands on!

Create an OCI Streaming topic from Oracle Cloud Shell; it will be created in the “DefaultPool” stream pool:

comp=ocid1.compartment.oc1..aaaaaaaa44....w62q
oci streaming admin stream create --name creditcards --partitions 1 --compartment-id $comp

Go to the created stream, click on its stream pool, and take note of the information in the [Kafka Connection Settings] tab: the Bootstrap Servers and the Connection Strings (which contain the username).
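As a reference, a Kafka client configuration built from those values looks like the following (region, tenancy, user, and stream pool OCID are placeholders; the password is the auth token created in the next step):

```properties
bootstrap.servers=cell-1.streaming.us-ashburn-1.oci.oraclecloud.com:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# username format: <tenancy-name>/<username>/<stream-pool-ocid>
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="mytenancy/jdoe@example.com/ocid1.streampool.oc1..aaaa...example" \
  password="<auth-token>";
```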

Go to your user settings and create an Auth Token; copy the generated token for later, as it won’t be shown again:
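The token can also be created from the CLI; this is a sketch with a placeholder user OCID, not a runnable command as-is:

```shell
# Create an auth token (used as the Kafka SASL password) for your user;
# the user OCID below is a placeholder -- replace it with your own.
oci iam auth-token create \
  --user-id "ocid1.user.oc1..aaaa...replace_me" \
  --description "osa-kafka-token" \
  --query 'data.token' --raw-output
```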

Provision GoldenGate Stream Analytics (GGOSA) from the OCI Marketplace:

Log in to GoldenGate Stream Analytics; the credentials can be obtained by SSH-ing into the OSA machine, as explained here

Create a new connection item in the catalog:

Give it a name and select the Kafka type

Provide the bootstrap servers, the username, and the auth token created previously:

Click on [Test Connection] to validate the settings

Clone this repo to get a random credit card movements generator, put your own values in the myppk and ccg.sh files, and execute ./ccg.sh; the output should look like the following:
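To make the generator's job concrete, here is a minimal sketch of what such a tool could emit and how a record gets published to OCI Streaming. The field names (cardNumber, cardType, amount, ts) are illustrative and may differ from the schema used by the repo's ccg.sh, and the stream OCID is a placeholder:

```shell
#!/usr/bin/env bash
# Generate one random credit-card movement as a JSON record.
gen_movement() {
  local types=(VISA AMEX MASTERCARD)
  local type=${types[RANDOM % 3]}
  local card=$((4000000000000000 + RANDOM))
  printf '{"cardNumber":"%s","cardType":"%s","amount":%s.%02d,"ts":"%s"}' \
    "$card" "$type" "$((RANDOM % 500))" "$((RANDOM % 100))" "$(date -u +%FT%TZ)"
}

msg=$(gen_movement)
echo "$msg"

# OCI Streaming expects message values base64-encoded; the publish call
# (with a placeholder stream OCID) would look like this:
payload=$(printf '%s' "$msg" | base64 -w0)
# oci streaming stream message put \
#   --stream-id "ocid1.stream.oc1..aaaa...replace_me" \
#   --messages "[{\"key\": null, \"value\": \"$payload\"}]"
```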

Now, let’s create a stream in OSA:

If you like, add tags to make the catalog easier to browse

At the end, we’ll have our stream created:

Let’s continue by creating a pipeline in OSA:

Start the ccg utility to generate movements

Now, the movements should appear in the live output:

Let’s create query groups to detect Amex and Visa cards:

Visa filter
Amex filter
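The query group filters can simply match the cardType field, but if only the card number is available, the standard issuer prefixes work too: Visa numbers start with 4, Amex with 34 or 37. A small sketch of that classification logic:

```shell
# Classify a card number by its issuer prefix
# (Visa: 4..., Amex: 34.../37...).
card_brand() {
  case "$1" in
    4*)      echo VISA ;;
    34*|37*) echo AMEX ;;
    *)       echo OTHER ;;
  esac
}

card_brand 4000111122223333   # VISA
card_brand 371234567890123    # AMEX
```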

For the Visa branch, we’ll detect repeated movements on the same card number within the last 24 hours
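OSA configures this as a time window in the UI, with no code needed. Conceptually, over a static sample instead of a live 24-hour window, the pattern is equivalent to flagging card numbers that appear more than once:

```shell
# Flag card numbers seen more than once in the input
# (a static stand-in for OSA's 24-hour sliding window).
detect_repeats() {
  awk -F'"cardNumber":"' '{split($2, a, "\""); count[a[1]]++}
       END {for (c in count) if (count[c] > 1) print c}'
}

printf '%s\n' \
  '{"cardNumber":"4000111122223333","cardType":"VISA","amount":10.5}' \
  '{"cardNumber":"4000999988887777","cardType":"VISA","amount":99.0}' \
  '{"cardNumber":"4000111122223333","cardType":"VISA","amount":42.0}' \
  | detect_repeats
# prints the repeated card number: 4000111122223333
```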

We’ll add several more stages to build a pipeline that performs different tasks depending on the branch and routes Visa card records to an Object Storage bucket in Parquet format, a Kafka topic, and an OCI Notifications topic, as examples of the many targets supported by the tool. Once published, it looks like this:

The pipeline with 2 branches, plus 2 more for the Visa cards; each branch ends up in a different target

Visa-filtered records stored in an Object Storage bucket in Parquet format:

A notification sent to an email recipient:

Here’s the Spark console:

In the next post, we’ll dig deeper into other pipeline capabilities.

That’s all for now, hope it helps! 🙂
