GoldenGate Cloud Service is part of Oracle’s PaaS portfolio. From a technical perspective it is simply standard GoldenGate deployed on a VM in Oracle Cloud, so the same proven architecture works in the cloud.
GGCS can be used for different cases, from zero-downtime migration to real-time DWH feeding. More cases, such as Big Data and data pipeline feeding, are on the way.
So what do you need to use GoldenGate Cloud Service? You should have:
- a database instance in the cloud (DBaaS or Exadata CS)
- a subscription to GoldenGate Cloud Service
- a Storage Cloud Service instance (it is used for backups)
GGCS is currently available as a Non-Metered service. With the Non-Metered service you pay even when your GoldenGate instance is down.
Soon GGCS will also be available as a Metered service, so it will be possible to pay on a per-hour basis. This capability will open up new cases such as Dev/Test cloud environment synchronization. Imagine you have a database in the cloud for testing purposes that you need to synchronize periodically (every week or month) with the production database. You don’t need GGCS running all the time; instead you can run it for, say, two hours every Sunday to apply the captured data. This approach can save a lot of money.
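As a sketch of that scheduled-sync idea, the replicat could be started and stopped from cron. The schedule, paths and process name below are hypothetical examples, not part of GGCS:

```shell
# Hypothetical crontab entries: start the replicat every Sunday at 02:00
# to apply the week's captured trail data, and stop it two hours later.
# GoldenGate home path and process name "da" are examples only.
0 2 * * 0  cd /u01/app/oracle/gghome && echo "start replicat da" | ./ggsci
0 4 * * 0  cd /u01/app/oracle/gghome && echo "stop replicat da"  | ./ggsci
```

With a Metered subscription the GGCS instance itself could likewise be stopped between runs to avoid charges.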
GoldenGate Cloud Service Architecture
A GoldenGate replication solution for the cloud consists of two parts: an on-premise part and a public cloud part. The on-premise part is exactly the same as in a pure on-premise solution. The public part is a separate compute instance with GoldenGate deployed on it. To transfer data, you can either open ports on the GoldenGate instance or tunnel the data through SSH (preferred for security).
Below you can see the architecture of a GoldenGate deployment for the upstream case, where GoldenGate synchronizes a public cloud database instance using a trail delivered from the on-premise database.

Let’s configure GoldenGate for this case.
Configuring GoldenGate for uploading data to Cloud Database
GoldenGate replication for the cloud consists of two parts: creating a GoldenGate Cloud Service instance and configuring GoldenGate itself. The first step is relatively unique to the cloud, but the second part is absolutely trivial and should be no problem for any experienced GoldenGate professional.
Configure GoldenGate Cloud Instance
1. Log in to the My Services dashboard: https://cloud.oracle.com/en_US/sign-in
2. You should see the services required to create GGCS instances: Compute, Database and Storage.
3. Click Create Instance and choose the GGCS instance type. You will be redirected to the GoldenGate Cloud Service Console. Click Create Service there. You will see the main page with the GoldenGate Cloud Service parameters.
4. Enter the parameters, press Next, agree to the summary information and press Create.
| Parameter name | Value | Description |
| --- | --- | --- |
| Service Name | GGCS1 | Unique name for this instance |
| Service Description | GoldenGate Cloud Service 1 | Just a description |
| OS Public User Key Text | Create a new key using the Edit button | The public key for user identification. You can upload an existing key or create a new one using the dialog |
| Compute Shape | OC5 — 4.0 OCPU, 30.0 GB RAM | Size of the GoldenGate compute node. This size doesn’t influence the cost of GoldenGate Cloud Service. Earlier OC1 was available; now I see only OC5 |
| Cloud Storage Container | Storage-XXXXXXX/BackupStorage | Storage container name in the format <storage service name>-<identity domain name>/<container name>. You should pre-create it using the REST API or create it as a backup container in the DBaaS creation wizard |
| Cloud Storage User Name | Your username, like you.name@company.com | |
| Cloud Storage Password | Your password | |
| Database Service | RYNDINGG1 | The database service that will be the target of GoldenGate replication. You create it using the DBaaS wizard |
| PDB Name | PDB1 | The pluggable database that will be the target of replication |
| Username | SYS | A database user with SYSDBA privileges |
| Password | Your password | Password for SYS |
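If you need to pre-create the storage container via the REST API, a sketch could look like the following. It assumes the Swift-style endpoint of Oracle Storage Cloud Service; the host name, identity domain, credentials and token value are placeholders you must replace with your own:

```shell
# Request an authentication token; the response carries an
# X-Auth-Token header used in subsequent calls.
curl -i -X GET \
  -H "X-Storage-User: Storage-XXXXXXX:you.name@company.com" \
  -H "X-Storage-Pass: YourPassword" \
  https://XXXXXXX.storage.oraclecloud.com/auth/v1.0

# Create the BackupStorage container with the returned token.
curl -i -X PUT \
  -H "X-Auth-Token: AUTH_tkXXXXXXXXXXXXXXXX" \
  https://XXXXXXX.storage.oraclecloud.com/v1/Storage-XXXXXXX/BackupStorage
```

A 201 Created response indicates the container exists and can be referenced in the GGCS creation wizard.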
5. Return to the GoldenGate Cloud Service dashboard. You will see the newly created instance.
Configure SSH
6. Click the service name (GGCS1). You will see a page with the GoldenGate Cloud Instance configuration.
7. Write down the IP address of the cloud instance. We will use it to create a tunnel from the source database.
8. If you created your public and private keys using the wizard and plan to use PuTTY, you should convert your private key to PuTTY format using the puttygen utility. See my previous post about DBaaS for detailed steps.
9. Log in to the IP address you wrote down using SSH, passing your private key. Use the "opc" username to log in, then run "sudo su - oracle" to switch to the oracle user, because everything is installed under this user's credentials.
10. Return to the GoldenGate Cloud Service dashboard. Click the menu and check "Access Rules".
11. By default only two ports are open: inbound SSH from the internet and outbound from the GoldenGate instance to the DBaaS instance. So it is secure by default. Let's use this configuration.
12. Open PuTTY, load your session and add a tunnel for port 1080, then save and reconnect.
13. Now you can use localhost:1080 as a SOCKS5 proxy. You can access any port on the GGCS instance through this proxy. For example, we can access port 7809 (MGR) and the port range for collectors (7740-7760). The data is tunneled and encrypted by SSH.
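If you work from Linux or macOS instead of PuTTY, the same dynamic SOCKS tunnel can be opened with the OpenSSH client. The key path and instance IP below are placeholders:

```shell
# -D 1080 opens a local SOCKS5 proxy on port 1080; all traffic sent
# to it is forwarded through the encrypted SSH connection and exits
# on the GGCS instance.
ssh -i ~/.ssh/ggcs_private_key -D 1080 opc@<GGCS_instance_IP>
```

Keep this session open while the pump is running; the proxy disappears when the SSH connection closes.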
Configure MGR on GGCS side
14. Let’s log in to the ggsci console:
```
[oracle@ggcs1-ggcs-1 ~]$ cd /u01/app/oracle/gghome/
[oracle@ggcs1-ggcs-1 gghome]$ ./ggsci

GGSCI (ggcs1-ggcs-1) 1> info mgr

Manager is DOWN!
```
15. Let’s configure the manager:
```
GGSCI (ggcs1-ggcs-1) 5> edit params mgr
```
16. By default the mgr parameter file contains multiple parameters:
```
-- Manager port number
-- GGCS Added default port number to 7744
PORT 7744

-- As a Manager parameter, PURGEOLDEXTRACTS allows to manage trail
-- files in a centralized fashion and take into account multiple
-- processes.

-- For allocating the dynamicportlist. Here the range is starting from port n1 through n2.
Dynamicportlist 7740-7760

-- Enable security rule for collector
ACCESSRULE, PROG COLLECTOR, IPADDR <IP_OF_GGCS_VM>, ALLOW

-- We will have a predefined path to the trail file
PURGEOLDEXTRACTS <path to the trail file>, USECHECKPOINTS, MINKEEPHOURS <"x" hours> MINKEEPFILES <"y" number of files>

-- If using the DDL trigger then use the following parameters to purge the DDL history table.
USERID <userid>, PASSWORD <passwd>

-- Start one or more Extract and Replicat processes automatically after they fail.
-- AUTORESTART provides fault tolerance when something temporarily
-- interferes with a process, such as intermittent network
-- outages or programs that interrupt access to transaction logs.
AUTORESTART REPLICAT *, RETRIES <x>, WAITMINUTES <y>, RESETMINUTES <z>

-- This is to specify a lag threshold that is considered critical,
-- and to force a warning message to the error log. The Lagreport parameter
-- specifies the interval at which manager checks for extract / replicat
-- lag.
--LAGREPORTMINUTES <x>
--LAGCRITICALMINUTES <y>

-- Reports down processes
--DOWNREPORTMINUTES <n>
--DOWNCRITICAL
```
17. Let’s remove some parameters for simplicity:
```
PORT 7809
Dynamicportlist 7740-7760
```
18. Save the parameters and start the manager:
```
GGSCI (ggcs1-ggcs-1) 8> start mgr

Manager started.

GGSCI (ggcs1-ggcs-1) 9> info mgr

Manager is running (IP port ggcs1-ggcs-1.7809, Process ID 7248).
```
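Before configuring the pump, you can verify from the on-premise host that the manager port is reachable through the SOCKS tunnel. This sketch assumes an OpenBSD-style netcat (the `-x`/`-X` proxy options) and that the tunnel on localhost:1080 is already open:

```shell
# -x sets the proxy address, -X 5 selects SOCKS5, -z scans without
# sending data. "localhost" is resolved on the far end of the tunnel,
# i.e. the GGCS instance itself.
nc -z -X 5 -x localhost:1080 localhost 7809 && echo "manager port reachable"
```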
Configure on-premise GoldenGate
19. Now let’s configure the source side. This is an absolutely standard procedure except for the RMTHOST parameter of the pump, which takes some special options.
20. My source extract has the following configuration. I use FORMAT RELEASE for the exttrail because GoldenGate Cloud Service uses GoldenGate 12.1 while my source runs a newer GoldenGate release.
```
EXTRACT xa
setenv (ORACLE_HOME=C:\app\aryndin99\product\db\12.1.0.2\dbhome_1)
USERIDALIAS cggadmincdb
EXTTRAIL ./dirdat/ta, FORMAT RELEASE 12.1
TABLE PDB.GGTEST.*;
```
21. You can add and start it using the following commands:
```
add credentialstore
alter credentialstore add user c##ggadmin@localhost:1521/orcl3.local password oracle alias cggadmincdb
dblogin useridalias cggadmincdb

delete xa

add extract xa, integrated tranlog, begin now
add exttrail ./dirdat/ta, EXTRACT xa, MEGABYTES 2000
register extract xa database

start xa
```
22. The pump configuration is also simple:
```
extract pa
rmthost localhost, mgrport 7809, SOCKSPROXY localhost:1080
rmttrail ./dirdat/ta, FORMAT RELEASE 12.1
passthru
table pdb.*.*;
```
Pay attention to the SOCKSPROXY parameter. It tells the pump to send data to localhost:1080, where it is tunneled through SSH and then forwarded to the respective port on the SSH server.
23. Add the pump and start it:
```
delete pa

add extract pa, exttrailsource ./dirdat/ta, extseqno 19, extrba 0
ADD rmttrail ./dirdat/ta, EXTRACT pa, MEGABYTES 2000

start pa
```
24. Now you should see the first trail file on the GGCS side:
```
[oracle@ggcs1-ggcs-1 gghome]$ ls /u01/app/oracle/gghome/dirdat
ta000000
```
Configure Cloud GoldenGate
25. Create a GGADMIN user on the GGCS side. It will be used by the GoldenGate processes:
```
CONNECT SYS/XXXXXXXXX@PDB1 as sysdba

CREATE USER GGADMIN IDENTIFIED BY oracle;
GRANT CREATE SESSION, RESOURCE to GGADMIN;
GRANT CREATE VIEW to GGADMIN;
GRANT SELECT ANY TRANSACTION to GGADMIN;
GRANT UNLIMITED TABLESPACE to GGADMIN;
grant select on sys.opqtype$ to GGADMIN;
exec dbms_goldengate_auth.grant_admin_privilege('GGADMIN', GRANT_SELECT_PRIVILEGES=>TRUE);
```
26. Connect to the GGCS instance using SSH and configure the replicat:
```
replicat da
USERID ggadmin@XXX.XXX.XXX.XXX:1521/pdb1.XXXXXXX.oraclecloud.internal PASSWORD oracle
discardfile ./dirrpt/da.dsc, purge
assumetargetdefs
map pdb.ggtest.*, target ggtest.*;
```
27. Then add the replicat and start it:
```
dblogin userid system@XXX.XXX.XXX.XXX:1521/pdb1.XXXXXXX.oraclecloud.internal PASSWORD Welcome1_
add replicat da integrated exttrail ./dirdat/ta
start replicat da
info all
```
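Once the replicat is running, a few standard GGSCI commands help verify that data is actually flowing (the process name da matches the configuration above):

```
info all                               -- status of manager, extracts and replicats
stats replicat da, totalsonly *        -- cumulative row counts applied per table
lag replicat da                        -- current apply lag
view report da                         -- full process report, including any errors
```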
28. Create the source tables:
```
-- Demo user
drop user ggtest cascade;
create user ggtest identified by oracle;
alter user ggtest quota unlimited on users;
grant connect,resource to ggtest;

DROP TABLE ggtest.departments cascade constraints;
DROP TABLE ggtest.employees cascade constraints;

CREATE TABLE "GGTEST"."DEPARTMENTS"
  ( "DEPARTMENT_ID"   NUMBER(20,0),
    "DEPARTMENT_NAME" VARCHAR2(30 BYTE),
    "MANAGER_ID"      NUMBER(6,0),
    "LOCATION_ID"     NUMBER(4,0),
    PRIMARY KEY ("DEPARTMENT_ID"))
  TABLESPACE "USERS";

CREATE TABLE "GGTEST"."EMPLOYEES"
  ( "EMPLOYEE_ID"    NUMBER(20,0),
    "FIRST_NAME"     VARCHAR2(20 BYTE),
    "LAST_NAME"      VARCHAR2(25 BYTE),
    "EMAIL"          VARCHAR2(25 BYTE),
    "PHONE_NUMBER"   VARCHAR2(20 BYTE),
    "HIRE_DATE"      DATE,
    "JOB_ID"         VARCHAR2(10 BYTE),
    "SALARY"         NUMBER(8,2),
    "COMMISSION_PCT" NUMBER(2,2),
    "MANAGER_ID"     NUMBER(6,0),
    "DEPARTMENT_ID"  NUMBER(4,0) REFERENCES GGTEST.DEPARTMENTS (DEPARTMENT_ID),
    PRIMARY KEY ("EMPLOYEE_ID"))
  TABLESPACE "USERS";

CREATE INDEX "GGTEST"."EMP_DEPT_FK_IDX" ON "GGTEST"."EMPLOYEES" ("DEPARTMENT_ID")
  TABLESPACE "USERS";

DROP SEQUENCE ggtest.EMP_SEQ;
CREATE SEQUENCE ggtest.EMP_SEQ CACHE 100;
DROP SEQUENCE ggtest.DEPT_SEQ;
CREATE SEQUENCE ggtest.DEPT_SEQ CACHE 100;

--TRUNCATE TABLE ggtest.employees;
--TRUNCATE TABLE ggtest.departments;

exit;
```
29. Create the target tables in the DBaaS database:
```
-- Demo user
create user ggtest identified by oracle;
alter user ggtest quota unlimited on users;
grant connect,resource to ggtest;

DROP TABLE ggtest.departments cascade constraints;
DROP TABLE ggtest.employees cascade constraints;

CREATE TABLE "GGTEST"."DEPARTMENTS"
  ( "DEPARTMENT_ID"   NUMBER(20,0),
    "DEPARTMENT_NAME" VARCHAR2(30 BYTE),
    "MANAGER_ID"      NUMBER(6,0),
    "LOCATION_ID"     NUMBER(4,0),
    PRIMARY KEY ("DEPARTMENT_ID"))
  TABLESPACE "USERS";

CREATE TABLE "GGTEST"."EMPLOYEES"
  ( "EMPLOYEE_ID"    NUMBER(20,0),
    "FIRST_NAME"     VARCHAR2(20 BYTE),
    "LAST_NAME"      VARCHAR2(25 BYTE),
    "EMAIL"          VARCHAR2(25 BYTE),
    "PHONE_NUMBER"   VARCHAR2(20 BYTE),
    "HIRE_DATE"      DATE,
    "JOB_ID"         VARCHAR2(10 BYTE),
    "SALARY"         NUMBER(8,2),
    "COMMISSION_PCT" NUMBER(2,2),
    "MANAGER_ID"     NUMBER(6,0),
    "DEPARTMENT_ID"  NUMBER(4,0) REFERENCES GGTEST.DEPARTMENTS (DEPARTMENT_ID),
    PRIMARY KEY ("EMPLOYEE_ID"))
  TABLESPACE "USERS";

GRANT INSERT, UPDATE, DELETE, SELECT ON ggtest.DEPARTMENTS TO GGADMIN;
GRANT INSERT, UPDATE, DELETE, SELECT ON ggtest.EMPLOYEES TO GGADMIN;
```
30. Generate some test data on the source:
```
--
-- does some inserts, updates, deletes
--
ALTER SESSION SET CURRENT_SCHEMA=GGTEST;

declare
  inserts_pct   number := 90;
  updates_pct   number := 5;
  deletes_pct   number := 5;
  t             number;
  commit_period number := 50;
  t_fname       varchar2(50);
  t_lname       varchar2(50);
  t_deptid      number;
  n             number := 10000;

  TYPE depts_array IS TABLE OF DEPARTMENTS.DEPARTMENT_ID%TYPE INDEX BY BINARY_INTEGER;
  TYPE empid_array IS TABLE OF EMPLOYEES.EMPLOYEE_ID%TYPE INDEX BY BINARY_INTEGER;

  depts_id     depts_array;
  empid_delete empid_array;
  empid_update empid_array;

  procedure insert_dept (P_DEPARTMENT_ID number, p_DEPARTMENT_NAME varchar2) as
  begin
    INSERT INTO DEPARTMENTS (DEPARTMENT_ID, DEPARTMENT_NAME)
    VALUES (P_DEPARTMENT_ID, p_DEPARTMENT_NAME);
  end;

  procedure fill_some_departments (p_dept_count number) as
    t_dname varchar2(50);
  begin
    for i in 1..p_dept_count loop
      t_dname := DBMS_RANDOM.STRING('A', 20);
      insert_dept(DEPT_SEQ.NEXTVAL, t_dname);
    end loop;
    commit;
    select DEPARTMENT_ID BULK COLLECT INTO depts_id from DEPARTMENTS;
  end;

  procedure fill_empid_to_update (pct number := 10) as
  begin
    select employee_id BULK COLLECT INTO empid_update from employees sample(10);
  end;

  procedure fill_empid_to_delete (pct number := 10) as
  begin
    select employee_id BULK COLLECT INTO empid_delete from employees sample(10);
  end;

  procedure insert_emp (P_EMPLOYEE_ID number, p_FIRST_NAME varchar2,
                        P_LAST_NAME varchar2, P_HIRE_DATE date,
                        P_SALARY number, P_DEPARTMENT_ID number) as
  begin
    DBMS_OUTPUT.PUT_LINE(P_DEPARTMENT_ID);
    INSERT INTO EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL,
                           HIRE_DATE, SALARY, JOB_ID, DEPARTMENT_ID)
    VALUES (P_EMPLOYEE_ID, p_FIRST_NAME, P_LAST_NAME, p_FIRST_NAME||'@O.RU',
            P_HIRE_DATE, P_SALARY, 100, P_DEPARTMENT_ID);
  end;

  procedure update_emp as
    emp_id number;
    r      number;
  begin
    r := DBMS_RANDOM.value(1, empid_update.count);
    emp_id := empid_update(r);
    update EMPLOYEES set SALARY = salary + 0.5 where EMPLOYEE_ID = emp_id;
  exception
    WHEN OTHERS THEN NULL;
  end;

  procedure delete_emp as
    emp_id number;
    r      number;
  begin
    r := DBMS_RANDOM.value(1, empid_delete.count);
    emp_id := empid_delete(r);
    delete from employees where employee_id = emp_id;
  exception
    WHEN OTHERS THEN NULL;
  end;

begin
  fill_some_departments(10);
  fill_empid_to_update(10);
  fill_empid_to_delete(10);
  for i in 1..n loop
    t := DBMS_RANDOM.value(1, 100);
    if t < inserts_pct then
      t_fname  := DBMS_RANDOM.STRING('A', 20);
      t_lname  := DBMS_RANDOM.STRING('B', 10);
      t_deptid := depts_id(DBMS_RANDOM.value(1, depts_id.COUNT));
      insert_emp(EMP_SEQ.NEXTVAL, t_fname, t_lname, sysdate, t, t_deptid);
    elsif t < inserts_pct + updates_pct then
      update_emp;
    else
      delete_emp;
    end if;
    if mod(i, commit_period) = 0 then
      commit;
    end if;
  end loop;
end;
/

select count(*) from employees;
```
31. Now replication is working and we can see the data in the cloud almost immediately.
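A quick way to confirm the apply is to compare row counts on both sides, running the same queries on the source and against PDB1 on the target (small differences are expected while the replicat catches up):

```
select count(*) from ggtest.employees;
select count(*) from ggtest.departments;
```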
Summary
GoldenGate Cloud Service is a unique feature of Oracle Public Cloud. It allows you to upload data from a local database to a cloud database almost instantly. It also allows you to get data from the cloud back to an on-premise system, or even to synchronize your cloud instances (for the active-active case).
GoldenGate Cloud Service is a relatively new service, but under the covers you see the proven GoldenGate engine. So the best replication and CDC engine is now available for the cloud.