Provided you understand that with SAP ASE you cannot shrink a device, you may instead be looking to shrink the size of a transaction log or data segment within a device.
From isql, list out the segments of the transaction log using "sp_helpdb <dbname>" as follows (I've cleaned up the output a little):
1> sp_helpdb saptools
2> go
name     db_size   owner dbid created      durability lobcomplvl inrowlen status
-------- --------- ----- ---- ------------ ---------- ---------- -------- ------
saptools 4628.0 MB sapsa 5    Jul 20, 2015 full       100        2000     trunc log on chkpt, ddl in tran, allow nulls by default, abort tran on log full, allow wide dol rows, allow incremental dumps, full logging for all

(1 row affected)

device_fragments  size      usage     created             free kbytes
----------------- --------- --------- ------------------- --------------
saptools_data_001 2048.0 MB data only Jul 20 2015  6:50PM 1626832
saptools_log_001   204.0 MB log only  Jul 20 2015  6:50PM not applicable
saptools_log_001   308.0 MB log only  Oct 26 2015 10:16AM not applicable
saptools_data_001 2048.0 MB data only Nov 10 2015 11:51AM 2088960
saptools_data_001   20.0 MB data only Nov 10 2015 11:53AM 20400

log only free kbytes = 522128
(return status = 0)
You can see that saptools_log_001 has been extended in the past, because there are multiple lines under "device_fragments" for the saptools_log_001 device. There are two log segments: one of 204MB and one of 308MB. We can remove one of these segments to free up space within the device saptools_log_001 (remember, this will not return the space to the operating system; you cannot do that in ASE). Since this is a transaction log, we need to ensure that the segment we are removing no longer holds active log records. In our case we are happy to simply truncate, but you may wish to actually dump the transaction log to your backup tool or to disk.
1> dump tran saptools with truncate_only
2> go
NOTE: You may need to do the above multiple times before the log segments become free. Once cleared, we can tell the saptools database to remove the 308MB segment by specifying the exact size of the segment we would like to remove. NOTE: You cannot just choose any segment; you must start with the last segment and work backwards, or you will create "holes" in your device.
1> alter database saptools log off saptools_log_001 = 308
2> go
Removing 19712 pages (308.0 MB) from disk ‘saptools_log_001’ in database ‘saptools’.
Processed 571 allocation unit(s) out of 640 units (allocation page 145920). 89% completed.
Processed 635 allocation unit(s) out of 640 units (allocation page 162304). 99% completed.
Now let’s check:
1> sp_helpdb saptools
2> go
name     db_size   owner dbid created      durability lobcomplvl inrowlen status
-------- --------- ----- ---- ------------ ---------- ---------- -------- ------
saptools 4320.0 MB sapsa 5    Jul 20, 2015 full       100        2000     trunc log on chkpt, ddl in tran, allow nulls by default, abort tran on log full, allow wide dol rows, allow incremental dumps, full logging for all

(1 row affected)

device_fragments  size      usage     created             free kbytes
----------------- --------- --------- ------------------- --------------
saptools_data_001 2048.0 MB data only Jul 20 2015  6:50PM 1626832
saptools_log_001   204.0 MB log only  Jul 20 2015  6:50PM not applicable
saptools_data_001 2048.0 MB data only Nov 10 2015 11:51AM 2088960
saptools_data_001   20.0 MB data only Nov 10 2015 11:53AM 20400

log only free kbytes = 207968, log only unavailable kbytes = 315392
(return status = 0)
Only one segment for the saptools_log_001 device remains. The "log only unavailable kbytes" figure has increased, because the space is still allocated to the device but is now there for us to expand into again if we wish. NOTE: saptools_log_001 is a "log only" device, so we cannot use that freed space for a data segment expansion; it can only be used for log expansion.
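Should you need the log space back later, you can expand into the freed space on the same device; a minimal sketch, reusing the 308MB from our example:

1> alter database saptools log on saptools_log_001 = 308
2> go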
Even if the administrator created the database with plenty of spare disk space, there comes a moment when the database needs to be expanded. Assuming the database currently has a size of 55 MB, an additional 5 MB of disk space can be assigned to it using the following command:
USE master
GO
ALTER DATABASE ALFA
ON AlfaDataDev='5M'
GO
The result could look like this:
Execution time: 0 seconds
Extending database by 1280 pages (5.0 megabytes) on disk AlfaDataDev
Warning: The database 'ALFA' is using an unsafe virtual device 'AlfaDataDev'. The recovery of this database can not be guaranteed.
Execution time: 0.031 seconds
The conditions that must be met for the database expansion to succeed are:
the current database must be master
a device with free space on it must be available
the size given in the ALTER DATABASE command determines how much space will be added to the database (not the total size after expansion)
So after executing that statement, the current size of the ALFA database is 60 MB.
Of course, from time to time the administrator discovers databases consuming too much disk space. If this is not a temporary condition, the space can also be released so that other databases can use it. We call this action "shrinking the database". The operation is slightly more complicated, as the database needs to be in single-user mode. Let's suppose the DBA wants to decrease the size by 3 MB:
The current database must be master at the beginning
The database must be switched to single-user mode, which in Sybase is done using the sp_dboption procedure
The main command to shrink the database is executed
The database can be brought back into multi-user mode
USE master
GO
EXEC sp_dboption ALFA, 'single user', 'true'
GO
ALTER DATABASE ALFA
OFF AlfaDataDev = '3M'
GO
EXEC sp_dboption ALFA, 'single user', 'false'
GO
Now the size of the database is 57 MB.
If an error appears during shrinking, the suggested remedy is to perform a dump tran and... try again. ALTER DATABASE also allows other database settings to be changed. The log can be expanded in the same way, using the LOG ON clause instead of ON, as sketched below.
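A minimal sketch, assuming a hypothetical log device named AlfaLogDev:

USE master
GO
ALTER DATABASE ALFA
LOG ON AlfaLogDev='5M' -- AlfaLogDev is an illustrative device name
GO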
In previous versions of the SAP ASE database, database audit records were stored only in the sybsecurity database, in a specific set of tables that you had to size, create, and rotate yourself (or with a stored procedure).
Once the records were in the database, it was then up to you to define how and when those records would be analysed. How complex the extraction process was depended on whether or not your SIEM tool supported direct ODBC/JDBC access to SAP ASE.
In SP04 a new parameter called "audit trail type" was introduced, with which you can now set the audit store to "syslog".
When setting the store to "syslog", the audit records are pushed out to the Linux syslogd daemon (or rsyslogd or syslog-ng) and written to the O/S defined location according to the configuration of syslogd.
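Switching the store is a minimal sp_configure call; this sketch follows ASE's standard pattern for character-valued parameters, so check the SP04 documentation for the exact expected values:

1> sp_configure 'audit trail type', 0, 'syslog' -- third argument is the character value; second is ignored for char parameters
2> go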
Each audit record gets a tag/program name of "SAP_ASE_AUDIT", which means you can define a custom syslogd log file to hold the records, and specify a custom rotation should you wish. Your syslogd logs may already be pulled into your SIEM tools, in which case you will simply need to classify and store those records for analysis.
With the new parameter set to "syslog" and the audit records being stored as file(s) on the file system, you will need to ensure that the file system has adequate space and establish a comfortable file retention (logrotate) configuration, so that audit records do not fill the file system (which would prevent further audit records from being persisted).
Of course, should you enjoy torture, you can always go ahead and continue to use the database to store the audit records. Simply setting the new parameter "audit trail type" to "table" will store the audit records in the database, just like previous versions of ASE.
In this multi-part post, I explain the basics behind how SAP Replication Server works when replicating from a SAP ASE database to another SAP ASE database, as part of an HADR (ASE always-on) setup for a SAP Business Suite system.
The post is based on SAP ASE (Adaptive Server Enterprise) 16.0 HADR with SAP Replication Server (SRS) 16.0.
In Part 4 we stepped through the replication of Bob’s data change and saw how the transactional data was replicated first to the SRS and eventually to the companion (secondary) database.
This is the last part of my ASE 16.0 HADR mini-series, and in this final part I will discuss possible issues that can impact an ASE 16.0 HADR system, which might be useful when planning operational acceptance testing.
Companion ASE Unavailable
When the companion (secondary) database is unavailable for whatever reason (maintenance, failure, or otherwise), replicated transactions will still continue to move through the SRS until they reach the outbound queue (OBQ). The assumption is that the active SRS (on the same server as the companion DB) is still up and working.
In the OBQ the transactions will wait until the companion is available again. As soon as the companion is available, the transactions will move from the OBQ into the companion. The primary database will be unaffected during this time and transactions will continue through the SRS until the OBQ becomes full.
If the OBQ fills up, then transactions will start to accumulate in the inbound queue (IBQ).
If the companion database is down for a long period of time, you may need to make a decision:
Increase the stable queue partition space to hold more transactions, in the hope that the companion can be brought back online.
Disable replication, removing the Secondary Truncation Point (STP) from the primary database and acknowledging that the companion will need re-materialisation to bring it back in sync with the primary.
Inbound Queue Full
When the inbound queue becomes full, the transactions will start to build up into the simple persistent queue (SPQ). You should note that the IBQ, by default, is only allowed to use up to 70% of the stable queue partition size. The rest is for the OBQ. So “full” is not actually 100% of the queue space.
There can be two common reasons for the IBQ to fill:
The OBQ is also full due to an issue with the connection to the companion ASE database, or the companion is unavailable. or
There is an open transaction in the IBQ and the SRS is waiting for the “commit” or “rollback” command to come through from the SPQ for the open transaction.
To resolve the full IBQ, you are going to need to establish which of the two issues is occurring. An easy way to do this is to check the OBQ fill level. If transactions are moving from the OBQ to the companion, then the issue is an open transaction.
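A quick way to check the queue fill levels is from an isql session connected to the SRS; the admin who, sqm command reports the stable queue manager's view of each queue (the column layout varies by SRS version):

1> admin who, sqm
2> go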
If an open transaction has caused the IBQ to fill, then the "commit" or "rollback" record could now be stuck in the SPQ. Since there is no space in the IBQ, the SRS is also unable to process the SPQ records, which leaves the IBQ open transaction in a stalemate situation. You will need to make a decision:
Add more space to the stable queues to increase the IBQ size. or
Increase the proportion of stable queue size that the IBQ can use (if OBQ is empty). or
Zap (remove) the open transaction from the IBQ (this will mean data loss on the companion, so a rematerialise may be needed).
Normally, you can just add more space by adding another partition to the stable queues, hopefully resolve the issue, then remove the extra space again. How much is needed? Nobody can say in advance. However, if you have to zap the open transaction, make sure you dump the queue contents out first so you can see what DML was open; you can then make a decision on how the missing transaction will affect the companion database's integrity (this could negate the need for rematerialisation).
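If you do have to go down this route, the commands below give a rough idea; everything here is illustrative (the partition name p2, the file path, the 2048 MB size, and the queue number/type and segment/block/count arguments to sysadmin dump_queue), so use admin who, sqm to find the real queue identifiers and consult the SRS documentation before zapping anything:

1> add partition p2 on '/sybase/SRS/partitions/p2.dat' with size 2048
2> go
1> sysadmin dump_queue, 103, 1, 1, 1, -1
2> go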
During this problematic period, the SPQ has remained functional, which means the primary database has been able to continue sending transactions to the active SRS, and has therefore been able to continue committing data in a timely manner. The primary database will have no issues.
Simple Persistent Queue Full
This is probably the most serious of the scenarios. Once the SPQ becomes full, it immediately impacts the Replication Agent running in the primary ASE database.
Transactions are unable to move from the primary database transaction log to the active SRS. You will start to get urgent warnings in the primary ASE database error log, informing you that the SPQ is full and that the buffers in the primary Replication Agent are full.
You will also see that the Replication Agent will be producing error messages and informing you that it has switched from synchronous to asynchronous replication mode.
The integrity of your production ASE database is now at risk!
It is possible you can add more space to the SPQ if you think you can resolve the problem in the IBQ and have the time to wait for the IBQ to empty!
You should note that this scenario shows the same symptoms as an active SRS that is not working at all. If the SPQ is not available, then you need to troubleshoot the SRS error log. It is also possible that you have issues with the Replication Agent in the primary ASE.
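From the primary ASE side, the Replication Agent's status can be inspected with sp_help_rep_agent; the database name here is illustrative, and the second parameter selects which report you get:

1> sp_help_rep_agent saptools, 'scan' -- 'scan' shows the log scan status; database name illustrative
2> go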
Primary Transaction Log Full
If your active SRS has now filled up all the way back to the SPQ, your primary ASE database is at serious risk. With the Replication Agent unable to move the Secondary Truncation Point (STP), the transaction log of the primary ASE will start to fill. You will not be able to release the log segments, even with a transaction log dump, because they are still needed by the Replication Agent.
You have the following options available:
Add more space to the transaction log by adding a new device (recommended, instead of expanding the existing device). This will give you more time to maybe fix the SRS. or
Disable replication, which removes the STP and allows the transaction log to be dumped. A rematerialise of the companion will be needed at a later date. or
If the transaction log is already full, then even trying to disable replication will not work (no transactions will be permitted). You will have to use DBCC to remove (ignore) the "LTM" (Last Truncation Marker), which will have a similar outcome to disabling replication (see the sketch below).
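The LTM removal looks roughly like this (run it in the affected database; the database name is illustrative). Once the SRS is fixed, dbcc settrunc(ltm, valid) re-establishes the truncation point, but remember the companion will then need rematerialising:

1> use saptools
2> go
1> dbcc settrunc(ltm, ignore) -- removes the secondary truncation point so the log can be dumped
2> go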
At this point, if all else fails and your transaction log is full DO NOT RESTART THE PRIMARY ASE! If you restart the primary ASE with a full transaction log, then you will be in a world of pain trying to get it to start up again. You need to stop all processing (it won’t be working anyway), then try and resolve the issue.
In this multi-part post, I explain the basics behind how SAP Replication Server works when replicating from a SAP ASE database to another SAP ASE database, as part of an HADR (ASE always-on) setup for a SAP Business Suite system.
The post is based on SAP ASE (Adaptive Server Enterprise) 16.0 HADR with SAP Replication Server (SRS) 16.0.
In this penultimate part we step through the process of replication for an individual transaction, looking at how each of the previously discussed components plays its part along the way.
Step 1 – Begin Transaction
In the first step, our SAP Business Suite application is being used by Bob, our end-user. Bob has a screen open and saves a change to an item of data. The Business Suite application calls the Netweaver application stack to persist the change to the data according to the application’s dictionary model. The dictionary dictates the relationship of the business object (e.g. an invoice) to the relational database tables. The Netweaver code uses the SAP Kernel supplied libraries to save the change to the necessary system database tables.
There could be many tables involved in Bob’s one change. It would not be right to update just one table without also updating the others that are related. Otherwise we would not have consistency. For this reason, a single database transaction can include many data manipulation language (DML) statements. By grouping the DML statements for the related table updates into a single database transaction, the SAP Kernel can enforce consistency.
In our example, our transaction will include updates to 2 different tables: tableA and tableB. To designate the start of the database transaction, the SAP Kernel calls the database library to “BEGIN TRANSACTION”.
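Conceptually, the grouped change that the following steps walk through looks like this in SQL (the table and column names match the example DML shown in step 3a):

BEGIN TRANSACTION
-- both updates succeed or fail together (atomicity)
UPDATE tableA SET column1 = "ABC"
UPDATE tableB SET column1 = "123"
COMMIT TRANSACTION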
The effect of the database call to "BEGIN TRANSACTION" is that a new transaction log record is opened.
Step 2 – Replication Agent – Open Transaction
Once a transaction has started (opened), the ASE HADR Replication Agent will see it. Remember, we discussed the Replication Agent in Part 2.
The Replication Agent sends the transaction data across the network to the target SAP Replication Server (SRS). It knows where to send the data because of the configuration applied to the Replication Agent during the HADR setup process.
The SRS receives the data from the Replication Agent and writes it to the Simple Persistent Queue (SPQ), then sends an acknowledgement back to the Replication Agent.
Step 3a – Update Tables – DML
So far, all that has happened is a new transaction has been opened. Now Netweaver will apply the required DML to the transaction:
UPDATE tableA SET column1 = "ABC"
UPDATE tableB SET column1 = "123"
The Kernel will apply the required DML within the opened transaction. This will also update the transaction log, which will be seen by the Replication Agent and sent across to the SRS as before.
You will notice that at this point, we are still using transaction log space, but we are also consuming space in the SPQ.
At this step, if one of the required "UPDATE" statements were to fail, then the whole transaction could be cancelled (a rollback) and no changes would be permanently made to any of the tables. This is the atomicity requirement of the ACID principles.
Step 3b – The SRS Inbound Queue
At the same time as the DML is being applied to the open transaction in step 3a, the SRS continues to process the open transaction.
Inside the SRS, there are various component modules that process the incoming transactions from the SPQ, placing them in the correct order and compacting them (grouping) into larger, more efficient transactions. Once this initial processing has completed, the new transaction is placed into the inbound queue (IBQ).
Something you will notice is that we have now consumed space in:
Primary database transaction log.
Simple Persistent Queue.
Inbound Queue.
Once the transaction is safely persisted into the IBQ, the record in the SPQ is now free for re-use.
Step 4 – End Transaction
In steps 3a and 3b, we see the opening transaction record and the DML move across to the SRS. At this point in time, Bob’s changes are still not replicated to the companion (secondary/standby) database. In fact, Bob’s changes are not even visible to other users of the primary database, because Bob’s changes are not yet committed.
Once all the DML in Bob's transaction has been applied at the primary database successfully (still not committed), the SAP Kernel can issue the "END TRANSACTION". This signifies that this group of changes is finished. After the "END TRANSACTION", the SAP Kernel can issue one of two things: a "COMMIT" or a "ROLLBACK". In our case, the Kernel issues a "COMMIT".
The "COMMIT" on the primary database is now performed; at the same time, the "COMMIT" record is also sent by the Replication Agent to the SPQ of the SRS.
I can hear the DB experts gasp at this point! Yes, in Part 3 I mentioned that the commit is not allowed on the primary until after the Replication Agent has successfully sent the commit to the SPQ. In actual fact, this is not a hard and fast rule. The Replication Agent will attempt to send the commit record to the SPQ; it will wait for a given amount of time, before switching to asynchronous replication mode (see Part 3 for a description of this mode). The commit to the primary database is therefore allowed to happen, even if it has not yet been acknowledged by the SPQ. This is the trade-off between performance and protection. The HADR solution has flexibility that allows a configurable amount of replication delay before synchronous replication is switched to asynchronous.
The acknowledgement that the “COMMIT” record has been successfully stored in the SRS SPQ, allows the primary database Replication Agent to move the Secondary Truncation Point (STP) forward and release the transaction log record, which allows it to be freed when the next “DUMP TRANSACTION” (transaction log backup) is performed.
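You can inspect the current truncation point state on the primary with dbcc gettrunc, which reports the LTM truncation page and state for the database you are in (the database name here is illustrative):

1> use saptools
2> go
1> dbcc gettrunc
2> go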
The primary database data change becomes visible to all users of the database. Bob’s screen returns a message telling him that his data is saved.
Step 5 – The SRS Outbound Queue
Inside the SRS IBQ, our transaction is now complete: we have a "BEGIN" and an "END", a "COMMIT", and in between some DML containing the updates to the required tables.
Once the “COMMIT” record is seen by the Inbound Queue (IBQ) of the SRS, then the SRS will process the re-packaged transaction from the IBQ to the Outbound Queue (OBQ).
This processing could involve adjusting the SQL language used, if the target database is not ASE.
From the OBQ, the Data Server Interface (DSI) component of the SRS applies the transaction to the target database.
Finally, the replicated transaction data is applied to the target secondary database and harmony is achieved.
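If you want to observe this final stage, the DSI threads and their current status can be listed on the SRS with admin who, dsi:

1> admin who, dsi
2> go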
Bob’s View
Throughout this whole replication cycle, Bob had no idea that his data was being replicated to the target secondary database hundreds of kilometres away. The response time of the database to the SAP Kernel was only slightly impacted by the addition of the Replication Agent processing time, plus the network transfer time to the active SRS, plus the processing and persistence time to the SPQ of the SRS.
As well as the response time, we noticed how the storage requirements of the SRS are bigger than the actual transaction log of the primary database due to the way the transaction is processed/transformed and re-processed through the SRS, then queued for application to the target database.
In Part 5, I will discuss some common issues that can occur with HADR, allowing you to comprehensively plan your operational acceptance testing.