A few days ago, a customer opened a case with me because he could not understand the origin of the locks that were occurring on certain objects. This lock condition was causing problems for the application procedures, which found the files busy and were unable to operate on them.
There are various solutions to this issue, but the one that seemed most convenient and functional to me was to use our friend SQL.
In the tool we will implement today, we will use the QSYS2.OBJECT_LOCK_INFO system view. This view returns a lot of very useful information, such as the object being locked, the job that placed the lock, and the type of lock. It can also return very detailed information, such as the procedure/module and the statement.
Let’s look at an example of a query on this view without any filters: select * from QSYS2.OBJECT_LOCK_INFO
As you can see, it gives us a lot of information. Let’s go back for a moment to the customer’s primary need, which is to understand who placed locks on a specific object and when, so we’ll start modifying the query. In this case we need less information, so I’ll select only a few fields: SELECT SYSTEM_OBJECT_SCHEMA, SYSTEM_OBJECT_NAME, SYSTEM_TABLE_MEMBER, OBJECT_TYPE, ASPGRP, MEMBER_LOCK_TYPE, LOCK_STATE, LOCK_STATUS, LOCK_SCOPE, JOB_NAME, PROGRAM_LIBRARY_NAME, PROGRAM_NAME, MODULE_LIBRARY_NAME, MODULE_NAME, PROCEDURE_NAME, STATEMENT_ID, CURRENT TIMESTAMP AS TIMESTAMP FROM QSYS2.OBJECT_LOCK_INFO WHERE SYSTEM_OBJECT_SCHEMA='XXXXXX' AND SYSTEM_OBJECT_NAME='XXXXX'
In my example, I used the sign-on display file of my system.
Okay, now that I’ve written the basic query, I’m ready to start the procedure.
Input parameters
Library name
Object name
Object type
Analysis duration
Analysis output file library
Analysis file name
Procedure body: first, I create the template file (a CREATE TABLE with no data) that I will use later to record all the entries I see. After that, I loop until the analysis time passed as a parameter has elapsed; during this loop, I insert the data extracted by the SELECT directly into the temporary file created earlier. I chose to sample every 30 seconds, but feel free to change this interval. Once the time is up, I copy all the output to the file that was passed as a parameter to the procedure.
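Here is a minimal sketch of what such a procedure might look like; the procedure name, the parameter names, and the assumption that the duration is expressed in minutes are mine, not necessarily the original implementation:

CREATE OR REPLACE PROCEDURE MYLIB.LOCKANALYZE (
IN P_LIB VARCHAR(10) ,
IN P_OBJ VARCHAR(10) ,
IN P_OBJTYPE VARCHAR(8) ,
IN P_MINUTES INTEGER ,
IN P_OUTLIB VARCHAR(10) ,
IN P_OUTFILE VARCHAR(10) )
LANGUAGE SQL
MODIFIES SQL DATA
BEGIN
DECLARE V_END TIMESTAMP ;
SET V_END = CURRENT TIMESTAMP + P_MINUTES MINUTES ;
-- Template file: same layout as the analysis query, created empty
DROP TABLE IF EXISTS QTEMP.LOCKTMP ;
CREATE TABLE QTEMP.LOCKTMP AS (
SELECT SYSTEM_OBJECT_SCHEMA, SYSTEM_OBJECT_NAME, SYSTEM_TABLE_MEMBER, OBJECT_TYPE, ASPGRP,
MEMBER_LOCK_TYPE, LOCK_STATE, LOCK_STATUS, LOCK_SCOPE, JOB_NAME,
PROGRAM_LIBRARY_NAME, PROGRAM_NAME, MODULE_LIBRARY_NAME, MODULE_NAME,
PROCEDURE_NAME, STATEMENT_ID, CURRENT TIMESTAMP AS CAPTURED_AT
FROM QSYS2.OBJECT_LOCK_INFO ) WITH NO DATA ;
-- Sample the lock information every 30 seconds until the requested time elapses
WHILE CURRENT TIMESTAMP < V_END DO
INSERT INTO QTEMP.LOCKTMP
SELECT SYSTEM_OBJECT_SCHEMA, SYSTEM_OBJECT_NAME, SYSTEM_TABLE_MEMBER, OBJECT_TYPE, ASPGRP,
MEMBER_LOCK_TYPE, LOCK_STATE, LOCK_STATUS, LOCK_SCOPE, JOB_NAME,
PROGRAM_LIBRARY_NAME, PROGRAM_NAME, MODULE_LIBRARY_NAME, MODULE_NAME,
PROCEDURE_NAME, STATEMENT_ID, CURRENT TIMESTAMP
FROM QSYS2.OBJECT_LOCK_INFO
WHERE SYSTEM_OBJECT_SCHEMA = P_LIB AND SYSTEM_OBJECT_NAME = P_OBJ
AND OBJECT_TYPE = P_OBJTYPE ;
CALL QSYS2.QCMDEXC ( 'DLYJOB DLY(30)' ) ;
END WHILE ;
-- Copy the collected snapshots to the output file passed as a parameter
EXECUTE IMMEDIATE 'CREATE TABLE ' CONCAT TRIM ( P_OUTLIB ) CONCAT '.' CONCAT TRIM ( P_OUTFILE )
CONCAT ' AS (SELECT * FROM QTEMP.LOCKTMP) WITH DATA' ;
END ;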
Calling the procedure: you can call it from RUN SQL SCRIPTS/STRSQL/VS CODE/RUNSQL. Here is an example; keep in mind that if you want to run this analysis for a long time, it is better to submit it in batch.
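For instance, using the sketch above (all names are placeholders), a 10-minute analysis could be started like this:

CALL MYLIB.LOCKANALYZE('MYLIB', 'MYFILE', '*FILE', 10, 'MYLIB', 'LOCKOUT');

And to submit it in batch from a 5250 session:

SBMJOB CMD(RUNSQL SQL('call mylib.lockanalyze(''MYLIB'', ''MYFILE'', ''*FILE'', 10, ''MYLIB'', ''LOCKOUT'')') COMMIT(*NONE)) JOB(LOCKMON)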
You need to be careful! The query I wrote works on every release >= 7.5 because of the WHERE clause. As you can see, it is quite complicated, and this kind of comparison in the WHERE clause is not supported on older releases. So, as I have said many times, PLEASE STAY CURRENT.
A few articles ago, we talked about the native integration of the syslog format within IBM i. We also looked at two SQL services (the history log and the display journal) that were able to generate data in this format in a simple way.
Today, in this article, I propose a possible implementation of a SIEM, which, starting from the logs on the IBM i system, feeds a Grafana dashboard. For those unfamiliar with Grafana, this tool is one of the leaders in interactive data visualization and analysis. It has native connectors with a wide range of different data sources, and can natively perform alert functions on the metrics being monitored.
The downside is that there is no DB2 data source (at least in the free version). In our scenario, we chose to rely on a PostgreSQL instance running on AIX, which allowed us to build the dashboard with extreme simplicity.
Our infrastructure consists of one (or more) IBM i partitions containing all the source data (journal receivers, the history log, etc.) and a small Python script that queries the systems using the two SQL services mentioned above. The collected data is sent to a PostgreSQL database, which is in turn queried by the dashboards built on Grafana for analysis purposes.
Below you will find the Python code:
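What follows is a simplified sketch of the script; the configuration file format, the PostgreSQL table layout, and the connection details are illustrative assumptions rather than the original code:

#!/usr/bin/env python3
# Simplified sketch of the collector: reads syslog-formatted entries from one
# or more IBM i hosts via ODBC and appends them to a PostgreSQL table.
import sys
import json
import pyodbc
import psycopg2

# collector.json holds the PostgreSQL credentials and the host inventory (assumed format)
CFG = json.load(open('collector.json'))

def fetch(host, mode, filt):
    """Run the appropriate SQL service on one IBM i host and return the syslog rows."""
    conn = pyodbc.connect(f"DRIVER={{IBM i Access ODBC Driver}};SYSTEM={host};"
                          f"UID={CFG['user']};PWD={CFG['password']}")
    cur = conn.cursor()
    if mode == 'LOG':  # getQhst: read the history log
        sql = ("SELECT syslog_facility, syslog_severity, syslog_event "
               "FROM TABLE(QSYS2.HISTORY_LOG_INFO(GENERATE_SYSLOG => 'RFC3164')) X "
               "WHERE syslog_event IS NOT NULL")
        if filt != '*ALL':
            ids = ','.join("'%s'" % m for m in filt.split(','))
            sql += f" AND message_id IN ({ids})"
    else:  # getJrn: read the system audit journal
        sql = ("SELECT syslog_facility, syslog_severity, syslog_event "
               "FROM TABLE(QSYS2.DISPLAY_JOURNAL('QSYS', 'QAUDJRN', "
               "GENERATE_SYSLOG => 'RFC5424')) X WHERE syslog_event IS NOT NULL")
        if filt != '*ALL':
            sql += f" AND JOURNAL_ENTRY_TYPE = '{filt}'"
    # The real script also remembers the last entry read per host and restricts
    # each extraction to the new entries; that bookkeeping is omitted here.
    rows = cur.execute(sql).fetchall()
    conn.close()
    return rows

def store(host, rows):
    """Append the collected entries to the PostgreSQL staging table."""
    pg = psycopg2.connect(**CFG['postgres'])
    cur = pg.cursor()
    cur.executemany("INSERT INTO ibmi_syslog (host, facility, severity, event) "
                    "VALUES (%s, %s, %s, %s)",
                    [(host, r[0], r[1], r[2]) for r in rows])
    pg.commit()
    pg.close()

if __name__ == '__main__':
    mode, filt = sys.argv[1], sys.argv[2]  # LOG|JRN and msgid list / entry type (*ALL supported)
    for host in CFG['hosts']:
        store(host, fetch(host, mode, filt))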
script information: the script can run directly on IBM i, but in an environment with several machines it can be run from a centralized host. The only installation required is the ODBC driver, for which there is plenty of supporting documentation. In our case, a configuration file contains both the database connection information and the host inventory. The script is invoked by passing LOG as a parameter (in which case the DSPLOG data will be analyzed) or JRN (in which case the QAUDJRN entries will be looked at). Another parameter is the list of msgids of interest (valid in the case of LOG) or the type of entry (in the case of JRN); the value *ALL is also supported
getQhst: this function extracts entries with the msgid specified as a parameter. The extraction keeps track of the last entry in the DSPLOG and reads all entries from the previous extraction up to the most recent entry detected now
getJrn: this function extracts entries from the system audit journal. Again, the tool keeps track of the last entry read
As you can see, the extracted data is then dumped directly into a PostgreSQL table.
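For reference, a minimal PostgreSQL staging table for these rows might look like this (the layout is an assumption, matching the sketch above):

CREATE TABLE ibmi_syslog (
  host      text,
  facility  text,
  severity  text,
  event     text,
  collected timestamptz DEFAULT now()
);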
Below is the dashboard I built on Grafana:
As you can see, the information is fairly basic: a graph showing the types of entries and their counts, a graph showing the occurrence of events on different days, and finally the extraction of events in syslog format. The same dashboard is also available for the history log.
This week, I received a request from a customer to create a batch procedure that would restore files from the production environment to the test environment without transferring members due to disk space limitations. It seemed like a simple task, as the operating system allows this without any problems using the native RSTOBJ command thanks to the FILEMBR((*ALL *NONE)) parameter:
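For example, a native restore without members might look like this (library and device names are placeholders):

RSTOBJ OBJ(*ALL) SAVLIB(PRODLIB) DEV(TAP01) VOL(*MOUNTED) RSTLIB(TESTLIB) FILEMBR((*ALL *NONE))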
As mentioned above, it seemed simple, but the BRMS commands do not support this functionality. Let’s look at the RSTOBJBRM command, for example… If I try to pass it the FILEMBR parameter, I get an error, precisely because the parameter does not exist on this command:
When talking to IBM support, I was told that the only solution at the moment to achieve my goal of restoring without members was to concatenate the native command with the information contained within the BRMS DB. This gave me the idea of creating a simple SQL procedure that would allow me to achieve my goal. Clearly, it was also feasible in other languages; I could have achieved the same result with an RPG programme. The choice of SQL was dictated by the need to find a quick alternative that did not require a great deal of development effort.
Let’s start with what we need… Let’s assume that the list of objects to be restored and the library in which they are located are passed as parameters, and that the library into which the restore is to be performed is also passed as a parameter. What we need to determine, then, are the tape on which the objects are saved, the sequence number (although we could use the *SEARCH parameter), and the device to be used for the restore.
Now, if you are using product 5770BR2 (with the most recent PTF), extracting this information is quite simple: there is a view in QUSRBRM called BACKUP_HISTORY_OBJECT that returns information about the various saved objects. Alternatively, if you are using 5770BR1, you will need to query the BRMS files directly (QA1AOD, the object detail file, in the example below).
For example, suppose we want to find the media information for objects in QGPL (I will use QAPZCOVER as an example)…
With 5770BR2, the SQL statement is: select VOLUME_SERIAL,DEVICE_NAMES, FILE_SEQUENCE_NUMBER from qusrbrm.backup_history_object WHERE SAVED_ITEM = 'QGPL' AND saved_object='QAPZCOVER' ORDER BY SAVE_TIMESTAMP DESC
If you are using 5770BR1: SELECT OBVOL AS VOLUME_SERIAL, OBHDEV AS DEVICE_NAMES, OBHSEQ AS FILE_SEQUENCE_NUMBER FROM QUSRBRM.QA1AOD where OBHLIB='QGPL' AND OBHNAM='QAPZCOVER' ORDER BY OBHDAT DESC
As you can see, the result is the same regardless of which of the two queries you use.
Now, in my case, I needed the most recent save, so I applied a LIMIT 1 in my function, sorting in descending order by save date (so I inevitably get the most recent save). If you also want to parameterise the date, you simply need to add a parameter to the procedure and add a condition to the WHERE clause.
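For example, with a hypothetical P_DATE parameter, the 5770BR1 query above would simply gain AND OBHDAT <= P_DATE in its WHERE clause before the ORDER BY (assuming, as the ORDER BY already implies, that OBHDAT sorts chronologically).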
Now we are ready to create our procedure: in the first stage we build the RSTOBJ command by retrieving data from QUSRBRM, then we use SYSTOOLS.LPRINTF to write the executed command to the joblog, and finally we execute the command using the QSYS2.QCMDEXC procedure. In my case, the RSTLIB parameter is optional; by default it is *SAVLIB:
SET PATH "QSYS","QSYS2","SYSPROC","SYSIBMADM" ;
CREATE OR REPLACE PROCEDURE SQLTOOLS.RSTNOMBR2 (
IN OBJLIST VARCHAR(1000) ,
IN LIB VARCHAR(10) ,
IN RSTLIB VARCHAR(10) DEFAULT '*SAVLIB' )
LANGUAGE SQL
SPECIFIC SQLTOOLS.RSTNOMBR2
NOT DETERMINISTIC
MODIFIES SQL DATA
CALLED ON NULL INPUT
SET OPTION ALWBLK = *ALLREAD ,
ALWCPYDTA = *OPTIMIZE ,
COMMIT = *NONE ,
DECRESULT = (31, 31, 00) ,
DYNDFTCOL = *NO ,
DYNUSRPRF = *USER ,
SRTSEQ = *HEX
BEGIN
DECLARE CMD VARCHAR ( 10000 ) ;
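-- Build the native RSTOBJ command from the most recent BRMS object detail record for the library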
SELECT
'RSTOBJ OBJ(' CONCAT TRIM ( OBJLIST ) CONCAT ') SAVLIB(' CONCAT TRIM ( LIB ) CONCAT ') DEV(' CONCAT TRIM ( OBHDEV ) CONCAT ') SEQNBR('
CONCAT OBHSEQ CONCAT ') VOL(' CONCAT TRIM ( OBVOL ) CONCAT ') ENDOPT(*UNLOAD) OBJTYPE(*ALL) OPTION(*ALL) MBROPT(*ALL) ALWOBJDIF(*COMPATIBLE) RSTLIB('
CONCAT TRIM ( RSTLIB ) CONCAT ') DFRID(Q1ARSTID) FILEMBR((*ALL *NONE))'
INTO CMD
FROM QUSRBRM.QA1AOD WHERE OBHLIB = TRIM ( LIB ) ORDER BY OBHDAT DESC LIMIT 1 ;
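-- Log the generated command to the joblog, then execute it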
CALL SYSTOOLS . LPRINTF ( TRIM ( CMD ) ) ;
CALL QSYS2 . QCMDEXC ( TRIM ( CMD ) ) ;
END ;
OK, now that we have created this procedure, we are ready to test it… From a 5250 screen you can use this command: RUNSQL SQL('call sqltools.rstnombr2(''QAPZCOVER'', ''QGPL'', ''MYLIB'')') COMMIT(*NONE)
This is the result:
If you put this command in a CL program, you can perform this activity in a batch job. In this way you can also restore from other systems in the same BRMS network, if they share media information; in that case, you should query QA1AHS instead, because the object detail is not shared.
Syslog, short for System Logging Protocol, is one of the cornerstones of modern IT infrastructures. Born in the early days of Unix systems, it has evolved into a standardized mechanism that enables devices and applications to send event and diagnostic messages to a central logging server. Its simplicity, flexibility, and widespread support make it indispensable across networks of any scale.
At its core, Syslog functions as a communication bridge between systems and administrators. It allows servers (including IBM i partitions), routers, switches, and even software applications to report what’s happening inside them: routine processes, configuration changes, warning alerts, or system failures. These messages can be transmitted in real time to centralized collectors, allowing professionals to stay informed about what’s occurring in their environments without needing to inspect each machine individually.
This centralized approach is critical in environments that demand security and reliability. From banks to hospitals to government networks, organizations rely on Syslog not just for operational awareness but also for auditing and compliance. Log files generated by Syslog can help trace user activities and identify suspicious behavior or cyberattacks. That makes it an essential component in both reactive troubleshooting and proactive monitoring strategies.
So, on IBM i there are at least three places where you can generate syslog output.
The first place where you can extract syslog format is the system log. The QSYS2.HISTORY_LOG_INFO function allows you to extract output in this format. In my example, I want to highlight five restore operations performed today: SELECT syslog_facility, syslog_severity, syslog_event FROM TABLE (QSYS2.HISTORY_LOG_INFO(START_TIME => CURRENT DATE, GENERATE_SYSLOG =>'RFC3164' ) ) AS X where message_id='CPC3703' fetch first 5 rows only;
By changing the condition in the WHERE clause, it is possible to work with other msgids that may be more significant; for example, you can log the specific msgid for abnormal job terminations (since auditors enjoy asking for extractions of failed batch jobs).
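A sketch of that idea, using CPF1164 (the job-end message, whose text carries the job’s end code): SELECT syslog_facility, syslog_severity, syslog_event FROM TABLE (QSYS2.HISTORY_LOG_INFO(START_TIME => CURRENT DATE, GENERATE_SYSLOG =>'RFC3164') ) AS X WHERE message_id='CPF1164';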
The second tool that could be very useful is the analysis of journals with syslog, in fact the QSYS2.DISPLAY_JOURNAL function also allows you to generate output in syslog format. In my example, I extracted all audit journal entries (QSYS/QAUDJRN) that indicated the deletion operation of an object on the system (DO entry type): SELECT syslog_facility, syslog_severity, syslog_event FROM TABLE (QSYS2.DISPLAY_JOURNAL('QSYS', 'QAUDJRN',GENERATE_SYSLOG =>'RFC5424') ) AS X WHERE syslog_event IS NOT NULL and JOURNAL_ENTRY_TYPE='DO';
Of course, it is possible to extract entries for any type of journal, including application journals.
The last place that comes to mind is the system’s syslog service log file. In a previous article, we saw how this service could be used to log SSH activity. In my case the log file is located under /var/log/messages, so with the QSYS2.IFS_READ function I can read it easily: SELECT * FROM TABLE(QSYS2.IFS_READ(PATH_NAME => '/var/log/messages'));
These are just starting points… As mentioned previously, these entries are very important for monitoring events that occur on systems. Having them logged and stored in a single repository together with the logs of other platforms can make a difference in managing attacks or system incidents in general.
Do you use these features to monitor and manage events on your systems?
More and more frequently, customers are reporting to us that after upgrading to Windows 11 24H2 they have problems connecting to network shares via NetServer.
A quick aside: the fact that the IFS of an IBM i system is accessible is, in itself, a great thing, but you have to be extremely careful about what you share and with what permissions you do it. I frequently see systems where root ( / ) is shared read/write; this is very dangerous because, in addition to the IFS, it exposes the other file systems on our machines, such as QSYS.LIB and QDLS. So try, as far as possible, to share as little as possible with the lowest permissions possible. End of aside.
Returning to the initial issue, it does indeed seem that Microsoft’s update (released a few months ago) introduced issues with the support of certain characters in IFS file names. If a folder contains a file whose name includes one of the offending special characters, Windows loses access to that folder. The characters that cause these problems are the following: < (less than)
> (greater than)
: (colon)
" (double quote)
/ (forward slash)
\ (backslash)
| (vertical bar or pipe)
? (question mark)
* (asterisk)
Here, as indicated in this IBM documentation link, renaming the files to remove the offending characters will restore access to the shared folders. Now, in a production context it is easy to imagine several shared folders, and the IFS is an infinitely large place with infinite files (most of them abandoned :-D), so we need a clever way to check which shared folders might have problems. To do this we will rely on two SQL services: the first lists the folders we are sharing, the second lists the paths whose names contain special characters.
Thanks to the QSYS2.SERVER_SHARE_INFO view, we will have the ability to list the paths that have been shared via netserver with the following query:
select PATH_NAME from QSYS2.SERVER_SHARE_INFO where PATH_NAME is not null
Now that we have the list of all shared directories, we just need to analyze their contents. To do this we will use QSYS2.IFS_OBJECT_STATISTICS, which takes as parameters the name of the starting path, any paths to be excluded, and an indication of whether to scan subdirectories; in our case we clearly want it to scan those as well. We are not interested in every file, only those whose names contain the special characters not supported by Windows, so we will apply a WHERE clause. Here is an example of the query on a small path (be aware that this query could run for a long time):
SELECT PATH_NAME,CREATE_TIMESTAMP,ACCESS_TIMESTAMP,OBJECT_OWNER
FROM TABLE (
QSYS2.IFS_OBJECT_STATISTICS(
START_PATH_NAME => '/qibm/ProdData/Access/ACS/Base',
OMIT_LIST => '/QSYS.LIB /QNTC /QFILESVR.400',
SUBTREE_DIRECTORIES => 'YES')
)
WHERE PATH_NAME LIKE '%\%'
OR PATH_NAME LIKE '%<%'
OR PATH_NAME LIKE '%>%'
OR PATH_NAME LIKE '%|%'
OR PATH_NAME LIKE '%*%'
OR PATH_NAME LIKE '%:%'
OR PATH_NAME LIKE '%?%'
OR PATH_NAME LIKE '%"%'
In my example I took a fairly small path (the one with the ACS installer), and it took only a short time. Moreover, no file name contains any of the offending characters, so I can rest easy: the query returned no rows.
At this point, there is nothing left to do but combine the two queries into a very simple RPG program. Considering that the second query, the scan, can take a long time, it is a good idea to submit its execution in batch, saving the results in another table.
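The heart of such a program is essentially one combined statement, sketched below; the MYSTUFF/SHARECHAR layout is an assumption based on the description that follows:

INSERT INTO MYSTUFF.SHARECHAR (PATH_NAME, OBJECT_OWNER, CREATE_TIMESTAMP, ACCESS_TIMESTAMP)
SELECT I.PATH_NAME, I.OBJECT_OWNER, I.CREATE_TIMESTAMP, I.ACCESS_TIMESTAMP
FROM (SELECT PATH_NAME FROM QSYS2.SERVER_SHARE_INFO
      WHERE PATH_NAME IS NOT NULL) S,
     TABLE (QSYS2.IFS_OBJECT_STATISTICS(
              START_PATH_NAME => S.PATH_NAME,
              OMIT_LIST => '/QSYS.LIB /QNTC /QFILESVR.400',
              SUBTREE_DIRECTORIES => 'YES')) I
WHERE I.PATH_NAME LIKE '%<%' OR I.PATH_NAME LIKE '%>%'
   OR I.PATH_NAME LIKE '%|%' OR I.PATH_NAME LIKE '%*%'
   OR I.PATH_NAME LIKE '%:%' OR I.PATH_NAME LIKE '%?%'
   OR I.PATH_NAME LIKE '%"%' OR I.PATH_NAME LIKE '%\%'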
As you can see, my program is pretty short, combining just two simple queries, and in this way you can find every file that would break the shares. At the end of the execution, check the MYSTUFF/SHARECHAR file: there you will find details about each file, such as the path name, owner, and creation and last-access timestamps.
Remember, this is SQL, so you can change whatever you want, such as the columns, the destination file, and so on.
I hope this gives you a way to save time on what can be a rather insidious and annoying problem.
The past few days have been full of announcements for the IBM i world: a new major release of the operating system was announced (strange, for an operating system some people call dead :-D), namely IBM i V7R6M0. As enthusiastic as I am about it, I will wait to write a dedicated article until I have had the opportunity to test all the new features first-hand. In addition to version 7.6, the new Technology Refresh 6 for version 7.5 was announced last week; again, I am eager to see what new features they have come up with. A first alarm bell, on the other hand, should start ringing for all those still on release 7.4: it was announced that no new features will be released for it, which somewhat foreshadows an end-of-support announcement. You need not worry, though: from the date of that announcement you will have one year to upgrade to 7.5 or 7.6.
In today’s article, however, we are talking about ACS, the tool we all use on a daily basis to connect to our systems in green-screen mode. Version 1.1.9.8 has been released, and it contains new and interesting features:
MFA support: this feature affects 7.6 systems; it allows you to use the client to manage MFA for the accounts on which it has been configured
binary object preview: thanks to this new feature, it is possible directly from RUN SQL SCRIPTS to preview binary objects saved in our tables, such as BLOB, BINARY, and VARBINARY columns, without using external products. Just run a query that includes the field of interest, right-click on the field, and select VIEW COLUMN DATA
formatting numeric values: how convenient is it to have digit separators available, especially when working with very large numbers? With the new version of ACS, you can turn on separators for numeric columns to make them easier to read
new IFS duplicate feature: the ability to duplicate IFS files has been added; to use it, right-click on the object to duplicate and choose DUPLICATE. This is one of the features of the ACS IFS manager that I will use the most. Another feature I really like is the transfer of objects from one partition to another; it is very convenient, especially when you have files to send to several partitions
Updating the client is very simple: either you download it from the IBM site, or you install a PTF (it depends on the release; you can find the documentation at the bottom of the article) and set ACS to look for the new version directly on that system. The latter is very convenient when you have several client installations on users’ PCs.