This week, I received a request from a customer to create a batch procedure that would restore files from the production environment to the test environment without transferring members, due to disk space limitations. It seemed like a simple task, since the operating system handles this without any problem via the native RSTOBJ command, thanks to the FILEMBR((*ALL *NONE)) parameter:
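For reference, this is a hedged example of what the native command could look like (the object, library, and device names here are placeholders, not the customer's actual values):

RSTOBJ OBJ(MYFILE) SAVLIB(PRODLIB) DEV(TAP01) OBJTYPE(*FILE) MBROPT(*ALL) FILEMBR((*ALL *NONE)) RSTLIB(TESTLIB)

The FILEMBR((*ALL *NONE)) pair tells the restore to bring back every file object but none of its members.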
As mentioned above, it seemed simple, but the BRMS commands do not support this functionality. Take the RSTOBJBRM command, for example… If I try to pass it the FILEMBR parameter, I get this error, precisely because the parameter does not exist on that command:
When talking to IBM support, I was told that the only solution at the moment to achieve my goal of restoring without members was to combine the native command with the information contained in the BRMS database. This gave me the idea of creating a simple SQL procedure to achieve my goal. Clearly, it was also feasible in other languages; I could have achieved the same result with an RPG program. The choice of SQL was dictated by the need for a quick alternative that did not require a great deal of development effort.
Let’s start with what we need… Let’s assume that the list of objects to be restored and the library in which they are located are passed as parameters, and that the library in which the restoration is to be performed is also passed as a parameter to the procedure. What we need to determine are the tape volume on which the objects are saved, the file sequence number (although we could use the *SEARCH value) and the device to be used for the restore.
Now, if you are using product 5770BR2 (with the most recent PTFs), extracting this information is quite simple: there is a view in QUSRBRM called backup_history_object that returns information about the various saved objects. If you are using 5770BR1 instead, you will need to query the BRMS database files directly (the object detail file QA1AOD in the examples below).
For example, say we want to find media information for objects in QGPL (I will use QAPZCOVER as an example)…
With 5770BR2 the SQL statement is: SELECT VOLUME_SERIAL, DEVICE_NAMES, FILE_SEQUENCE_NUMBER FROM qusrbrm.backup_history_object WHERE SAVED_ITEM = 'QGPL' AND SAVED_OBJECT = 'QAPZCOVER' ORDER BY SAVE_TIMESTAMP DESC
If you are using 5770BR1: SELECT OBVOL AS VOLUME_SERIAL, OBHDEV AS DEVICE_NAMES, OBHSEQ AS FILE_SEQUENCE_NUMBER FROM QUSRBRM.QA1AOD where OBHLIB='QGPL' AND OBHNAM='QAPZCOVER' ORDER BY OBHDAT DESC
As you can see, the result is the same regardless of which of the two queries you use.
Now, in my case I needed the most recent save, so I applied a LIMIT 1 in my procedure, sorting in descending order by save date so that the most recent save always comes first. If you also want to parameterise the date, you simply need to add a parameter to the procedure and a condition to the WHERE clause, as in the sketch below.
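For example, with the 5770BR2 view the extra condition could look like this, where SAVE_DATE is a hypothetical date parameter you would add to the procedure signature:

SELECT VOLUME_SERIAL, DEVICE_NAMES, FILE_SEQUENCE_NUMBER
  FROM qusrbrm.backup_history_object
 WHERE SAVED_ITEM = 'QGPL'
   AND SAVED_OBJECT = 'QAPZCOVER'
   AND DATE(SAVE_TIMESTAMP) = SAVE_DATE  -- hypothetical date filter
 ORDER BY SAVE_TIMESTAMP DESC
 LIMIT 1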
Now we are ready to create our procedure: first we build the RSTOBJ command by retrieving data from QUSRBRM, then we use SYSTOOLS.LPRINTF to write the generated command to the joblog, and finally we execute it with the QSYS2.QCMDEXC procedure. In my case the RSTLIB parameter is optional; by default it is *SAVLIB:
SET PATH "QSYS","QSYS2","SYSPROC","SYSIBMADM" ;
CREATE OR REPLACE PROCEDURE SQLTOOLS.RSTNOMBR2 (
IN OBJLIST VARCHAR(1000) ,
IN LIB VARCHAR(10) ,
IN RSTLIB VARCHAR(10) DEFAULT '*SAVLIB' )
LANGUAGE SQL
SPECIFIC SQLTOOLS.RSTNOMBR2
NOT DETERMINISTIC
MODIFIES SQL DATA
CALLED ON NULL INPUT
SET OPTION ALWBLK = *ALLREAD ,
ALWCPYDTA = *OPTIMIZE ,
COMMIT = *NONE ,
DECRESULT = (31, 31, 00) ,
DYNDFTCOL = *NO ,
DYNUSRPRF = *USER ,
SRTSEQ = *HEX
BEGIN
DECLARE CMD VARCHAR ( 10000 ) ;
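-- Build the native RSTOBJ command using the media information
-- (device, sequence number, volume) of the most recent save of the
-- library recorded in the BRMS object detail file QA1AOD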
SELECT
'RSTOBJ OBJ(' CONCAT TRIM ( OBJLIST ) CONCAT ') SAVLIB(' CONCAT TRIM ( LIB ) CONCAT ') DEV(' CONCAT TRIM ( OBHDEV ) CONCAT ') SEQNBR('
CONCAT OBHSEQ CONCAT ') VOL(' CONCAT TRIM ( OBVOL ) CONCAT ') ENDOPT(*UNLOAD) OBJTYPE(*ALL) OPTION(*ALL) MBROPT(*ALL) ALWOBJDIF(*COMPATIBLE) RSTLIB('
CONCAT TRIM ( RSTLIB ) CONCAT ') DFRID(Q1ARSTID) FILEMBR((*ALL *NONE))'
INTO CMD
FROM QUSRBRM.QA1AOD WHERE OBHLIB = TRIM ( LIB ) ORDER BY OBHDAT DESC LIMIT 1 ;
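-- Write the generated command to the joblog, then execute it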
CALL SYSTOOLS . LPRINTF ( TRIM ( CMD ) ) ;
CALL QSYS2 . QCMDEXC ( TRIM ( CMD ) ) ;
END ;
OK, once we have created this procedure we are ready to test it… From a 5250 screen you can use this command: RUNSQL SQL('call sqltools.rstnombr2(''QAPZCOVER'', ''QGPL'', ''MYLIB'')') COMMIT(*NONE)
This is the result:
If you put this command in a CL program, you can perform this activity in a batch job (see the sketch below). In the same way you can also restore objects saved by other systems in the same BRMS network, provided they share media information; in that case you should query QA1AHS instead, because object detail is not shared.
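As a minimal sketch, assuming a hypothetical CL program RSTNOMBR in MYLIB that simply wraps the RUNSQL call shown above:

PGM
/* Restore QAPZCOVER from the latest BRMS save of QGPL into MYLIB, without members */
RUNSQL SQL('call sqltools.rstnombr2(''QAPZCOVER'', ''QGPL'', ''MYLIB'')') COMMIT(*NONE)
ENDPGM

Once compiled with CRTBNDCL, it can be scheduled or submitted to batch, for example with SBMJOB CMD(CALL PGM(MYLIB/RSTNOMBR)).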
Syslog, short for System Logging Protocol, is one of the cornerstones of modern IT infrastructures. Born in the early days of Unix systems, it has evolved into a standardized mechanism that enables devices and applications to send event and diagnostic messages to a central logging server. Its simplicity, flexibility, and widespread support make it indispensable across networks of any scale.
At its core, Syslog functions as a communication bridge between systems and administrators. It allows servers (including IBM i partitions), routers, switches, and even software applications to report what is happening inside them: routine processes, configuration changes, warning alerts, or system failures. These messages can also be transmitted in real time to centralized collectors, allowing professionals to stay informed about what is occurring in their environments without needing to inspect each machine individually.
This centralized approach is critical in environments that demand security and reliability. From banks to hospitals to government networks, organizations rely on Syslog not just for operational awareness but also for auditing and compliance. Log files generated by Syslog can help trace user activities and identify suspicious behavior or cyberattacks. That makes it an essential component in both reactive troubleshooting and proactive monitoring strategies.
So, on IBM i there are at least three places from which you can generate Syslog-format output.
The first place from which you can extract syslog-format output is the history log. The QSYS2.HISTORY_LOG_INFO table function allows you to generate output in this format. In my example, I want to highlight five restore operations performed today: SELECT syslog_facility, syslog_severity, syslog_event FROM TABLE (QSYS2.HISTORY_LOG_INFO(START_TIME => CURRENT DATE, GENERATE_SYSLOG => 'RFC3164')) AS X WHERE message_id='CPC3703' FETCH FIRST 5 ROWS ONLY;
By changing the condition in the WHERE clause it is possible to work on other message IDs that may be more significant; for example, you could log the specific message ID for abnormal job terminations (since auditors enjoy asking for extractions of failed batch jobs). A possible sketch follows.
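For instance, assuming the job-completion message CPF1164 is the one of interest (you would still need to check the ending code in the message text, or the severity, to isolate the abnormal ends), a hedged sketch could be:

SELECT syslog_facility, syslog_severity, syslog_event
  FROM TABLE (QSYS2.HISTORY_LOG_INFO(START_TIME => CURRENT DATE, GENERATE_SYSLOG => 'RFC3164')) AS X
 WHERE message_id = 'CPF1164'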
The second tool that can be very useful is journal analysis with syslog: the QSYS2.DISPLAY_JOURNAL table function also allows you to generate output in syslog format. In my example, I extracted all audit journal (QSYS/QAUDJRN) entries indicating the deletion of an object on the system (DO entry type): SELECT syslog_facility, syslog_severity, syslog_event FROM TABLE (QSYS2.DISPLAY_JOURNAL('QSYS', 'QAUDJRN', GENERATE_SYSLOG => 'RFC5424')) AS X WHERE syslog_event IS NOT NULL AND JOURNAL_ENTRY_TYPE='DO';
Of course, it is possible to extract entries from any journal, including application journals, as in the sketch below.
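A minimal sketch, assuming a hypothetical application journal APPJRN in library MYLIB (the IS NOT NULL filter skips any entries for which no syslog mapping is produced):

SELECT syslog_facility, syslog_severity, syslog_event
  FROM TABLE (QSYS2.DISPLAY_JOURNAL('MYLIB', 'APPJRN', GENERATE_SYSLOG => 'RFC5424')) AS X
 WHERE syslog_event IS NOT NULL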
The last place that comes to mind is the log file of the system’s syslog service. In a previous article, we saw how this service can be used to log SSH activity. In my case the log file is located under /var/log/messages, so I can read it easily with the QSYS2.IFS_READ function: SELECT * FROM TABLE(QSYS2.IFS_READ(PATH_NAME => '/var/log/messages'));
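From there it is easy to combine IFS_READ with normal SQL predicates; for example, a hedged sketch that keeps only the sshd-related lines (assuming the daemon name appears in the message text):

SELECT LINE
  FROM TABLE(QSYS2.IFS_READ(PATH_NAME => '/var/log/messages')) AS X
 WHERE UPPER(LINE) LIKE '%SSHD%'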
These are just starting points… As mentioned previously, these entries are very important for monitoring events that occur on your systems. Having them logged and stored in a single repository alongside the logs of other platforms can make a difference when managing attacks or system incidents in general.
Do you use these features to monitor and manage events on your systems?
One of the tools that I still see underutilized on IBM i systems is function usage. Essentially, it is a tool that centrally manages permissions and authorizations for the use of certain operating system functions. It is a very powerful tool and must be handled with care. One of the use cases I have seen, for example, is the ability to block remote database connections without writing a single line of code, simply by working on user profiles or group profiles.
To view the current status, you can use Navigator or the WRKFCNUSG command. Alternatively, there is a system view that shows the same configuration, which you can easily query with: SELECT * FROM QSYS2.FUNCTION_USAGE.
In this window you can now see the current configuration of your system:
Now, simply by accessing Navigator you have already encountered a function usage. In fact, there is QIBM_NAV_ALL_FUNCTION, which establishes the access policies of the various users to Navigator functions. By default, this function usage is set to prevent all users from using it, while users with *ALLOBJ authority can use it.
This is because function usage has different levels of authorization: the default authorization that applies to all users, authorization for users with *ALLOBJ, and finally explicit authorizations that can be applied to individual profiles or individual group profiles.
When we talk about function usage, my advice is to choose the approach you want to follow and start applying it to the various components that may be affected by these changes. Let me explain: generally speaking, when it comes to security there are two approaches. The first allows everything to everyone except those who have been expressly denied, while the second denies everything to everyone except certain explicitly authorized users. I personally prefer the second approach, but it requires a more in-depth analysis and risk assessment.
Speaking of function usage, in addition to managing permissions on Navigator, it is also possible to manage permissions on BRMS (if installed) and on some TCP/IP servers, which we will now look at.
For example, let’s assume we want to block database connections (QZDASOINIT or DDM/DRDA connections). The strategy is to block access for all users, without distinguishing between *ALLOBJ and non-*ALLOBJ users, and then authorize specific individual users. In this case you need to edit QIBM_DB_ZDA and QIBM_DB_DDMDRDA.
So, as I said above, we have set DENIED as the default and Not Used (which behaves like DENIED) for *ALLOBJ users. Below that is the list of users that are explicitly authorized.
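If you prefer the command line to Navigator, the explicit authorizations can also be granted with CHGFCNUSG and then verified with SQL; a minimal sketch, assuming a hypothetical profile APPUSER that must keep its database access:

CHGFCNUSG FCNID(QIBM_DB_ZDA) USER(APPUSER) USAGE(*ALLOWED)
CHGFCNUSG FCNID(QIBM_DB_DDMDRDA) USER(APPUSER) USAGE(*ALLOWED)

SELECT * FROM QSYS2.FUNCTION_USAGE WHERE FUNCTION_ID IN ('QIBM_DB_ZDA', 'QIBM_DB_DDMDRDA')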
A few articles ago I talked about the pros and cons of Ubuntu on Power solutions, promising in the comments to write an article outlining the installation steps, and here it is.
Requirements
To proceed with the installation, it is essential to have created the partition first. In my example, the partition was created with 0.1 cores, 16 GB of RAM and 50 GB on IBM storage presented to the partition via SAN. These are clearly my figures; you can change them if you want to assign more resources, or if you use a different technology for storage access, such as storage pools.
This is the detail on the disk created on the storage:
Now, once the infrastructure setup is done and the partition sees the storage and the disk presented to it, we can proceed with downloading the Ubuntu image directly from the official site and uploading it to the VIOS (see this documentation if you don’t know how). The last step before being able to power on the machine is the creation of a virtual optical device: again from the HMC, in the ‘Virtual Storage’ panel, select the virtual optical tab and click the add button, choosing the VIOS to which the file with the operating system image has been uploaded.
Installation
Connect to the HMC using SSH, use the VTMENU command to choose the server hosting your partition, and then select the partition:
Start the partition in SMS mode (SMS plays a role similar to a PC’s BIOS).
Now follow the steps proposed in these screens
If everything works fine and you choose the correct option at each step, GRUB will start in a few seconds.
OK, now the installation proceeds as on any other architecture: you need to choose the network card you want to use, the disk and file system configuration, and the first user for this server. Once the installation is complete you can restart the partition:
Post-installation check
One of the most important things, in my opinion, is the multipath support that is natively installed. To check it, run multipath -ll and it will show you all the paths to your disk. In my scenario, I have 4 active paths to my disk for each VIOS:
As further proof that everything works, you can see that Ubuntu reports the same disk serial as the storage.
Note that this short tutorial was written for Ubuntu, but it also works fine with other distros, such as Debian, or with any other distribution that supports the ppc64le architecture.
Over the past few days I had the opportunity to attend the Common Europe 2025 Congress in Gothenburg. There were several very interesting sessions on different topics: some on the IBM i side, some on the AIX side, and some on the infrastructure and storage side.
The event opened with some important numbers: there were 463 participants, a record for the event, and records also for the number of sessions and sponsors present. The growing number of young people in the audience and among the various speakers also bodes well.
I was able to attend many sessions on AI, from the creation and execution of models that can run on IBM Power architecture to the code assistant for RPG. Yes, because WCA (aka WatsonX Code Assistant) was one of the announcements made by IBM at the conference. It is a code assistant based on IBM’s existing WatsonX framework, and the peculiarity of this model is that it has been trained exclusively on RPG programs. The model will be available free of charge in public preview from July for users who have registered; during the first stage it will be possible to test the documentation and explanation of existing code, and at a later stage (still to be defined) it will also be possible to generate new code and convert existing code into other languages. During the demos it was exciting to see how the model was able to read code and indicate precisely what it actually does, and not what the programmer expected it to do. WCA will run exclusively on VS Code; RDi will not support it (according to IBM, this product is too constrained by Eclipse’s logic and difficult to extend). This is because WCA is integrated with the Code4i extension, with which it shares context.
In addition to AI, there were several sessions on innovation and modernisation of the platform with newer and more user-friendly technologies. I was able to attend several sessions related to the open source world and to the projects in which the community is involved in different ways; one example among many is the cooperative development of the Code4i extension, which allows us to connect to and develop on IBM i systems directly from VS Code. For this extension, a specific session was presented on the contributions that can be made to its development, from interacting in GitHub discussions to fixing bugs and developing new functionality… many small and simple actions that can nevertheless be of great help to other users.
The other major topic presented at this conference was the release of IBM i version 7.6 with all that this entails, from the new SQL services to the many new security features. As you well know, IBM is doing everything it can to secure the various systems, but the effort must be shared among all the players involved, from IBM to programmers to systems engineers. With the new release, several new security features have been delivered, including the use of standard or customised MFA to strengthen authentication. The aim is to find ways to secure our environments and, at the same time, make them suitable for working in a world in which a thousand security certifications are needed to operate.
The event was not just about face-to-face sessions: it also provided plenty of time for discussion and networking between technicians, as well as for entertainment on the two organised evenings. Everything ended with a review of the numbers and, above all, with a “goodbye” until next year’s CEC2026 in Lyon.