Call for testers: play with the Code4i FS extension

During the Christmas holidays, between various lunches with relatives, an outing, and, of course, some rest, I had the opportunity to rework the Code4i FS extension a bit. It was a real challenge: it pushed me to learn a new language, TypeScript, and to discover aspects and SQL services of the IBM i operating system that I was completely unaware of before.

For those unfamiliar with it, this extension allows you to use and, in some cases, even manage additional objects beyond those traditionally supported by the standard extension. The aim is to improve the user experience for programmers (but not only them) by providing a single interface from which to work and get feedback, avoiding the need to switch frantically between applications. In addition, it is intended as a tool to help those who are just starting out and therefore have less experience, as the GUI greatly simplifies things.

There are 20 newly supported object types; here is a list with the major features:

Data Queue

  • Send Message: Keyed/non-keyed support, UTF8 format, length validation, key validation
  • Clear Queue: Removes all messages with confirmation
  • View: Messages (standard & UTF8), queue info, sender details, timestamps
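
Presumably these actions sit on top of the Db2 data queue services; as a sketch, sending to and clearing a queue from SQL looks like this (queue and library names are placeholders):

-- Send an entry to a data queue (keyed queues take an extra KEY_DATA parameter)
CALL QSYS2.SEND_DATA_QUEUE(
       MESSAGE_DATA       => 'Hello from SQL',
       DATA_QUEUE         => 'MYDTAQ',
       DATA_QUEUE_LIBRARY => 'MYLIB');

-- Remove all entries from the queue
CALL QSYS2.CLEAR_DATA_QUEUE(
       DATA_QUEUE         => 'MYDTAQ',
       DATA_QUEUE_LIBRARY => 'MYLIB');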

Data Area

  • Change Value: All types (*CHAR, *DEC, *LGL), substring modification (start/length), type-based validation, range checking
  • View: Current value, type, length, decimal positions, text description

Binding Directory

  • Add Entry: *MODULE/*SRVPGM support, *IMMED/*DEFER activation, path validation (library/object format)
  • Remove Entry: Individual entry removal with confirmation
  • View: Entries list, exported procedures from bound service programs

File

  • Query File: Opens SQL editor with pre-filled SELECT statement
  • View: File/table/view/index info, statistics, members, dependent objects, supports PF/LF/VIEW/INDEX

Job Queue

  • Hold/Release/Clear: Queue-level operations with status validation
  • Hold/Release/End Jobs: Individual job management with confirmation
  • View: Queue status, jobs list with details (status, submitter, timestamps)

Journal

  • Generate Receiver: Creates new journal receiver (CHGJRN JRNRCV(*GEN))
  • Display Entries: Opens SQL editor with DISPLAY_JOURNAL table function query
  • View: Journal configuration, receiver chain with statistics, sequence numbers, timestamps
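
For reference, the pre-filled query is presumably something like this sketch (journal library and name are placeholders):

-- Browse entries of MYLIB/MYJRN (first 100 from the current receiver chain)
SELECT ENTRY_TIMESTAMP, SEQUENCE_NUMBER, JOURNAL_CODE,
       JOURNAL_ENTRY_TYPE, OBJECT, JOB_NAME
  FROM TABLE (QSYS2.DISPLAY_JOURNAL('MYLIB', 'MYJRN'))
 FETCH FIRST 100 ROWS ONLY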

Message Queue

  • Clear Queue: Removes all messages with confirmation (CLRMSGQ)
  • View: Messages with ID, first/second level text, severity, sender job/user, timestamps

Output Queue

  • Hold/Release/Clear: Queue management operations (HLDOUTQ/RLSOUTQ/CLROUTQ)
  • Manage Writer: Auto start/stop based on current state (STRRMTWTR/STRPRTWTR/ENDWTR)
  • Delete Old Spools: Age-based deletion with day input (SYSTOOLS.DELETE_OLD_SPOOLED_FILES)
  • Generate PDF: Download individual spool as PDF (SYSTOOLS.GENERATE_PDF)
  • Delete Spool: Remove individual spooled file (DLTSPLF)
  • View: Queue status, spooled files list with details (name, user, job, pages, size)

Save File

  • Download: Export SAVF to local file (copy to stream file, then download)
  • Upload: Import local file to SAVF (upload, then copy from stream file)
  • Clear: Remove all objects from SAVF (CLRSAVF)
  • Save: Save objects/libraries to SAVF with comprehensive options (SAVOBJ/SAVLIB)
  • Restore: Restore from SAVF with comprehensive options (RSTOBJ/RSTLIB)
  • View: SAVF info, objects list, file members, spooled files, IFS objects (multi-panel)
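
As an idea of how the download step can work server-side, the copy to a stream file is a plain CPY (paths are placeholders):

-- Copy the SAVF into the IFS so it can be downloaded as a stream file
CALL QSYS2.QCMDEXC('CPY OBJ(''/QSYS.LIB/MYLIB.LIB/MYSAVF.FILE'') TOOBJ(''/tmp/mysavf.savf'') REPLACE(*YES)')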

Subsystem

  • Start: Start subsystem with confirmation (STRSBS)
  • End: End subsystem with option selection *IMMED/*CNTRLD (ENDSBS)
  • End Jobs: End individual jobs in subsystem (ENDJOB)
  • View: Subsystem status, pools, autostart jobs, workstation entries, routing entries, prestart jobs, job queue entries, active jobs

User Space

  • Change Value: Modify user space data with start position and value input (QSYS2.CHANGE_USER_SPACE)
  • View: Size, extendable flag, initial value, domain, data (text and binary/hex)

Class (Read-Only)

  • View: Run priority, time slice, resource limits (CPU time, temporary storage, threads), default wait time, purge eligibility, usage statistics
  • Note: Uses QWCRCLSI API via SQL stored procedure (auto-created on first use)

Command (Read-Only)

  • View: Processing program, validity checking program, prompt override program, message/help files, execution environment settings (interactive/batch/REXX), threading attributes, CCSID

DDM File (Read-Only)

  • View: Remote location info (system name/address, port), access method, remote file name/library, connection settings
  • Note: Parses DSPDDMF output, handles multi-line field values

Job Description (Read-Only)

  • View: Job queue, output queue, library list, accounting code, routing data, message logging, job switches, hold on job queue

Journal Receiver (Read-Only)

  • View: Status, size, sequence numbers (first/last), attach/detach/save timestamps, linked receivers (previous/next), remote journal configuration, filter settings

Message File (Read-Only)

  • View: All messages with ID, first/second level text, severity, reply type, default reply, valid reply values/ranges

Module (Read-Only)

  • View: Basic info (creation date, source file, compiler options), size details (code, debug data, static storage), procedures list, imports/exports, referenced system objects, copyright strings
  • Note: Uses DSPMOD with multiple DETAIL options (*BASIC, *SIZE, *IMPORT, *EXPORT, *PROCLIST, *REFSYSOBJ, *COPYRIGHT)

Program/Service Program (Read-Only)

  • View: Program info, bound modules with source details, bound service programs with signatures, exported procedures (SRVPGM only), SQL settings, optimization details, activation group

Query Definition

  • Translate to SQL: Converts Query/400 definitions to SQL format using RTVQMQRY command
  • View: SQL translation of query definition with proper table notation (LIB.FILE instead of LIB/FILE)
  • Note: Uses temporary source file and alias for extraction, automatically cleans up temporary objects
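
A rough sketch of that extraction flow (the QTEMP names are hypothetical):

-- Extract the SQL form of a QRYDFN into a temporary source member
CALL QSYS2.QCMDEXC('CRTSRCPF FILE(QTEMP/QSQLSRC) RCDLEN(112)');
CALL QSYS2.QCMDEXC('RTVQMQRY QMQRY(MYLIB/MYQRY) SRCFILE(QTEMP/QSQLSRC) SRCMBR(MYQRY) ALWQRYDFN(*YES)');

-- Read the generated source through a temporary alias, then clean up
CREATE OR REPLACE ALIAS QTEMP.QRYSRC FOR QTEMP.QSQLSRC (MYQRY);
SELECT SRCDTA FROM QTEMP.QRYSRC;
DROP ALIAS QTEMP.QRYSRC;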

I would like to point out that this is not the final version; it is currently being reviewed by the Code4i team as the extension is part of that ecosystem. It is therefore possible that it may undergo further changes.
However, it would be interesting for other users to test it now, in ALPHA, so that any bugs can be found and corrected, and to gather feedback on the new features. So, if you want to be a tester, write a comment and I’ll get back to you with instructions.

Andrea

Let’s try BOB – part 1

This week, the wait is finally over… a few weeks after signing up for the demo, BOB has finally arrived, and I have the chance to try it out in advance.

For those who are not yet familiar with it, BOB is a new IDE owned by IBM. It is not a completely new IDE; in fact, for those who regularly use Visual Studio Code, the environment will be very familiar, because BOB is based on it. The reasons for this choice are quite simple to identify. First, compared to RDi, Visual Studio Code is much lighter and more performant, as well as being significantly more modular. Second, Visual Studio Code is a standard IDE, suitable for any programming language. In fact, there are extensions for virtually every language, which makes development on IBM i much closer to industry standards.

Now, if BOB were just a copy-and-paste version of VSCode, it would be useless… The real added value of BOB is its native integration with artificial intelligence. When you open it for the first time, you immediately notice the prompt for interacting with the agent. Compared to other competitors, the artificial intelligence offered here aggregates several models, so you have a complete stack that can address very different needs. Moreover, as you can imagine, since BOB is from IBM, it is well trained in all IBM languages, such as RPG.

I’ll start today by attaching the link to BOB’s YouTube channel, where you can see BOB at work on some use cases tested directly by IBM: https://www.youtube.com/channel/UC-dkbPjzN2bh-k-V4rZQppQ

Now, let’s talk about my experience… as soon as it arrived, I put it to work on one of my Java projects. It’s actually the backend of an HTTP portal that we use to provide services to our customers. First, I asked it to generate some documentation (yes, I don’t like writing documentation), and in about an hour it wrote all the Javadoc for over 140 classes. Besides writing the documentation, I was pleased to see that it is able to suggest possible improvements to the code. In this case, for example, it highlighted a possible SQL injection.

This is just an example; it also suggests possible code refactoring.

Now, let’s talk about evaluations… We are only at the beginning, and I haven’t been able to test it sufficiently yet, but let’s say that expectations are very high. The documentation I asked it to write seems very much on topic… I still have some doubts about code generation, but I also believe it should definitely be an aid to the programmer (and it is), not a replacement. Another definitely positive thing is that it asks for permission before accessing/modifying a file, showing all the changes in a preview and explaining them. On the other hand, the demo comes with a $20 budget, and I’ve already burned through about $14 just with the documentation I asked for, which means I can’t really test it thoroughly.

From my point of view, the next steps concern the RPG world, i.e., the automatic generation of test cases and code documentation. This is because, in addition to the IDE, it is also possible to invoke BOB’s APIs from the CLI, meaning it can be integrated into automatic compilation/release pipelines.

For completeness, I am attaching the link to sign up for the BOB demo in case you haven’t already done so: https://www.ibm.com/products/bob

Have you had the opportunity to test and use artificial intelligence tools in your work? What do you think?

Andrea

Using SQL to log locks on objects

A few days ago, a customer opened a case with me because he couldn’t work out the origin of the locks occurring on certain objects. This lock condition was causing problems for the application procedures, which found the files busy and were unable to operate on them.

There are various solutions to this issue, but the one that seemed most convenient and functional to me was to use our friend SQL.

In the tool we will implement today, we will use the QSYS2.OBJECT_LOCK_INFO system view. This view returns a lot of very useful information, such as the locked object, the job holding the lock, and the lock type. It can also return very detailed information, such as the procedure/module and the statement that placed the lock.

Let’s look at an example of a query on this view without any filters:

select * from QSYS2.OBJECT_LOCK_INFO

As you can see, it gives us a lot of information. Let’s go back for a moment to the customer’s primary need, which is to understand who placed locks on a specific object and when, so we’ll start modifying the query. In this case we need less information, so I’ll select only a few fields:

SELECT SYSTEM_OBJECT_SCHEMA, SYSTEM_OBJECT_NAME, SYSTEM_TABLE_MEMBER,
       OBJECT_TYPE, ASPGRP, MEMBER_LOCK_TYPE, LOCK_STATE, LOCK_STATUS,
       LOCK_SCOPE, JOB_NAME, PROGRAM_LIBRARY_NAME, PROGRAM_NAME,
       MODULE_LIBRARY_NAME, MODULE_NAME, PROCEDURE_NAME, STATEMENT_ID,
       CURRENT TIMESTAMP AS TIMESTAMP
  FROM QSYS2.OBJECT_LOCK_INFO
 WHERE SYSTEM_OBJECT_SCHEMA='XXXXXX' AND SYSTEM_OBJECT_NAME='XXXXX'

In my example, I took the display file for my system signon.

Okay, now that I’ve written the basic query, I’m ready to build the procedure.

  • Input parameters
    • Library name
    • Object name
    • Object type
    • Analysis duration
    • Analysis output file library
    • Analysis file name
  • Procedure body: first, I create the template file (a CREATE TABLE ... WITH NO DATA) that I will use later to collect all the entries I see. After that, I loop until the analysis duration passed as a parameter has elapsed. During this loop, I insert the data extracted by the SELECT directly into the temporary file I created earlier. I chose to sample every 30 seconds, but feel free to change this interval. Once the time is up, I copy all the output to the file that I passed as a parameter to the procedure (see the sketch after this list)
  • Calling the procedure: you can call it from Run SQL Scripts, STRSQL, VS Code, or RUNSQL; keep in mind that if you want to run this analysis for a long time, it is better to submit it in batch.
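
To make the flow concrete, here is a minimal sketch of such a procedure (the names, the simplified column list, and the fixed 30-second interval are placeholders; my actual code is more complete):

CREATE OR REPLACE PROCEDURE MYLIB.LOG_OBJECT_LOCKS (
    IN P_LIBRARY VARCHAR(10),
    IN P_OBJECT  VARCHAR(10),
    IN P_OBJTYPE VARCHAR(8),
    IN P_MINUTES INTEGER,
    IN P_OUTLIB  VARCHAR(10),
    IN P_OUTFILE VARCHAR(10))
LANGUAGE SQL
BEGIN
  DECLARE V_END TIMESTAMP;
  SET V_END = CURRENT TIMESTAMP + P_MINUTES MINUTES;

  -- Template file: same layout as the result set, but no data
  CREATE TABLE QTEMP.LOCKTMP AS
    (SELECT SYSTEM_OBJECT_SCHEMA, SYSTEM_OBJECT_NAME, OBJECT_TYPE,
            LOCK_STATE, LOCK_STATUS, JOB_NAME, PROGRAM_NAME,
            MODULE_NAME, PROCEDURE_NAME, STATEMENT_ID,
            CURRENT TIMESTAMP AS SAMPLE_TIME
       FROM QSYS2.OBJECT_LOCK_INFO) WITH NO DATA;

  -- Sample the lock view every 30 seconds until the duration elapses
  WHILE CURRENT TIMESTAMP < V_END DO
    INSERT INTO QTEMP.LOCKTMP
      SELECT SYSTEM_OBJECT_SCHEMA, SYSTEM_OBJECT_NAME, OBJECT_TYPE,
             LOCK_STATE, LOCK_STATUS, JOB_NAME, PROGRAM_NAME,
             MODULE_NAME, PROCEDURE_NAME, STATEMENT_ID,
             CURRENT TIMESTAMP
        FROM QSYS2.OBJECT_LOCK_INFO
       WHERE SYSTEM_OBJECT_SCHEMA = P_LIBRARY
         AND SYSTEM_OBJECT_NAME = P_OBJECT
         AND OBJECT_TYPE = P_OBJTYPE;
    CALL QSYS2.QCMDEXC('DLYJOB DLY(30)');
  END WHILE;

  -- Copy everything collected to the requested output file
  EXECUTE IMMEDIATE
    'CREATE TABLE ' CONCAT P_OUTLIB CONCAT '.' CONCAT P_OUTFILE
    CONCAT ' AS (SELECT * FROM QTEMP.LOCKTMP) WITH DATA';
END;

-- Example call: log locks on MYLIB/MYFILE for 60 minutes
-- CALL MYLIB.LOG_OBJECT_LOCKS('MYLIB', 'MYFILE', '*FILE', 60, 'MYLIB', 'LOCKLOG')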

My full code is available on my GitHub repo.

You need to be careful! The query I wrote works only on releases 7.5 and later because of the WHERE clause: as you can see, it is quite complicated, and this kind of comparison in the WHERE clause is not supported on older releases. So, as I have said many times, PLEASE STAY CURRENT.

Andrea

Write your own IBM i SIEM in Python

A few articles ago, we talked about the native integration of the syslog format within IBM i. We also looked at two SQL services (the history log and the display journal) that were able to generate data in this format in a simple way.

Today, in this article, I propose a possible implementation of a SIEM that, starting from the logs on the IBM i system, feeds a Grafana dashboard. For those unfamiliar with Grafana, this tool is one of the leaders in interactive data visualization and analysis. It has native connectors for a wide range of data sources and can natively raise alerts on the metrics being monitored.

The downside is that there is no DB2 datasource (at least in the free version). In our scenario, we chose to rely on a PostgreSQL instance running on AIX, which allowed us to build the dashboard with extreme simplicity.

Our infrastructure consists of one (or more) IBM i partitions that contain all the source data (journal receivers, history logs, etc.) and a small Python script that queries the systems using the two specific views; the collected data is then sent to a PostgreSQL database, which is in turn queried by the dashboards built on Grafana for analysis purposes.

Here is how the Python script is organized:

  1. Script information: the script can run directly on IBM i but, in an environment with several machines, it can also run from a centralized host. The only installation required is the ODBC driver, for which there is a wealth of supporting documentation. A configuration file contains both the database connection information and the host master data. The script is invoked by passing LOG as a parameter (in which case the DSPLOG data will be analyzed) or JRN (in which case the QAUDJRN entries will be read). Another parameter is the list of msgids of interest (valid in the case of LOG) or the entry types (in the case of JRN); the value *ALL is also supported
  2. getQhst: this function extracts entries with the msgids specified as a parameter. The extraction checks the last entry in the history log and reads everything from the previous extraction up to the latest entry detected now
  3. getJrn: this function extracts entries from the system audit journal. Again, the tool keeps track of the last entry read (see the query sketches after this list)
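
Both functions lean on the syslog-enabled SQL services mentioned above; here is a minimal sketch of the two extraction queries (the fixed start timestamps stand in for the last-read position the script tracks):

-- History log (DSPLOG) entries, already formatted as syslog (RFC5424)
SELECT SYSLOG_FACILITY, SYSLOG_SEVERITY, SYSLOG_EVENT
  FROM TABLE (QSYS2.HISTORY_LOG_INFO(
          START_TIME      => CURRENT TIMESTAMP - 1 DAY,
          GENERATE_SYSLOG => 'RFC5424'))
 WHERE MESSAGE_ID IN ('CPF1124', 'CPF1164');  -- msgids of interest

-- Audit journal (QAUDJRN) entries in the same format
SELECT SYSLOG_FACILITY, SYSLOG_SEVERITY, SYSLOG_EVENT
  FROM TABLE (QSYS2.DISPLAY_JOURNAL(
          'QSYS', 'QAUDJRN',
          STARTING_TIMESTAMP => CURRENT TIMESTAMP - 1 DAY,
          GENERATE_SYSLOG    => 'RFC5424'))
 WHERE SYSLOG_EVENT IS NOT NULL;  -- skip entries with no syslog mapping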

As you can see, the extracted data is then dumped directly into a PostgreSQL table.

Below is the dashboard I built on Grafana:

As you can see, the information is fairly basic: a graph showing the types of entries and their counts, a graph showing the occurrence of events on different days, and finally the extraction of events in syslog format. The same dashboard is also available for the history log.

The code I wrote is available on my GitHub repo.

Andrea

Finding broken NetServer shares in an easy way

More and more frequently, customers are reporting that after upgrading to Windows 11 24H2 they have problems connecting to network shares via NetServer.

A quick aside: in itself, the fact that the IFS of an IBM i system is accessible is a great thing, but you have to be extremely careful about what you share and with what permissions. I frequently see systems where the root ( / ) is shared read/write; this is very dangerous because, in addition to the IFS, it lets you browse the other file systems on our machines, such as QSYS.LIB and QDLS. So try, if possible, to share as little as possible with the lowest permissions possible. End of aside.

Returning to the initial issue, it does seem that Microsoft’s update (released a few months ago by now) introduced problems with the support of certain characters in IFS file names. So, if a folder contains a file whose name includes one of the offending special characters, Windows loses access to that folder. The characters that cause these problems are the following:

  • < (less than)
  • > (greater than)
  • : (colon)
  • " (double quote)
  • / (forward slash)
  • \ (backslash)
  • | (vertical bar or pipe)
  • ? (question mark)
  • * (asterisk)

Here, as indicated in this IBM documentation link, changing the file names to remove the offending characters will restore access to the shared folders. Now, in a production context you can imagine there are several shared folders, and the IFS is an infinitely large place with infinite files (most of the time abandoned :-D), so we need a clever way to check which shared folders might have problems. To do this we will rely on two SQL services: the first to list the folders we are sharing, the second to list the paths that contain special characters in their names.

Thanks to the QSYS2.SERVER_SHARE_INFO view, we can list the paths that have been shared via NetServer with the following query:

select PATH_NAME from QSYS2.SERVER_SHARE_INFO where PATH_NAME is not null

Now that we have the list of all shared directories, we just need to analyze their contents. To do this we will use the QSYS2.IFS_OBJECT_STATISTICS table function, which takes as parameters the starting path, any paths to exclude, and an indication of whether to scan subdirectories as well; in our case we clearly want it to scan them too. We are not interested in every file, only those whose names contain the special characters not supported by Windows, so we will apply a WHERE clause. Here is an example of the query on a small path (be aware that this query could run for a long time):

SELECT PATH_NAME,CREATE_TIMESTAMP,ACCESS_TIMESTAMP,OBJECT_OWNER
      FROM TABLE (
        QSYS2.IFS_OBJECT_STATISTICS(
            START_PATH_NAME => '/qibm/ProdData/Access/ACS/Base',
            OMIT_LIST => '/QSYS.LIB /QNTC /QFILESVR.400',
            SUBTREE_DIRECTORIES => 'YES')
        )
      WHERE PATH_NAME LIKE '%\%'
        OR PATH_NAME LIKE '%<%'
        OR PATH_NAME LIKE '%>%'
        OR PATH_NAME LIKE '%|%'
        OR PATH_NAME LIKE '%*%'
        OR PATH_NAME LIKE '%:%'
        OR PATH_NAME LIKE '%?%'

In my example I took a fairly small path (the one with the ACS installer), so it took little time. Moreover, no file name contains any offending characters, so I can rest assured: in fact, the query did not return any rows.

At this point, there is nothing left to do but combine the two queries into a very simple RPG program. Considering that the second scan query can take a long time, it is a good idea to submit its execution in batch, saving the results into another table.
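
Conceptually, the combined logic boils down to a single statement like this sketch (my RPG program essentially wraps this; MYSTUFF.SHARECHAR is the output file mentioned below):

-- For every shared path, scan its subtree and keep only the
-- file names containing characters that break Windows access
CREATE TABLE MYSTUFF.SHARECHAR AS (
  SELECT S.PATH_NAME AS SHARE_PATH,
         I.PATH_NAME, I.OBJECT_OWNER,
         I.CREATE_TIMESTAMP, I.ACCESS_TIMESTAMP
    FROM QSYS2.SERVER_SHARE_INFO S,
         TABLE (QSYS2.IFS_OBJECT_STATISTICS(
                 START_PATH_NAME     => S.PATH_NAME,
                 OMIT_LIST           => '/QSYS.LIB /QNTC /QFILESVR.400',
                 SUBTREE_DIRECTORIES => 'YES')) I
   WHERE S.PATH_NAME IS NOT NULL
     AND (I.PATH_NAME LIKE '%<%' OR I.PATH_NAME LIKE '%>%'
       OR I.PATH_NAME LIKE '%:%' OR I.PATH_NAME LIKE '%"%'
       OR I.PATH_NAME LIKE '%\%' OR I.PATH_NAME LIKE '%|%'
       OR I.PATH_NAME LIKE '%?%' OR I.PATH_NAME LIKE '%*%')
) WITH DATA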

As you can see, my program is pretty short; it only combines two easy queries, and in this way you are able to find every file that would break the shares. At the end of the execution, please check the MYSTUFF/SHARECHAR file: there you can find the details of each file, such as path name, owner, and creation and last-access timestamps.

Remember, this is SQL, so you can also change whatever you want, such as the columns, the destination file, and so on.

I hope this gives you a way to save time on what can be a rather insidious and annoying problem.

Andrea

Ordering PTFs automatically

In the last post, we saw how to manage defective PTFs automatically using SQL. Today, we will see how easy it is to check current PTF levels directly against IBM’s servers.

Let me say that it is quite important to keep systems up to date, both in terms of release and PTFs. This way you can use all the new features and SQL services and, last but not least, you get all the security patches needed to cover the vulnerabilities that come out day after day.

Let’s check our current PTF groups using the GROUP_PTF_INFO view:

SELECT PTF_GROUP_NAME,PTF_GROUP_DESCRIPTION, PTF_GROUP_LEVEL,PTF_GROUP_STATUS
FROM QSYS2.GROUP_PTF_INFO

In my example, I’ve got some groups in NOT INSTALLED status, which means the system knows there are several PTFs that have not been installed… In my case this is fine, because I’ve ordered some PTFs using SNDPTFORD.

Now, let’s compare my levels with IBM’s official levels using GROUP_PTF_CURRENCY, listing only the groups where the installed and available levels differ:

SELECT PTF_GROUP_ID, PTF_GROUP_TITLE, PTF_GROUP_LEVEL_INSTALLED,PTF_GROUP_LEVEL_AVAILABLE
FROM SYSTOOLS.GROUP_PTF_CURRENCY 
WHERE PTF_GROUP_LEVEL_INSTALLED<>PTF_GROUP_LEVEL_AVAILABLE	

Quite fun: my system is fairly up to date, and I only need to install the SECURITY and HIPER groups. Keep in mind that these groups are the ones updated most frequently.

Now that we have seen all the SQL services we need, let’s create a simple program that checks PTF currency and, if there are newer PTFs, proceeds to download them.

Here is what the code does: first, we count how many groups on the system are not current. If any are found, we permanently apply all the LIC PTFs, which is quite useful before installing a cumulative group. After that, we create an IFS path that will receive all the ISOs. Finally, we order all the PTF groups, creating an image catalog.
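
A minimal sketch of that flow in plain SQL (directory, catalog, and group names are placeholders, and the exact SNDPTFORD parameters may vary by release, so check the command help on your system):

BEGIN
  DECLARE V_STALE INTEGER;

  -- How many groups are behind IBM's current level?
  SET V_STALE = (SELECT COUNT(*)
                   FROM SYSTOOLS.GROUP_PTF_CURRENCY
                  WHERE PTF_GROUP_LEVEL_INSTALLED <> PTF_GROUP_LEVEL_AVAILABLE);

  IF V_STALE > 0 THEN
    -- Permanently apply the LIC PTFs
    CALL QSYS2.QCMDEXC('APYPTF LICPGM(5770999) SELECT(*ALL) APY(*PERM)');
    -- IFS directory that will receive the images
    CALL QSYS2.QCMDEXC('MKDIR DIR(''/ptfimg'')');
    -- Order the desired PTF groups as images in an image catalog
    CALL QSYS2.QCMDEXC('SNDPTFORD PTFID((*HIPERGRP) (*SECGRP)) DLVRYFMT(*IMAGE) IMGCLG(PTFCLG) IMGDIR(''/ptfimg'')');
  END IF;
END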

So, this is just one idea: you could also choose to order only single groups, or to download save files instead of bin-format images.

In this way you can automatically check for and download updates for your system. Also in this case you need an internet connection; without it, you cannot query IBM’s servers. Another thing to consider is that before running this program in batch, you need to add your contact information (CHGCNTINF command).

Again, the source is available on my GitHub repo.

Let me know what you think.

Andrea