Ordering PTFs automatically

In the last post, we saw how to manage defective PTFs automatically using SQL. Today, we will see how easy it is to check your current PTF levels directly from the IBM servers.

Let me say that it is quite important to keep systems updated, both in terms of OS version and PTFs. This way you are able to use all the new features and SQL services and, last but not least, you get all the security patches needed to cover the vulnerabilities that come out day by day.

Let's check our current PTF groups using the QSYS2.GROUP_PTF_INFO view:

SELECT PTF_GROUP_NAME, PTF_GROUP_DESCRIPTION, PTF_GROUP_LEVEL, PTF_GROUP_STATUS
FROM QSYS2.GROUP_PTF_INFO

In my example I've got some groups in NOT INSTALLED status, which means the system knows there are several PTFs that are not installed. In my case that's OK, because I've ordered some PTFs using SNDPTFORD.

Now, let's compare my levels with the official IBM levels using GROUP_PTF_CURRENCY, listing only the groups where the installed and available levels differ:

SELECT PTF_GROUP_ID, PTF_GROUP_TITLE, PTF_GROUP_LEVEL_INSTALLED, PTF_GROUP_LEVEL_AVAILABLE
FROM SYSTOOLS.GROUP_PTF_CURRENCY
WHERE PTF_GROUP_LEVEL_INSTALLED <> PTF_GROUP_LEVEL_AVAILABLE

Quite fun: my system is fairly up to date, I only need to install the SECURITY and HIPER groups. Keep in mind that these are the groups that are updated most frequently.

Now that we have seen the SQL services we need, let's create a simple program that checks PTF currency and, if there are new PTFs, downloads them.

Here is how the code works: first, we count how many groups on the system are not current. If any are found, we permanently apply all LIC PTFs, which is quite useful when you install a cumulative group. After that, we create an IFS path that will receive the ISO images. Finally, we order all the group PTFs, creating an image catalog.
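The full source is on my GitHub repo; below is a minimal SQL sketch of that flow, using QSYS2.QCMDEXC to run the CL commands (the IFS path and the group list are placeholders, adapt them to your system):

-- 1) Count the PTF groups that are not current
SELECT COUNT(*)
FROM SYSTOOLS.GROUP_PTF_CURRENCY
WHERE PTF_GROUP_LEVEL_INSTALLED <> PTF_GROUP_LEVEL_AVAILABLE;

-- If the count is greater than zero:
-- 2) Permanently apply all LIC PTFs
CALL QSYS2.QCMDEXC('APYPTF LICPGM(5770999) SELECT(*ALL) APY(*PERM) DELAYED(*NO)');

-- 3) Create the IFS directory that will receive the images
CALL QSYS2.QCMDEXC('MKDIR DIR(''/home/ptf/images'')');

-- 4) Order the group PTFs as optical images into that directory
--    (the downloaded images can then be added to an image catalog)
CALL QSYS2.QCMDEXC('SNDPTFORD PTFID((*CUMPKG) (*HIPERGRP) (*SECGRP)) DLVRYFMT(*IMAGE) IMGDIR(''/home/ptf/images'') REORDER(*YES)');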

This is just one approach: you could also order single groups, or download save files instead of binary-format images.

This way you can automatically check for and download updates for your system. Again, you need an internet connection; without it you cannot query the IBM servers. Another thing to consider is that before running this program in batch you need to fill in your contact information (CHGCNTINF command).

As in the previous post, the source is available on my GitHub repo.

Let me know what you think.

Andrea

Managing Defective PTFs automatically

Several days ago, I opened a ticket with IBM support for a problem that was affecting one of my production LPARs. The technician asked me to generate a System Snapshot, and once I uploaded it to the website an automatic agent warned me about a defective PTF installed on my partition. I also read the cover letter and, wow, this PTF could make my SAVSYS unusable for a restore.

We are IBM i systems engineers, we must not panic, so let’s sling on our SQL buddy and figure out how to ferret out faulty PTFs installed on our systems.

The first way is to use the QMGTOOLS utility, which you can install following this page:

From the MG menu, use option 24 (PTF MENU) and then option 3 (COMPARE DEFECTIVE PTFS FROM IBM). Your IBM i system then connects to the IBM servers and compares the list of installed PTFs with the official list of defective PTFs.

This is one possible way to do it, but honestly it's not my favourite, because it requires some manual steps; for instance, you at least need to read the spool file.

And here we come to my personal favourite: the DEFECTIVE_PTF_CURRENCY view in SYSTOOLS. This view is quite helpful because it gives you all the information you need about defective PTFs, such as the defective PTF ID, the licensed program product, the APAR ID and the fixing PTF. Now let's test this query on my system:

SELECT DEFECTIVE_PTF, PRODUCT_ID, APAR_ID,
       CASE WHEN FIXING_PTF IS NULL OR FIXING_PTF = 'UNKNOWN'
            THEN '' ELSE FIXING_PTF
       END AS FIXING_PTF
FROM SYSTOOLS.DEFECTIVE_PTF_CURRENCY

As you can see, in my example I have no rows, which means I haven't got any defective PTF. Notice that in my query I test the value of the FIXING_PTF column; that's because we will now create a simple RPG program that automatically checks for defective PTFs, lists all the fixing PTFs, orders them, and then sends an email report.
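My real program is written in RPG, but the core logic can be sketched in plain SQL like this (a rough sketch; the e-mail step is only hinted at in the last comment):

-- Order every known fixing PTF, one SNDPTFORD per row
BEGIN
  FOR ptfs AS
    SELECT FIXING_PTF
      FROM SYSTOOLS.DEFECTIVE_PTF_CURRENCY
     WHERE FIXING_PTF IS NOT NULL AND FIXING_PTF <> 'UNKNOWN'
  DO
    CALL QSYS2.QCMDEXC('SNDPTFORD PTFID((' CONCAT FIXING_PTF CONCAT '))');
  END FOR;
END;
-- (the real program also e-mails me the list, e.g. with SNDSMTPEMM)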

As you can see, if any defective PTF is found, the program orders the fixing PTF and emails me the entire list of defective PTFs.

You can find this code on my Git repo. Keep in mind that your partition must be able to reach the IBM servers, both to get the defective PTF list and to order the fixing PTFs.

And you, how do you manage defective PTFs on your system?

Andrea

All you need to know about Query Supervisor

One of the dreams of sysadmins is monitoring SQL usage on the system: how much temporary storage SQL statements use, which statements run for a long time, and so on. In my daily job I see a lot of situations in which users run lots of SQL statements from ODBC tools in Excel and, without them noticing, temporary storage starts to grow because of all the joins involved.

But what if I told you that IBM i already has this feature built into the OS?

Since IBM i 7.3 there is a built-in mechanism called Query Supervisor, whose goal is to define thresholds on the execution of SQL statements, based on metrics such as elapsed time, CPU time, temporary storage and I/O count.

As I said, this feature is part of the operating system, so you don't need any additional product; the only thing you need to figure out is what you want to happen when a threshold is reached. With this real-time monitor you can set up several different actions, such as:

  • holding the job
  • ending the SQL statement
  • sending a message to a message queue
  • logging the SQL statement

At this point, we can start with some considerations about this topic:

  1. as mentioned earlier, this is real-time monitoring, so unlike other mechanisms or tools it evaluates thresholds on live data, not on estimates or assumptions.
  2. the metrics that can be monitored are CPU time (the number of CPU seconds used while executing an SQL statement), elapsed time (the number of seconds taken to execute the statement), temporary storage (the amount of temporary memory allocated, expressed in MB), and I/O count (the number of I/O operations used to execute the statement).
  3. not all SQL statements trigger this mechanism; statements that do not require any processing by the system (such as a trivial INSERT) are not considered. In addition, for some SQL statements the metrics are computed not over the entire execution but only over some steps (for example, in a DELETE only the search for the records to delete in the WHERE clause counts toward the metrics, while the actual deletion does not). If an SQL statement references or calls functions or procedures, the metric takes into account the time taken to execute each individual function.
  4. thresholds and the rules that trigger them can be defined very precisely: you can work on individual subsystems, users, or jobs, and by combining these filters you can handle each situation in the optimal way.

Let's add a new threshold on my test LPAR!

Before starting, let's check whether any threshold has already been registered, with this query:

SELECT * FROM QSYS2.QUERY_SUPERVISOR

As you can see, I don't have any rows (no threshold has been set up).

Now let's create a new threshold that checks every 5 minutes for statements that have been running for more than 30 minutes:

CALL QSYS2.ADD_QUERY_THRESHOLD(
      THRESHOLD_NAME      => 'MAXTIME',
      THRESHOLD_TYPE      => 'ELAPSED TIME',
      THRESHOLD_VALUE     => 1800,
      DETECTION_FREQUENCY => 300,
      LONG_COMMENT        => 'Check queries running for 30 mins');

In addition, I want to create another threshold only for jobs of the user BIUSER in the subsystem BISBS; in this case I will monitor temporary storage:

CALL QSYS2.ADD_QUERY_THRESHOLD(
      THRESHOLD_NAME      => 'BIUSER',
      THRESHOLD_TYPE      => 'TEMPORARY STORAGE',
      THRESHOLD_VALUE     => 1024,
      DETECTION_FREQUENCY => 300,
      INCLUDE_USERS       => 'BIUSER',
      SUBSYSTEMS          => 'BISBS',
      LONG_COMMENT        => 'Check queries with tempstg > 1GB');

To check that everything is OK, let's run the previous query again; I expect to find two rows:
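For example, something like this (I'm assuming the view exposes the same column names as the ADD_QUERY_THRESHOLD parameters):

SELECT THRESHOLD_NAME, THRESHOLD_TYPE, THRESHOLD_VALUE, DETECTION_FREQUENCY
FROM QSYS2.QUERY_SUPERVISOR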

Pay attention to the DETECTION_FREQUENCY parameter: it sets the interval, in seconds, between two monitoring checks. You need to pick a value that catches critical conditions promptly without checking too often; here is a specific page about this parameter.

Now we have to build the program that will intercept and handle a threshold being reached. The IBM documentation presents some examples, and I will leave one below too. It simply e-mails me the details of the job that crossed the threshold:
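Mine looks roughly like this minimal CL sketch: it assumes the documented two-parameter interface (the QRYS0100 threshold information structure plus a binary return code), and the e-mail address is a placeholder:

PGM PARM(&QRYINFO &RTNCODE)

  /* &QRYINFO receives the QRYS0100 threshold information        */
  /* structure (job name, user, threshold name, and so on)       */
  DCL VAR(&QRYINFO) TYPE(*CHAR) LEN(2000)
  DCL VAR(&RTNCODE) TYPE(*INT) LEN(4)

  /* 0 = let the statement continue; see the exit point          */
  /* documentation for the value that terminates the statement   */
  CHGVAR VAR(&RTNCODE) VALUE(0)

  /* Notify me; a real program would parse &QRYINFO for details  */
  SNDSMTPEMM RCP(('me@example.com')) +
             SUBJECT('Query Supervisor threshold reached') +
             NOTE('An SQL statement crossed a monitored threshold.')
  MONMSG MSGID(CPF0000)

ENDPGM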

To complete everything, we only need to register this program in the exit point QIBM_QQQ_QRY_SUPER with this command:

ADDEXITPGM EXITPNT(QIBM_QQQ_QRY_SUPER) FORMAT(QRYS0100) PGMNBR(1) PGM(UTILITIES/QRYSUP) TEXT('Query supervisor EXITPGM')


Here you can find some documentation:

Andrea

Playing with SQL: translator function

Oh yes, playing with SQL on an IBM i system can be more and more fun. SQL on these systems can also help sysadmins (like me) who don't know RPG & co. perfectly to optimize day-by-day activities; that's because since 7.3 IBM has invested a lot in native SQL services, which nowadays cover a lot of different areas of system management.

Let's start with a small example to break the ice: a translator function.

Sometimes, in multilanguage environments, it can be really convenient to get information in your own language. IBM i, with secondary languages and their message files, will do that for you only for operating system menus and commands; for applications you need some middleware, unless you want to translate every piece of data yourself. In this short example, you will see how easy it is to create your own SQL service that helps with that.

Firstly, in this example I will use Azure's public Translator API. When I built this solution at work I chose this service because it's easy to use and because it has a generous free tier: you can translate 2,000,000 characters a month. Here you can find the FAQ page. Obviously, in order to use this API your partition must be able to reach the internet. Another thing to consider is trusting some public CAs via DCM, because without that you will get SQL errors on the HTTPS calls.

At this point, we only need to play with SQL, have fun!


So let's try the QSYS2.HTTP_POST function in your SQL IDE (ACS, VS Code, DataGrip, etc.) to test your credentials and see how this API works:

VALUES QSYS2.HTTP_POST(
  'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=it&to=en',
  '[{"text":"proviamo a tradurre qualcosa con questa API"}]',
  '{"headers":{"Content-Type":"application/json","Ocp-Apim-Subscription-Key":"SUBSCRIPTION-KEY","Ocp-Apim-Subscription-Region":"REGION"}}')

As you can see, this API gives me back a JSON string with the translation.

To avoid overcomplicating the example, we will assume that we pass our API only one string to translate, even though, as you can see in the example below, it can translate as many strings as you want:
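The request body is simply a JSON array, so a payload with two of my own sample strings would look like this:

[{"text":"prima frase da tradurre"},{"text":"seconda frase da tradurre"}]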

If we know that we have only one string, it's quite easy to manipulate this JSON; to do that we will use JSON_TABLE. This table function converts a JSON object (or array) into a relational database table, so you only need to pass it the JSON object and the corresponding path:

SELECT *
FROM JSON_TABLE(
       QSYS2.HTTP_POST(
         'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=it&to=en',
         '[{"text":"proviamo a tradurre qualcosa con questa API"}]',
         '{"headers":{"Content-Type":"application/json","Ocp-Apim-Subscription-Key":"SUBSCRIPTION-KEY","Ocp-Apim-Subscription-Region":"REGION"}}'),
       'lax $'
       COLUMNS(dataout VARCHAR(3000) PATH 'lax $[0].translations[0].text'))

I know my example is not difficult: my JSON object doesn't have many attributes, so we only get back the string in the target language (in my example, DATAOUT varchar(3000)).

Now we can easily proceed to create our function. In my example it takes three parameters: the source language, the target language and the text to translate. Note that, unlike the test above, here the URL and the request body are built from the parameters:

CREATE OR REPLACE FUNCTION sqltools.translate (lanin CHAR(2), lanout CHAR(2), textin VARCHAR(10000))
    RETURNS VARCHAR(10000)
    LANGUAGE SQL
    BEGIN
        DECLARE txtout VARCHAR(10000);
        SELECT dataout
          INTO txtout
          FROM JSON_TABLE(
                 QSYS2.HTTP_POST(
                   -- build the URL from the source and target languages
                   'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from='
                     CONCAT lanin CONCAT '&to=' CONCAT lanout,
                   -- JSON_OBJECT escapes any special characters in the text
                   '[' CONCAT JSON_OBJECT(KEY 'text' VALUE textin) CONCAT ']',
                   '{"headers":{"Content-Type":"application/json","Ocp-Apim-Subscription-Key":"SUBSCRIPTION-KEY","Ocp-Apim-Subscription-Region":"REGION"}}'),
                 'lax $'
                 COLUMNS(dataout VARCHAR(3000) PATH 'lax $[0].translations[0].text'));
        RETURN txtout;
    END;

To check that everything is OK, you can verify that a new object exists in your library, as in the screenshot:
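If you prefer SQL to a screenshot, you can also query the catalog (the schema and function names here match the CREATE statement above):

SELECT ROUTINE_SCHEMA, ROUTINE_NAME, ROUTINE_TYPE
FROM QSYS2.SYSROUTINES
WHERE ROUTINE_SCHEMA = 'SQLTOOLS' AND ROUTINE_NAME = 'TRANSLATE'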


Now let's try our function with a sample database (you can create one with this statement: CALL QSYS.CREATE_SQL_SAMPLE('DEMODB')):

SELECT DEPTNAME, sqltools.translate('en','it',DEPTNAME) AS DEPTIT
FROM demodb.DEPT

As you can see, the DEPTIT column contains the translation of the column DEPTNAME.


So, this is just one example of what you can do with SQL on IBM i; we will soon see some more complex examples of SQL usage, such as monitoring tools and so on.

And… what about you? Do you have or use any SQL functions that you have built? Please share your experience in the comments.

Andrea