Last week I had the opportunity to deploy a new partition running Ubuntu on Power. This was not my first venture into this territory; I had previously deployed a few Linux machines, but none of them were production machines. This time is different: a real production partition, internal to my team, but production nonetheless.
Some of you may be wondering: why Ubuntu? The choice was actually quite simple, as I already use it regularly on another architecture; in the x86 world it runs several of my tools and services. Building on those skills, I decided to install a new instance on my favourite architecture, the Power world.
My setup…
My setup consists of an Ubuntu 24.04 partition installed on an S1022 server. The disks were presented via Fibre Channel from a Storwize V7300.
What I like…
The installation process is the classic one common to all Ubuntu versions, and it starts automatically once the image is loaded on the VIOS and the virtual optical device is selected as the boot device. The OS natively supports the network cards presented to it and, most importantly, it has native multipath support; no configuration on my part was needed. This, I must say, is a nice plus.
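If you want to double-check it, a quick look at the multipath topology is enough (device names and the number of paths will obviously differ on your system):

# show the multipath topology detected by the OS
sudo multipath -ll
# the resulting multipath devices end up under /dev/mapper
ls /dev/mapper/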
The only thing to keep in mind is the need to update the /etc/fstab file when disks change, for example after a storage migration or a clone of one of the disks.
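One way to soften this is to reference filesystems by UUID rather than by device name, so an entry keeps working even when the underlying device is renamed; a minimal sketch (device, UUID and mount point are purely illustrative):

# read the filesystem UUID from the multipath device
sudo blkid /dev/mapper/mpatha-part2
# /etc/fstab entry keyed on the UUID instead of the device path
UUID=0a1b2c3d-1234-4cde-9abc-0123456789ab  /data  ext4  defaults  0  2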
Another very interesting point, from my perspective, concerns performance: the system turns out to be very fast, taking advantage of all eight threads (SMT-8) made available by the processor.
What I’m still not sure about…
Here my experience can be considered satisfactory. I am aware that Ubuntu is a free operating system that can be installed on our Power servers without any problem, but which clearly does not entitle us in any way to open a support call with the various vendors. One point on which I am noticing improvements is undoubtedly the set of packages available in the various APT repositories. What I see is a continuous growth in the number of packages made available, which means there is increasing interest from developers in the platform, and for me that is only a positive. As I said, the number keeps growing, yet it is perhaps still too low compared to the x86 “little brother”. Unfortunately, too many packages are still missing that are useful at best and, at worst, actually required to run certain software. Once this point is fixed, adoption can certainly improve.
Another point, perhaps the most serious from my point of view, is the lack of support for the RMC connection. The RMC connection is established between the HMC console and the individual partition, and it is a prerequisite for changing the partition’s resources (such as CPU and RAM) dynamically, for adding new adapters to the partition and, finally, it is essential for the Live Partition Mobility mechanism. Without this type of connection it is not possible to perform these activities with the machine powered on, and that is a problem: in a scenario where you want systems to be always available, having to power off a system just to add RAM is very limiting. I found some packages from IBM, but they are very (maybe too) old, so they are not compatible with recent releases because of missing dependencies.
And you, what do you think about adopting Power systems with Linux OS for production workloads?
Many, many times, in customer reports or in the thousand calls we have on Teams, I still run into misunderstandings about the use of certificates on IBM i systems.
Let’s try today to provide clarity once and for all.
Speaking of certificates in the field of information security, we must distinguish between different types. The first type is certificates with private keys; generally these are used for authentication, both on the server side and on the client side (as in the case of SSH keys). In fact, they are the enablers of secure protocols such as FTPS, HTTPS, TELNETS and so on.
When using secure services, however, it is critical to be sure you are talking to the right peer; authenticity is one of the building blocks of cybersecurity. Hence the need for specific entities that are responsible for signing certificates, whether they belong to clients or to servers. These entities are called Certificate Authorities (CAs). We can say that CAs are themselves certificates, ones that enable systems to communicate securely with each other by vouching for the ownership of their child certificates.
Each operating system has its own certificate repository: my Mac, for example, has the Keychain, and IBM i systems have their own too, the DCM (Digital Certificate Manager).
The DCM GUI is available at http://yourip:2006/dcm. By default, certificates are stored in the *SYSTEM certificate store, but you can also define a custom one.
As you can see, even the DCM treats CA and client/server certificates differently because of their different nature. From this portal you can import and export certificates using easy download and upload features; this lets you upload a certificate directly from your browser, through a temporary path, without having to transfer it to your system first. In addition, you can use the DCM to create a certificate signing request (CSR); in this case the form asks you to fill in values such as the name and IP address, and don’t forget the subject alternative name, which is quite important for HTTPS services (honestly, I don’t know why it isn’t mandatory).
Sometimes it can be very helpful to import a CA quickly; maybe you need to use the HTTP functions in QSYS2 and you have to validate the server’s CA. With QMGTOOLS this is quite easy: from the MG menu use option 7 (EBIZ) and then option 1 (DCM/SSL).
Here you have a rich list of features: for instance, you can retrieve or import a certificate, check a certificate, or test an application using OPENSSL. In our case we will try to retrieve a certificate and import it into our CA store.
In our case, we are trying to import the CA that issued the certificate for google.com (the service listening on port 443). Keep in mind that this works only if the whole certificate path above this CA is already trusted; in the DCM you can find a useful list of well-known CAs that you can simply enable, and this list is automatically updated through PTFs.
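Once the CA is trusted in the *SYSTEM store, the QSYS2 HTTP functions should be able to call the service without certificate errors. A minimal sketch, using the same example as above:

-- simple HTTPS request with the SQL HTTP functions; it only succeeds if the
-- server's certificate chain is trusted by the system certificate store
VALUES QSYS2.HTTP_GET('https://www.google.com');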
And you, have you ever used DCM or GETSSL command in your environment?
In the last post we saw how to manage defective PTFs automatically using SQL. Today we will see how easy it is to check our current PTF levels directly against IBM’s servers.
Let me say that it is quite important to keep systems up to date, both in terms of release and of PTFs. This way you can use all the new features and SQL services and, last but not least, you get all the security patches needed to cover the vulnerabilities that come out day after day.
Let’s check our current PTFs group using GROUP_PTF_INFO view:
SELECT PTF_GROUP_NAME, PTF_GROUP_DESCRIPTION, PTF_GROUP_LEVEL, PTF_GROUP_STATUS
FROM QSYS2.GROUP_PTF_INFO
So, in my example I have some groups in NOT INSTALLED status, which means the system knows there are several PTFs that are not installed yet… In my case this is fine, because I have just ordered some PTFs using SNDPTFORD.
Now let’s compare my levels with IBM’s official levels using GROUP_PTF_CURRENCY, listing only the groups where the installed and available levels differ:
SELECT PTF_GROUP_ID, PTF_GROUP_TITLE, PTF_GROUP_LEVEL_INSTALLED, PTF_GROUP_LEVEL_AVAILABLE
FROM SYSTOOLS.GROUP_PTF_CURRENCY
WHERE PTF_GROUP_LEVEL_INSTALLED <> PTF_GROUP_LEVEL_AVAILABLE
Quite satisfying: my system is fairly up to date, and I only need to install the SECURITY and HIPER groups. Keep in mind that these two groups are the ones updated most frequently.
Now that we understand the SQL services we need, we can start creating a simple program that checks PTF currency and, if new PTFs are available, downloads them.
Here is the logic of the code: first, we count how many groups on the system are not current. If any are found, we permanently apply all the LIC PTFs, which is quite useful before installing a cumulative group. After that we create an IFS path that will receive all the ISO images and, at the end, we order all the PTF groups, creating an image catalog.
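The full RPG program is on the repository; what follows is just a simplified SQL sketch of those steps. The IFS directory, the group number and the SNDPTFORD delivery parameters are illustrative, so check them against the documentation for your release:

-- 1) how many PTF groups are behind the level published by IBM?
SELECT COUNT(*)
FROM SYSTOOLS.GROUP_PTF_CURRENCY
WHERE PTF_GROUP_LEVEL_INSTALLED <> PTF_GROUP_LEVEL_AVAILABLE;

-- 2) if the count is greater than zero: permanently apply the LIC PTFs,
--    create the IFS directory for the images and order the PTF groups
CALL QSYS2.QCMDEXC('APYPTF LICPGM(5770999) SELECT(*ALL) APY(*PERM)');
CALL QSYS2.QCMDEXC('MKDIR DIR(''/ptf/images'')');
CALL QSYS2.QCMDEXC('SNDPTFORD PTFID((SF99950)) DLVRYFMT(*IMAGE) IMGDIR(''/ptf/images'')');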
This is just one idea: you could also choose to order only specific groups, or to download save files instead of optical (BIN) images.
In this way you can automatically check for and download updates for your system. In this case too you need an Internet connection; without it you cannot query IBM’s servers. Another thing to consider is that, before running this program in batch, you need to register your contact information (CHGCNTINF command).
In this case as well, the source is available on my GitHub repo.
Several days ago I opened a ticket with IBM support for a problem affecting one of my production LPARs. The technician asked me to generate a System Snapshot and, once it was uploaded to the support website, an automatic agent warned me about a defective PTF installed on my partition. I also read the cover letter and, wow, that PTF could make my SAVSYS unusable for a restore.
We are IBM i systems engineers, we must not panic: let’s call on our SQL buddy and figure out how to ferret out the defective PTFs installed on our systems.
The first way is to use the QMGTOOLS utility, which you can install by following this page:
From the MG menu you can use option 24 (PTF MENU) and then option 3 (COMPARE DEFECTIVE PTFS FROM IBM). At that point your IBM i system connects to IBM’s servers and compares the list of installed PTFs with the official list of defective PTFs.
This is one possible way to do it, but honestly it is not my favourite, because it requires some manual work; at the very least you need to read the resulting spool file.
And here we come to my personal favourite: the DEFECTIVE_PTF_CURRENCY view in SYSTOOLS. This view is quite helpful because it gives you all the information you need about defective PTFs, such as the ID of the defective PTF, the licensed program product and the fixing PTF. Now let’s run this query on my system:
SELECT DEFECTIVE_PTF, PRODUCT_ID, APAR_ID,
       CASE WHEN FIXING_PTF IS NULL OR FIXING_PTF = 'UNKNOWN' THEN '' ELSE FIXING_PTF END AS FIXING_PTF
FROM SYSTOOLS.DEFECTIVE_PTF_CURRENCY
As you can see, in my example there are no rows, which means I have no defective PTFs installed. If you look at the query, I tested the value of the FIXING_PTF column: that is because we will now create a simple RPG program that automatically checks for defective PTFs, lists all the fixing PTFs, orders them and then sends an e-mail report.
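The complete RPG program is on the repository; here is only a simplified SQL sketch of the idea. The fixing PTF ID and the e-mail address are illustrative, and SNDSMTPEMM assumes SMTP is already configured on the partition:

-- list the fixing PTFs for any defective PTF found on the system
SELECT DEFECTIVE_PTF, PRODUCT_ID, FIXING_PTF
FROM SYSTOOLS.DEFECTIVE_PTF_CURRENCY
WHERE FIXING_PTF IS NOT NULL AND FIXING_PTF <> 'UNKNOWN';

-- for each row returned, the program orders the fix and then mails the report
CALL QSYS2.QCMDEXC('SNDPTFORD PTFID((SI12345))');
CALL QSYS2.QCMDEXC('SNDSMTPEMM RCP((''me@example.com'')) SUBJECT(''Defective PTF report'') NOTE(''See the attached list'')');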
As you can see, if any defective PTF is found, the system orders the fixing PTF and e-mails me the entire list of defective PTFs.
You can find this code on my Git repo. Keep in mind that your partition must be able to reach IBM’s servers both to get the list of defective PTFs and to order the fixing PTFs.
And you, how do you manage defective PTFs on your system?
A very important and often fascinating challenge (at least from my perspective) is finding a new way of reading, and then using, what are too often dismissed as legacy systems. Being heavily involved in research and development where I work, I get to take on this challenge partly out of personal ambition but, more importantly, to offer new solutions to new customer needs.
In this article/guide, we will see how easy it is to install and configure the PostgreSQL relational DBMS on AIX.
Why PostgreSQL? It is one of the most widely used open-source relational DBMSs in the world and, by some estimates, one of the fastest growing. There are several reasons to choose it: a rich support community, a natural propensity for scalability and security, support for non-standard data types that make it extremely convenient for users who are less experienced with databases and more development-oriented, and so on…
Why AIX? AIX is IBM’s Unix operating system, one of the most robust around, requiring very little downtime, ideal for enterprise contexts where production cannot stop. Like IBM i, AIX is also going through a very important step: opening up to the outside world and integrating new open-source technologies. There are many (even if not yet enough) packages that can be easily installed using the DNF package manager.
Unfortunately, PostgreSQL is not one of them, but let’s not get discouraged: it can still be installed fairly easily!
First, check the PostgreSQL documentation to make sure the minimum requirements are met. Let me say you won’t have a problem here, since all you need is at least AIX 6.1 (which is quite old :-D). In addition, you need GMAKE and GCC installed; if you don’t have them, you can easily install them via DNF. Also, bookmark this link, it gives you a lot of information about building PostgreSQL from source.
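If the build tools are missing, something like this should pull them in from the AIX Toolbox repository (package names may vary slightly between Toolbox levels):

# install the compiler, GNU make and wget from the AIX Toolbox
dnf install gcc make wget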
Getting the source code: choose which version of PostgreSQL you want to install, then download it directly from your system with the WGET command (installable via DNF); in my case I chose 16.6:
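(A sketch of the download step; the URL follows the standard layout of the official PostgreSQL source archive, so double-check it for the version you pick.)

# download the 16.6 source tarball
wget https://ftp.postgresql.org/pub/source/v16.6/postgresql-16.6.tar.gz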
Extracting the source code: now we have to extract the sources from the tar.gz archive. Use the GUNZIP command first, gunzip postgresql-XXXX.tar.gz, and then extract all the files with TAR: tar -xvf postgresql-XXXX.tar
Compiling the source code: now go into the extracted folder and run ./configure --without-icu; this will automatically check dependencies and so on.
Now you are ready to compile with make (or make all, which is equivalent) and, yes, as you can see, you are compiling your own PostgreSQL.
Installing: the last step is to copy all the binaries to the default installation path with make install, and now your installation is complete.
The final step is to configure the service account and create your first database:
# create the service account (mkuser is the AIX equivalent of Linux adduser)
mkuser postgres
# create the data directory and hand it over to the service account
mkdir -p /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
# switch to the service account
su - postgres
# initialize the database cluster
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
# start the server, logging to "logfile"
/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start
# create a test database and connect to it
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test
We are at the end of this tutorial; you have successfully installed your PG instance on AIX. Let me take a moment to talk about high availability. One of the best possible configurations is to use the PowerHA product for AIX to manage the instance as a resource in a cluster defined across multiple LPARs. In this case, the database and its configuration must reside on a shared disk that can be switched between the partitions. The final step is then to define simple scripts that start and stop the service when the resource group is activated, deactivated or simply moved from one node to another.
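As a minimal sketch (the paths are the ones used above; the PowerHA application controller definition itself is not shown), the start and stop scripts can simply wrap pg_ctl:

# start script, invoked when the resource group comes online
su - postgres -c "/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start"

# stop script, invoked when the resource group goes offline or moves to another node
su - postgres -c "/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data stop -m fast"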
And you, have you ever tried to install and use open-source apps on AIX? Let me know about it!