Python on IBM i

When you look into non-RPG languages for developing software on IBM i, you’re bound to find information about Java or PHP. It would appear that PHP was made popular on IBM i by Zend’s efforts to port their PHP server technology to the platform. IBM has also made languages such as Node.js, Ruby, and Python available. As a regular user of Python, I am very excited by this!

I initially tried to get Python working on PUB400, and while python3 was available, I was unable to locate the necessary Python libraries/files/eggs to run queries against files on my account. The required Python library, ibm_db_dbi, is not available, or at least I could not find it after searching for some time. However, I found that Zend has some free courses on open-source development on IBM i, and along with them you get an IBM i instance to log into to complete the course work.

Since I don’t have access to an IBM i system where I can go over installing the PTFs, I will write this assuming that the IBM i system already has the open-source PTFs and Python installed.

To start using Python, SSH into the UNIX environment on the IBM i and get ready to add a repository to yum:

yum-config-manager --add-repo http://public.dhe.ibm.com/software/ibmi/products/pase/rpms/repo

Next, install Python v3!

yum install python3
yum install python3-pip

This installs the python3 packages along with pip, the Python package manager (which will be needed for any serious Python development later). While you’re at it, go ahead and install the IBM packages needed to talk to the IBM i system database.

yum install python3-ibm_db
yum install python3-itoolkit

The first package, ibm_db, allows one to make connections to the database on the localhost system and run SQL queries against the tables (files). The second package, which I’m not going to cover, allows for communicating with the system to run commands.

To start accessing your database tables, the program below is a very simple example. Keep in mind that if you want to connect to the IBM i remotely, you need IBM’s Connect licensed product, and you’ll need to modify the connect() method arguments. If you just want to connect to the IBM i on the localhost, the code below works.

import ibm_db_dbi as db2

# Establish a connection to the DB2 system on localhost
db2_connection = db2.connect()

# create a cursor for working with the database
my_cursor = db2_connection.cursor()

# execute an SQL query
my_cursor.execute("...SQL QUERY HERE...")

# iterate over the results
for row in my_cursor:
    # do something with each output row
    print(row)

It’s as simple as that, just like connecting to a PostgreSQL or MySQL server from Python on an x86_64 server! When I get more time I’d like to experiment with building some REST APIs, and I’d like to experiment with Node.js on the system too.
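Because ibm_db_dbi implements Python’s standard DB-API, the same connect/cursor/iterate pattern carries over to any DB-API driver. Below is a minimal sketch of a parameterized query; the stdlib sqlite3 module stands in for ibm_db_dbi so the snippet runs anywhere, and the EMPLOYEE table is hypothetical:

```python
import sqlite3  # stand-in for ibm_db_dbi; both follow the Python DB-API

# On IBM i this line would instead be: db2_connection = db2.connect()
connection = sqlite3.connect(":memory:")
cursor = connection.cursor()

# Hypothetical EMPLOYEE table, for illustration only
cursor.execute("CREATE TABLE employee (id INTEGER, name TEXT, dept TEXT)")
cursor.executemany(
    "INSERT INTO employee VALUES (?, ?, ?)",
    [(1, "ALICE", "IT"), (2, "BOB", "HR"), (3, "CAROL", "IT")],
)

# Parameter markers (?) keep user input out of the SQL string itself
cursor.execute("SELECT name FROM employee WHERE dept = ? ORDER BY id", ("IT",))
it_names = [name for (name,) in cursor]
print(it_names)  # ['ALICE', 'CAROL']
```

ibm_db_dbi also uses ? parameter markers (the DB-API “qmark” style), so the execute() call would look the same against DB2 for i.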

OpenPOWER Systems Reference List

I’ve been reading about the OpenPOWER Summit that just wrapped up to try to catch up on developments with this platform. It looks like the OpenPOWER ecosystem is really starting to grow. I have been looking at some of the systems available and provided links below for reference.

Supposedly Inspur, Wistron, and Teamsun in China have, or are working on, OpenPOWER systems, but I could not find them on their websites. The same goes for Supermicro; I could not easily find their systems. The worst website hands down goes to Hitachi, where it’s impossible to find anything useful since the page is so full of marketing drivel. Apparently they develop HPC systems…

POWER Systems

Chips, Silicon, Components:

Viewing data set members in OMVS

I learned a neat trick today for viewing PDS members using the z/OS UNIX interface, also known as OMVS. I am going to assume a remote SSH connection to the z/OS system. From the UNIX prompt, input:

cat "//'username.source(cobolfun)'"

The entire path is enclosed in double quotes, the double forward-slashes mark the path as a z/OS DSN rather than an OMVS/UNIX path, and finally, the DSN is enclosed in single quotes so that the shell doesn’t get upset about the parentheses in the DSN name.
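The quoting convention is easy to get wrong, so here is a small helper that builds the path string (the function name and uppercasing are my own, purely to illustrate the format):

```python
def dsn_path(hlq: str, dataset: str, member: str) -> str:
    """Build the OMVS form of a PDS member path: //'HLQ.DATASET(MEMBER)'."""
    return f"//'{hlq.upper()}.{dataset.upper()}({member.upper()})'"

print(dsn_path("username", "source", "cobolfun"))
# //'USERNAME.SOURCE(COBOLFUN)'
```

In the shell, the result would still be wrapped in double quotes, e.g. cat "//'USERNAME.SOURCE(COBOLFUN)'".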

This got me thinking…

While I don’t have anything against editing program code in the ISPF editor, I’d prefer something more powerful, and I am just much more productive in vim. For JCL, ISPF is fine, but for longer COBOL programs I’d prefer vim. At some point I’d like to figure out whether there is a workflow where I can edit COBOL source code on my Linux workstation, push it to z/OS OMVS via scp, run a script to convert the ASCII to EBCDIC, and then copy the UNIX file to a DSN. Then, from my 3270 terminal, I would compile the code.
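The conversion step of that workflow can be sketched in Python. z/OS commonly uses the IBM-1047 code page; Python does not ship a cp1047 codec, so cp037, a close EBCDIC relative, stands in here for illustration:

```python
def ascii_to_ebcdic(text: str, codepage: str = "cp037") -> bytes:
    """Encode source text into an EBCDIC code page before copying it to a DSN."""
    return text.encode(codepage)

source = "IDENTIFICATION DIVISION.\n"
ebcdic = ascii_to_ebcdic(source)

# EBCDIC 'I' is 0xC9; decoding round-trips back to the original text
assert ebcdic[0] == 0xC9
assert ebcdic.decode("cp037") == source
```

On the z/OS side, the native iconv utility (e.g. converting from ISO8859-1 to IBM-1047) should do the same job without Python.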

And of course, eventually I’d like to see a git workflow. I’m sure the proper IBM answer is just to shell out money for Rational Developer for z/OS, and there is probably a plug-in that allows the text editor to act like vim. However, I’d love to see something more lightweight than a GUI program… I’m not a fan of GUIs.

TCP/IP Tools on z/OS with TSO

Coming from Linux, UNIX, or Windows and wondering how to deal with TCP/IP networks on z/OS? z/OS has all of the basic tools needed for troubleshooting and testing connectivity on TCP/IP networks. TSO/E offers an easy way to access these commands; drop out of ISPF and into a TSO/E session first. The examples below are very basic, but to get more details about command options, type the primary TSO/E command followed by a question mark, for example, “PING ?”.

Connectivity Test with Ping

To test if you have connectivity to a host, use the TSO command “PING” specifying the IP address and other options, such as the number of attempts (COUNT #) or message length (LENGTH #).

[Image: TSO PING output]

Routing Tables

To view the TCP/IP routing table, use the TSO command “NETSTAT ROUTE”, as shown below in the green font.

[Image: TSO NETSTAT ROUTE output]

The output is very similar to what one would expect on a UNIX, Linux or Windows system.

Tracing packet flow

The equivalent of the traceroute command is also available, but alas, it is not “tracert” like in Windows or “traceroute” in UNIX, but “tracerte”, as shown below.

[Image: TSO TRACERTE output]

State of Connections

The “NETSTAT ALLCON” command can be used to view the state of TCP/IP connections.

[Images: TSO NETSTAT ALLCON output]

Note the format: 192.86.32.178..23, where 23 is the local port and the local IP is 192.86.32.178. The external connection is to IP 115.135.9.72 on port 54977.
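The double-dot separator makes that notation easy to split mechanically; a small hypothetical helper:

```python
def split_netstat_addr(addr: str) -> tuple:
    """Split z/OS NETSTAT 'ip..port' notation into (ip, port)."""
    ip, port = addr.rsplit("..", 1)
    return ip, int(port)

print(split_netstat_addr("192.86.32.178..23"))    # ('192.86.32.178', 23)
print(split_netstat_addr("115.135.9.72..54977"))  # ('115.135.9.72', 54977)
```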

Network Adapters

Information about network adapters can be retrieved with “NETSTAT DEV”. Note that the adapter “DevName” matches the interface names shown in the routing table output.

[Images: TSO NETSTAT DEV output]

Conclusion

All of the TCP/IP tools needed for troubleshooting are readily available and easily accessible with TSO/E. Furthermore, the commands are similar to what one would find on other platforms, making it painless to work with TCP/IP on z/OS.

Tyan GT75: 1U OpenPOWER server

Just thought I’d share this link: Anandtech review of Tyan’s Habanero-based 1U OpenPOWER server. At the end of the day, the Xeon E5-2640v4 configuration outperformed the 8-core OpenPOWER configuration on most of the benchmarks. Anandtech’s take is that POWER is best for in-memory workloads. For the cost and power consumption figures alone, the fiscally responsible part of myself says to look away from the POWER architecture.

I also believe in a healthy ecosystem of offerings and competition, and the engineer in me likes to see different computer architectures existing and thriving. The past 10 years have seen Intel consolidate and dominate the market. SPARC is all but dead except for niche Oracle and Fujitsu customers, AMD seems content with the consumer market segment rather than pushing innovation, and ARM is still mostly in the embedded systems market. It will be interesting to see what direction SoftBank pulls the ARM platform, however. So I must say I support and wish the best for the OpenPOWER initiative. It will be interesting to see whether Inspur can make headway in China and Asia with the POWER platform.

Low-end IBM i system?

According to the Twitter buzz and The Register, IBM will be releasing a new version of the POWER S812 aimed at light workloads. The article links to the IBM announcements, and for convenience I have linked to the US-based announcement.

The system is a single-socket POWER8 model that will be capable of running IBM i on a single core. It looks like the system is capable of 4 cores, but only for AIX. I wonder if it is possible to run Linux on the other cores?

It will be interesting to see where this is priced in the end. I last looked at the S814, and the hardware alone was approximately $11k, with the IBM i license nearly another $10k. It still makes sense for the small developer to acquire cloud-based hosting for development requirements. But hardware is so much fun!

Update!

It looks like I found an answer with respect to the cores and running VMs. According to IT Jungle, the IBM i version of the S812 is not capable of running VIOS. I found the following quote interesting:

The Power8 chip in the machine has cores running at just a hair over 3 GHz – 3.026 GHz, if you want to be precise – and in the case of the IBM i model, the “Murano” dual chip module with two six-core Power8 chips in a single package has but one core actually working. These are no doubt chips that would have otherwise ended up in the garbage bin, but as I have been saying for a long time, many IBM i shops don’t need more than one core.

Is IBM using faulty chips that came out of the semiconductor fab? I suppose that is a good use for them, and hopefully there are no other issues with the chips. All very interesting. It will all depend on the price tag: the article mentions that the hardware is priced 20% below the S814, which is still not so cheap. It all really comes down to the IBM i licenses, though. I hope to hear more about the IBM i Express Edition!

MTM2016 – Well, I finished it!

master-the-mainframe-part-3

I finished the contest back in December, but am finally getting around to posting my “achievement” badge. The contest was really fun, I learned a lot, and I am getting more confident with z/OS. There is still much, much more to learn!

Highlights of the contest for me were working with data sets and disk management (Part 3, Challenge 8), and then the DB2 and SQL analytics exercise for the capstone (Part 3, Challenge 15).

The only part of the challenge that I disliked was working back and forth with hexadecimal, ASCII, and EBCDIC (Part 3, Challenge 3). No doubt something that one has to deal with, but it was quite tedious.

HMC in a Hypervisor: vHMC

I was reading some IBM documentation on HMCs recently and learned that a vHMC product is available. Essentially, it is an HMC image that runs on popular virtualization hosts like KVM or VMware. Immediately, this seems like the way to go versus a standalone HMC hardware product.

In this day and age every shop is likely to have some x86_64 hardware in their IT infrastructure, and it just makes sense to virtualize the HMC. The HMC is not really doing any heavy workloads, so it is an ideal candidate for virtualization. Furthermore, the HMC uses Ethernet and TCP/IP to manage the host, not a serial console, so there is no driving requirement to have a physical link to the box besides making sure the network routes data between the vHMC and the target server. Not having the bare-metal HMC also frees up space in a server rack and cuts down on power consumption.

I wondered, however, how much the vHMC costs. I was in luck: I found a blog post on Simply i that compares the costs of an HMC and a vHMC. In the end, the vHMC costs less than half as much as a bare-metal HMC! That is a significant savings, and it really makes the idea easier to sell to the accountants.

While I find the vHMC to be an exciting capability, I need to think things through a bit further and ponder the use cases for a bare-metal HMC. After the hefty capital outlay for POWER hardware, I certainly wouldn’t want to make another big investment just for an HMC.

Expanding the POWER ecosystem

I found an interesting blog post about the POWER8 platform that is worth reading. First of all, I have to concur with the author that the platform is not accessible to technology enthusiasts. The reality is that POWER systems are strictly an “enterprise” platform today. I would venture to guess that the majority of organizations acquiring Power servers do so because of AIX or IBM i, or perhaps because such organizations transitioned from those platforms at some point and went with Linux for POWER. But for anything else…well…there are Intel x86_64 machines.

Part of the success of the Intel platform is that software developers can write code and test on a wide range of systems, such as notebook computers, desktops, entry-level servers, and high-end enterprise servers. While these systems have varying configurations, an application written on a desktop PC is going to work on the enterprise-grade Xeon server. Also, techs with a home computer can learn all about hardware, Linux, virtualization, and many other technologies at home, continually developing and improving their skills. This helps to create a feedback loop in the workplace too.

For better or worse, this is not possible on the POWER platform. The best one can hope for is that the technologies and tools available for Intel platforms will also be available for POWER. Another issue is that for the longest time now, IBM has been the sole POWER vendor. Recently, through the OpenPOWER initiative, I have seen Tyan’s offerings based on the OpenPOWER platform, but I have been unable to find a reliable cost estimate for their baseline offering. It looks like Penguin Computing also has an OpenPOWER product line called Magna Servers. I would venture it is priced at or near IBM’s equivalent offerings. I must say, though, that it is nice to see another system vendor enter the market! [Update: Thomas Krenn AG also has an OpenPOWER offering.]

I personally would like to see a more entry-level Power machine aimed at open-source developers and home-office start-ups. A sub-$3000 machine would not only cover such a target group, but I am sure such a machine would also be popular in the corporate space as an entry-level option, or even for setting up clusters.

Perhaps this might be possible with OpenPower one day? As IBM rolls out Power9 and future Power platforms, perhaps Power8 might become more affordable to entry-level users?

Compiling on z/OS

The following are more notes for myself on terminology used in z/OS software development. While the software construction flow is very similar to that on Linux and Windows platforms, the terminology is somewhat different.

Compilation

A source module is a basic unit of code (COBOL, Java, etc.), and is usually a member of a PDS or PDSE data set. The source module is run through a compiler from a tool-set such as Enterprise COBOL for z/OS or the Java SDK. The process of compiling a source module results in the output of an object deck. Addresses are assigned by the compiler using displacement and relative addresses, and at this stage references to external objects are unresolved. The object deck also contains a control dictionary with information used to resolve references.

A new concept for me is that of the copybook, a source library containing source modules that can be pulled into a project at compile time. It is not a completely new idea; it is somewhat similar to including a library of Python code. Code from a copybook can be included in COBOL source code using the COPY statement.

IDENTIFICATION DIVISION.
. . .
COPY MYALGO.
. . .

The compiler will search for the copied code in libraries supplied when submitting the COBOL source code from a JCL build script:

//COBOL.SYSLIB DD DSN=AU00195.LIBCOBOL

Unlike a shared library or other binary construct, a copybook contains source code only. Contrast this with the subroutine, a fully constructed executable that takes arguments and returns a result, and can be called from other programs.
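A loose Python analogy of the distinction (illustration only, with made-up names): a copybook is textual inclusion, the copied source becomes part of the program before it is compiled, while a subroutine is a separately built callable invoked with arguments:

```python
# Copybook analogy: source text is pasted into the program before "compilation"
copybook = "BASE_RATE = 7\n"                     # like a member pulled in via COPY
program = copybook + "total = 100 + BASE_RATE\n"
namespace = {}
exec(compile(program, "<combined-source>", "exec"), namespace)
print(namespace["total"])  # 107

# Subroutine analogy: a separately constructed unit called with arguments
def add_rate(amount, rate=7):
    return amount + rate

print(add_rate(100))  # 107
```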

In SYS1.PROCLIB, a set of cataloged procedures, JCL scripts that support some common action, are provided to help with building software. For example, for simply compiling a COBOL source module, the IGYWC cataloged procedure can be used which in turn calls the compiler IGYCRCTL.

//COMPCOB   JOB
//COMPILE   EXEC IGYWC
//COBOL.SYSIN   DD *
IDENTIFICATION DIVISION.
. . .
/*
//SYSLIN DD DSNAME=. . .

For IGYWC, the cataloged procedure expects that the source module is provided by the COBOL.SYSIN DD statement in the JCL. The snippet above is a JCL script which calls the cataloged procedure to compile a COBOL source module. The “COBOL.SYSIN DD *” statement specifies that the COBOL source is actually inline in the JCL script, and the “/*” marks the end of the COBOL source code. The COBOL code could also reside in a separate data set (file):

//COBOL.SYSIN DD DSNAME=AU00195.COBOLSRC(TESTPGM1),DISP=SHR

Note that the output from the compiler, the object deck, is stored in a dataset specified in the SYSLIN DD statement in the JCL script.

Precompilation

Some programs have embedded SQL statements or mark-up, such as CICS commands in COBOL code, that make it easy for a programmer to work with SQL or CICS constructs. Before such source modules can be compiled, a precompiler must scan the source module and convert EXEC SQL and EXEC CICS statements into COBOL. Note that the cataloged procedure DFHEITVL precompiles, compiles, and link-edits a source module; refer to the cataloged procedure for explicit instructions on the JCL required to perform this operation. Also note that the SYSLMOD DD statement provides the location where the output is stored.

Output

Finally, the object deck output is identified by the SYSLIN DD statement. The output can be stored in a PDS, or even in a temporary PDS used only within the JCL build script, in which case the object deck is discarded when the batch job completes.