The Case For Implementing Exit Points

By Rich Loeber

Someone recently asked me if there was someplace on the Internet where they could see a case made for implementing exit points on their IBM i system.  I was at a loss for a comprehensive source and this got me thinking that it might be a good idea to just create one here.

Security exit points on the IBM i (and its predecessor OS/400) have been in existence since the mid-1990s.  When the system was opened up to network access, the need for additional security over and above the standard IBM i OS security was apparent.  IBM's solution was to let their customers solve the issues on their own by giving them access to specific decision points in the various network server functions that were being rolled out.  Server functions were being added to the IBM i OS to support network access to the system, like FTP, ODBC, SQL, mapped drives in the IFS, file upload and download, remote command calls and a lot more.  Since that time, even more network functions have been added along with related new exit points.

To be fair and above board, I must also disclose here that my company, Kisco Information Systems, jumped on the exit point bandwagon right away when the exit points were initially rolled out.  Since 1996 we have been selling a comprehensive general use exit point solution called SafeNet/i, now in its 11th release.

The question I was asked was “Why does my shop need to implement exit point controls?”  That is what I want to address here.  I will do so by describing several cases where additional security is needed over and above the already excellent security features that are built into the IBM i OS.

Case #1:

The classic case for exit point implementation comes from the 5250 terminal application days.  If you have a payroll application that runs on your IBM i and is maintained by one or more clerks, OS security has to give those clerks access to the payroll files, but the application and terminal menu system can easily be used to restrict what operations they can do on the payroll master files.  That access will probably grant them *CHANGE rights so they can update files and generate payroll checks and reports.

The above scenario is secure from an application perspective, but you would never want your payroll clerk to be able to download the payroll master files and take them home on a USB drive, would you?  An exit point implementation can prevent this access.  The exit point process runs on top of the IBM i OS and can be used to restrict server functions by user profile, source IP address and even by the objects accessed.  This leaves the IBM i OS security intact for the 5250 terminal application and also prevents unauthorized access via the network connection.

Case #2:

Many IBM i shops have one or more “regular users” defined with *ALLOBJ special authority in their user profiles.  This can happen for lots of reasons and, in many cases, it would take a very long time to correct.  I never recommend granting *ALLOBJ authority to regular users, but if your system has evolved with this issue, it cannot be fixed overnight.  In many cases, the application itself is providing the security.  The issue, however, is that these users literally have access to ALL OBJECTS on your system.  With network access to your system, one of these users could easily download sensitive data, including credit card information and customer identity information, hide it on a USB drive, and walk out the front door with nobody the wiser.

An exit point implementation can address this issue.  Using exit points, you can restrict object access by user profile even though the user is set up with *ALLOBJ.  In fact, object access can even be restricted for the QSECOFR security user profile.  This can help to protect your system from abuse by a user profile that has been granted more access rights than they really need.

Case #3:

Ever since the TCP/IP communications utility FTP was added to the IBM i OS, users have had a very easy to use network application that lets them interact with the IBM i system without using a 5250 interface.  The FTP user can browse objects on your system and upload or download them.  A talented FTP user can even execute IBM i commands through FTP.  For some shops, you may want a user to have these capabilities, but you wouldn’t want them granted on a broad basis.

Exit points can help with this too.  First, you can easily restrict which user profiles are allowed to use FTP.  Then, you can further restrict which FTP commands they are allowed to use, letting them do a PUT, for example, while disallowing a GET.  You can even give the user contextual access rights by only allowing an FTP connection from a known and trusted IP address, such as an internal IP address.  That way, even if the user’s credentials are compromised, the FTP connection will still have to be established from a trusted source.
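
The mechanics of writing an exit program are beyond the scope of this tip, but at a high level an exit program is registered against a server’s exit point with the Add Exit Program (ADDEXITPGM) command.  As a hedged sketch (MYLIB/FTPCHECK is a hypothetical program of your own, and you should verify the exit point and format names for your release with the Work with Registration Information (WRKREGINF) command), registering a program against the FTP server request validation exit point looks something like this:

ADDEXITPGM EXITPNT(QIBM_QTMF_SERVER_REQ) FORMAT(VLRQ0100) PGMNBR(1) PGM(MYLIB/FTPCHECK)

The registered program is then called for every FTP server request; it receives the requesting user profile, remote IP address and requested operation, and sets a return code that tells the server whether to allow or reject the request.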

To sum up:

These are just a few examples of why IBM i shops should consider exit point implementation for additional security on their IBM i systems.  There are literally dozens of additional scenarios that could be described, but these should get you started on making a case for exit points.  It is my belief that every IBM i shop should have some form of exit point controls in place in order to be secure.  If you are interested in jumping in and getting started, I can heartily recommend Kisco’s SafeNet/i software.

If you have questions about details of this tip, feel free to contact me directly by email: rich at kisco.com.

Securing Your Programmers

By Rich Loeber

Updated December 11, 2018

If you are like most IBM i security officers, you probably cut your teeth in the IT field by doing some programming.  It might even be that you still get involved in programming and your role as security officer is a part-time role.  So, you have an appreciation of the special problem that programmers pose for system security in your environment.  This article will take a look at this issue and offer some suggestions.

The problems are many.  First, programmers know how things are handled internally in your systems and know how to get around in the system.  If a programmer wants to get at some secured information, they probably have the know-how to do it.  Second, programmers have a regular need to access all of the data on your system in their testing role during project implementation.  Lastly, programmers tend to see security as a hindrance to getting their work done.  (I once knew a programmer who knew the OS internals on an S/370 I was working on and found a trick he could use to submit his program compiles so they would always go to the head of the line in the compilation job queue.  Nobody could figure out how he was getting so many compiles done while everyone else had to sit around and wait.)

Your responsibility as security officer is to create an environment for programmers that is secure yet lets them get their important work done effectively.  These are not always compatible goals.  Here are some ideas you can consider:

  • Even though they will tell you they need it, do not grant all special authorities to your programmers.  Only give them the special authorities that they need to get their work done.  Nobody except a security officer profile should have *ALLOBJ authority.
  • Set your programmers up in a group, but don’t associate them with the special QPGMR profile provided by IBM as that has some special qualities that you don’t want associated with your programmers.
  • Don’t let your programmers have direct access to your production libraries.  Set up test libraries and control the distribution of live data into these test libraries.
  • To create test data, set up a special copy program that adopts the necessary authority to create copies of production files in your test environment (see the sketch after this list).  Monitor the use of that program, including maintaining an internal log of when it is used and by whom.
  • Programmers often, like my friend from years ago, like to get their compiles right away by running them interactively.  This can wreak havoc on your system performance.  Consider changing the compile commands so that they will only run in batch.  Also, set up your programmers so that they default to the QPGMR subsystem and make sure that it is set to priority 30 so they aren’t stealing valuable CPU cycles from your production environment.  Consider restricting access to the CHGJOB command.
  • When you move an application from testing into production, review all of the data and program objects to make sure that programmer ownership has been removed and that the objects are now all owned by a profile that will be used to control production access.
  • Keep a separate set of source files for program source that is being worked on.  Do not give your programmers open access to the production version of the source code for a program they need to work on.  Move the source in and out of test mode in a controlled way and log when source members are moved in either direction.  You can do this from a special program that adopts the necessary authority to make the source member moves and logs use activity.
  • Don’t let your programmers have passwords that don’t expire.  Every programmer I’ve ever met has a favorite password that they just like to keep.  Don’t let them get away with that practice.  If your programmers don’t practice good password controls, how can you expect your end users to take this seriously?
  • If a programmer insists on *ALLOBJ and can make their case, consider adding security to their user profile by requiring a 2 Factor Authentication (2FA) logon protocol.  If you need a 2FA solution, take a look at Kisco’s i2Pass product.
  • Since IBM i OS 6.1, you can host a client LPAR on your system fairly easily.  Using this capability, you can create a separate partition on your system where the programmers can have full access while still restricting access in your production environment.  IBM publishes documentation on how to set this up.  If you are using data from the production environment for testing in your programmers’ partition, you will still need to guard the data.  But, if you only work with test data, then this is a good solution for you.
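
Here is a minimal sketch of the adopted authority copy program idea from the list above.  All of the names (PRODLIB, TOOLLIB, CPYTSTDTA and the COPYLOG message queue) are hypothetical, and a production version would need proper error handling; the point is simply that the program, not the programmer, carries the authority to the production data:

PGM PARM(&FILE &TOLIB)
DCL VAR(&FILE) TYPE(*CHAR) LEN(10)
DCL VAR(&TOLIB) TYPE(*CHAR) LEN(10)
DCL VAR(&MSG) TYPE(*CHAR) LEN(80)
/* Copy one production file into the requested test library */
CPYF FROMFILE(PRODLIB/&FILE) TOFILE(&TOLIB/&FILE) MBROPT(*REPLACE) CRTFILE(*YES)
/* Keep a simple log of who copied what */
CHGVAR VAR(&MSG) VALUE('Copied ' *CAT &FILE *TCAT ' to ' *CAT &TOLIB)
SNDMSG MSG(&MSG) TOMSGQ(TOOLLIB/COPYLOG)
ENDPGM

Compile it so that it adopts the authority of its owner, and make the owner a profile that is authorized to the production library:

CRTBNDCL PGM(TOOLLIB/CPYTSTDTA) SRCFILE(TOOLLIB/QCLSRC) USRPRF(*OWNER)

Your programmers then only need authority to call the program, not authority to the production files themselves.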

This list just scratches the surface.  If you have more ideas in this area, let me know so I can share them in a future tip.  You can reach me at rich at kisco.com and I’ll try to answer your questions.  All email messages will be answered.

Securing IBM i Socket Connections

By Rich Loeber

IBM i has included security exit points for adding protection to the server functions that work with connected systems.  These exit points have been included in the IBM i OS for years and have regularly been expanded to cover more and more network applications.  Using the exit points, you can add protection for FTP, ODBC, the File Server, remote command submissions and much more.  This has gone a long way towards making the IBM i a truly secure platform.

For many years, however, there was a whole group of applications that could run under the radar on the IBM i and not get picked up by the existing exit point traps.  These are socket level communication applications.  The good news is that starting with IBM i 7.2, IBM has added three new exit points to address this issue.

The three exit points added let you control Socket Accepts, Socket Connects and Socket Listens.  Applications that use socket connections can sometimes also use other network services, like FTP or remote commands, which existing exit points will cover; but some applications bypass all other network services and work directly with data at the socket level.  Having control over socket connections is critical to having a secure environment on your IBM i.

The TCP Accept exit point watches for systems trying to establish a connection to your system at the socket level.  Using the exit point, you can control which IP addresses to accept or reject, which user profiles are allowed to use the connection, and even which TCP/IP port numbers the connection can use.

The TCP Connect exit point does the same thing as Accept except that it is for outbound connections from your IBM i system to remote systems.  The same controls can be implemented, governing the IP address, user profile and port number.

The TCP Listen exit point runs on your IBM i and watches for incoming TCP Connects from remote systems.  A typical socket conversation starts with a Listen and then proceeds to the Connect and a subsequent Accept.  Following this sequence, socket communication can take place without any other network services being involved.
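
The three exit points are named QIBM_QSO_ACCEPT, QIBM_QSO_CONNECT and QIBM_QSO_LISTEN, and an exit program is attached to them with the Add Exit Program (ADDEXITPGM) command just like any other exit point.  As a hedged sketch (MYLIB/SOCKCHECK is a hypothetical program of your own, and you should take the exact format name from the Work with Registration Information (WRKREGINF) display on your release):

ADDEXITPGM EXITPNT(QIBM_QSO_ACCEPT) FORMAT(PRCS0100) PGMNBR(1) PGM(MYLIB/SOCKCHECK)

The program you register receives information about the requesting user, the addresses and port involved, and returns an indication of whether the operation should be allowed or rejected.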

One existing application that runs on the IBM i and uses these socket connections is the Apache HTTP server, which does not have any other exit point control available.  If you want to control who can use a browser to get at your system, this is a way to approach the issue.

Another area where this can help improve security on your system is by denying that initial contact with your system.  By denying the initial contact, a potential hacker is denied information about the system they are attempting to connect with.  For example, when you try to sign on to your system via FTP, before you even attempt to enter a user profile the FTP server sends back a greeting banner.
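
That banner typically looks something like this (MYSYSTEM stands in for your actual system name, so the exact text will vary):

220-QTCP at MYSYSTEM.
220 Connection will close if idle more than 5 minutes.
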
This clearly identifies your system as IBM i by virtue of the reference to the QTCP user profile.  A savvy hacker will know that and this information will inform their future attempts to gain access to your system.  Many of the other TCP network services on the IBM i return this kind of identifying information in response to connection requests even when an invalid user profile is used.

When you implement socket level controls on your IBM i, you can deny access to all but approved IP addresses and port numbers.  When you do this, that same FTP access attempt simply fails without a greeting of any kind.  Denying the connection at the socket level stops the session before any information is sent back to the requester.

Implementing the socket exit points on your system can help you achieve your security goals for the IBM i.  The good news is that you don’t need to code your own solution if you don’t have the time or the talent.  Kisco’s SafeNet/i software, in its recently announced Release 11, now includes general use implementations of all three of these exit points along with all other security exit points available in the IBM i OS.

If you have questions about details of this tip, feel free to contact me directly by email: rich at kisco.com.

More About Controlling Access to Spool Files

By Rich Loeber

In my last tip, I talked about controlling access to spool files through implementation of IBM i OS object authority at the output queue level.  In this tip, I’ll be taking a look at three additional parameters that are associated with IBM i output queues that can extend the level of control you have over sensitive reports on your system.

The three parameters in question are:

  • Display any file (DSPDTA)
  • Operator controlled (OPRCTL)
  • Authority to check (AUTCHK)

These three work to give you more control over access to spool files beyond what is available through object level controls on the output queue.

One thing to keep in mind is the proliferation of user profiles with special authority of *SPLCTL.  This is the equivalent of the evil *ALLOBJ authority, but as applied to spool files.  You should restrict granting of *SPLCTL to only those user profiles where it is absolutely required.  As you read on in this tip, remember that if a user profile has *SPLCTL authority, then they can cut through these restrictions as they will not apply (with one exception as noted).

“Display any file” (DSPDTA) is intended to protect the contents of a spool file by setting authority requirements.  There are three values available, *YES, *NO and *OWNER.  Each of these provides progressively increased levels of authority requirements to view, copy or send spool files in the output queue. *YES allows anyone with READ authority to work with files in the output queue. *NO restricts that to the owner, those with *CHANGE authority and those with *SPLCTL special authority. *OWNER further limits this to just the owner profile and any profile with *SPLCTL authority.

“Operator controlled” (OPRCTL) controls whether or not a user with *SPLCTL special authority is allowed open access to this output queue.  The default value on the Create Output Queue (CRTOUTQ) command in the IBM i OS is *YES which is why most output queues are open season for users with *SPLCTL authority.  Changing this value to *NO will force normal object authority rules to control access to the output queue.  If you have an output queue with sensitive information stored and you are concerned about *SPLCTL users gaining access, this is the key parameter value that can save the day for you.

“Authority to check” (AUTCHK) controls how users with *CHANGE authority to the output queue will be given access to change, delete or copy spool files in the queue.  When this is set to *OWNER, only the owner profile of the spool file can change or delete spool files.  Using the value of  *DTAAUT changes this control so that it looks at object level controls for the output queue.
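
Putting the three together, here is a hedged example that locks down a hypothetical payroll output queue (PAYLIB/PAYROLLQ is a made-up name) using all three parameters:

CHGOUTQ OUTQ(PAYLIB/PAYROLLQ) DSPDTA(*OWNER) OPRCTL(*NO) AUTCHK(*OWNER)

The same three parameters can also be set when a queue is first created with the Create Output Queue (CRTOUTQ) command.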

Using these parameters intelligently can give you much added control over how users access (or don’t access) spool files on your system.  Using them in combination can be a little confusing, but if you look in your IBM i OS Security Reference manual under the Work Management section on Securing Spool Files, you will find a full page chart for this set of parameters and how they can be used in combination to achieve your specific objectives.

If you have any specific questions about this topic, you can reach me at rich at kisco.com and I’ll try to answer your questions.  All email messages will be answered.

Controlling Access to Spool Files

By Rich Loeber

A nice feature of the IBM i OS lets you view the contents of spool files before they have been printed or distributed electronically.  For a lot of users, this saves time and paper and provides a lot of convenience.  But, not all spool files should be able to be viewed by every user.  This tip will take a look at some ways to control who is allowed to see which spool files on your system.

Print spool files are special objects on your system that are stored in the QSPL library.  You cannot control access at the spool file level on your IBM i system.  Access to the spool files must be controlled through the output queue that is associated with each spool file.

If you have sensitive or confidential information that is being stored in a print spool file, the best way to secure it is to create a special output queue (or set of output queues) that is secured to a known set of users.  Directing the spool file into the right output queue can sometimes be tricky.  IBM i generally checks the following sequence of things to direct the output from a job:

  • the printer file
  • job attributes
  • user profile
  • workstation device description
  • system print device (QPRTDEV) system value

To direct your sensitive output to the right output queue, your best bet is to specify it in the printer file being used by your application.
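
As a hedged example (the file and queue names here are hypothetical), you can point the printer file at the secure queue permanently, or override it just for the duration of a job:

CHGPRTF FILE(APPLIB/PAYCHECKS) OUTQ(PAYLIB/PAYROLLQ)
OVRPRTF FILE(PAYCHECKS) OUTQ(PAYLIB/PAYROLLQ)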

Output queues can be created in any library on your system.  Output queues for printer devices generally get created in the QUSRSYS library, but you are not limited by that.  To improve security, create your secure output queue in a separate library that has limited access at the library level.

Once the output queue has been created, you can then limit access to it using the Grant Object Authority (GRTOBJAUT) command and Edit Object Authority (EDTOBJAUT) command.  To specifically limit general access to the output queue, the PUBLIC setting for the *OUTQ object must be set to *EXCLUDE.  Then, in the individual user authorities, you can provide for access for specific user profiles or (better yet) group profiles.
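
As an illustration (PAYLIB, PAYROLLQ and the PAYGRP group profile are hypothetical names), the sequence might look something like this:

CRTOUTQ OUTQ(PAYLIB/PAYROLLQ) AUT(*EXCLUDE) TEXT('Secured payroll output')
GRTOBJAUT OBJ(PAYLIB/PAYROLLQ) OBJTYPE(*OUTQ) USER(PAYGRP) AUT(*CHANGE)

The AUT(*EXCLUDE) value on the create sets the *PUBLIC authority, and the grant then opens the queue up to just the payroll group.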

You should also note that user profiles with the special authority of *SPLCTL will be able to view and work with spool files regardless of their access control limitations to your secure output queue.  This is a form of all object authority, but only applied to spool files.  You should limit the number of profiles on your system that have the *SPLCTL authority in order to maintain the security of your sensitive output queues.

There is also a special setting on the output queue called the “Display Data” (DSPDTA) parameter that can be used to control viewing spool files.  When you set this to *NO, generally only the spool file owner profile can view the contents of the spool files in the output queue.  You can check the value of this parameter using the Work with Output Queue Description (WRKOUTQD) command to see how it is set up for your secure output queue.

There are some other intricacies that are covered in the IBM i security manual.

If you have any questions about this topic, you can reach me at rich at kisco.com and I’ll try to answer your questions.  All email messages will be answered.

Command Line Security – Part 2

By Rich Loeber

Last time, in part one of this two part series, we talked about limiting access to the use of the IBM i OS command line.  For some installations, however, you may have some good business reasons for providing command line access.  In part two, we’ll take a look at how you can restrict access to specific commands in IBM i OS.

Every command in the OS exists as an object on the system with object type *CMD.  The best way to control who can use a command is through IBM i object security.  Command objects can exist in any library.  The OS commands are generally all found in the QSYS library.  Command objects can be part of the IBM i OS and they can also be part of application programs installed on your system, either from your own homegrown applications or from software providers other than IBM.

Most OS commands are shipped from IBM with the public authority set to *USE.  This means that anyone on your system can run those commands.  To restrict a command, change the public authority to *EXCLUDE.  When you make this change, then only users with all object authority (generally a no-no in a security conscious installation) will be able to run the command.  Then, using either an authorization list or by granting specific user profile access, you can control who can run the command.

For example, suppose that you decide that you want to restrict the use of the Work with Output Queue (WRKOUTQ) command.  This is one of the commands that is shipped with public authority of *USE.  To change the public authority to *EXCLUDE, run the following Grant Object Authority (GRTOBJAUT) command:

GRTOBJAUT OBJ(QSYS/WRKOUTQ) OBJTYPE(*CMD) USER(*PUBLIC) AUT(*EXCLUDE)

Now, if you have a set of users that you specifically want to allow access to the command, you can grant them individual access using the following command format:

GRTOBJAUT OBJ(QSYS/WRKOUTQ) OBJTYPE(*CMD) USER(MYPROFILE) AUT(*USE)

The USER parameter can point to a specific user profile or to a group profile.  If you have implemented group profile security, this is the better way to approach this issue.
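
If you prefer the authorization list approach mentioned above, a hedged sketch of the same restriction would look like this (CMDUSERS is a hypothetical list name):

CRTAUTL AUTL(CMDUSERS) AUT(*EXCLUDE)
ADDAUTLE AUTL(CMDUSERS) USER(MYPROFILE) AUT(*USE)
GRTOBJAUT OBJ(QSYS/WRKOUTQ) OBJTYPE(*CMD) AUTL(CMDUSERS)

The advantage is that when the list of authorized users changes, you maintain the authorization list rather than touching the command object again.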

When setting up command security using this method, you can use wildcard characters for the object name in the Grant Object Authority command.  Using this method, you can update the public and private authority for many related commands all at the same time.  The IBM i OS Security Guide suggests controlling all of the commands that change device configurations as an example.  Using that example, the following command would do the trick:

GRTOBJAUT OBJ(QSYS/CHGDEV*) OBJTYPE(*CMD) USER(*PUBLIC) AUT(*EXCLUDE)

Your best approach is still to limit users’ ability to run commands directly from the command line.  But, if you absolutely have to allow it, then make sure that an inquisitive user doesn’t accidentally (or purposefully) run a command that you don’t want them running.

If you have any questions about this topic, you can reach me at rich at kisco.com and I’ll try to answer your questions.  All email messages will be answered.

Command Line Security – Part 1

By Rich Loeber

When a user on your IBM i system signs on to a terminal session, they will be presented with a command line.  Given enough security permissions, a user can do just about anything from that command line, if they are inquisitive enough.  This article will discuss several options for controlling what a user can, and more importantly what they cannot do, when they are presented with a command line.

Controlling use of the command line begins with the way each user profile is set up, specifically the “Limit capabilities” (LMTCPB) option.  This defines what, if any, controls the system will impose over use of the command line.  Unfortunately, many systems just use the default “*NO” setting for this value and that leaves the command line wide open for use (and abuse).

There are three possibilities for the LMTCPB parameter in the user profile:

  • *NO – means there are NO limits on the use of the command line.  In addition to processing commands from the command line, the user can also make certain changes to their user profile that you might not want them making.
  • *PARTIAL – this is a little better than the *NO option and it limits certain actions that the user can take at signon and from the command line, but they can still run commands.
  • *YES – this is the best option for most of your users.  The user cannot specify different parameters for menu and library from the signon screen and they cannot change the setup for their user profile.  The user also is not permitted to run any IBM i OS commands from the command line.
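
Setting the value is a one line change with the Change User Profile (CHGUSRPRF) command; SOMEUSER here is a hypothetical profile name:

CHGUSRPRF USRPRF(SOMEUSER) LMTCPB(*YES)

You can check how your existing profiles are set today by displaying them individually with DSPUSRPRF, or by dumping all profiles to an outfile and querying the limit capabilities field.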

“But,” your user says, “I need to be able to check my output reports using the WRKSPLF or WRKOUTQ command!”  This is a common issue in some shops, but setting the LMTCPB for the user profile to *NO or *PARTIAL is not the answer.  If a user needs to use a very limited set of IBM i OS commands, the best way to solve that issue is by creating menu options for them to use.  They can continue to run the commands from the menu option with no problem.

One thing to also be careful about is the starting menu that you present to your user.  Again, the default that comes from IBM is to give your users access to the IBM i OS “MAIN” menu in QSYS.  This menu can easily lead an inquisitive user to options and capabilities that you probably don’t want them seeing or using.  If you follow the menu options, you can easily get into areas where a user just does not belong.  So, make sure that you specify a starting menu that strictly limits where the user can go.  Spend some time testing your menu structures to make sure that they do not lead a user to capabilities that they should not be granted.

Next time around, in Part 2 of this article, I’ll take a look at how to effectively limit users’ use of commands in the IBM i OS when you absolutely have to let them have access to the command line.

If you have any questions about this topic, you can reach me at rich at kisco.com and I’ll try to answer your questions.  All email messages will be answered.

Changing Your Signon Screen – A Good Idea

By Rich Loeber

The classic IBM i signon screen has been around since forever.  I first saw it in 1988 when I took delivery of my first AS/400 system, a lowly B10.  In the old days, the appearance of the signon screen made no difference since the system was a closed system.  With the advent of networks, this situation changed dramatically.

Today, all IBM i systems are networked and users connect via that network connection.  The signon screen is presented to terminal emulation software throughout the network and even over the Internet for users that are accessing the system from remote locations.  Because of this, the standard signon screen content can be easily recognized by people with malicious intent and sniffed for user ID and password information.

Granted, for many users, this information is encrypted.  But, with the proliferation of open access protocols, there are many emulators that do not encrypt this information.  Examples of this are hand-held devices (tablets and phones) and the Telnet capabilities of Windows platforms.  For my own system, I access it when traveling via my Android smartphone and no encryption is taking place.

A second reason is that the classic signon screen presents a field that could provide a savvy user with a way to bypass your intended signon process sequence.  Next time you sign on using this screen, just type QCMD in the “Program/procedure” field and you will get a demonstration of what I mean.

For these reasons, it is probably a good idea to design your own signon screen, changing the standard terminology used to identify the User and Password fields and disabling the “Program/procedure” field.  Making the change is fairly easy, but you need to be careful and you need to test your new screen before rolling it out for general use.

IBM ships the source code for the standard signon screen in a source physical file named QAWTSSRC in library QSYS.  In this source file, you will find two sets of code for the two possible standard screens on your system, QDSIGNON and QDSIGNON2.  The first is used when you have standard 10 character passwords configured and the latter is used when you have set your system up for long (128 character) passwords/pass-phrases.  I recommend that you move the source that you want to use into a separate library, thereby preserving the original source in case you get in trouble.
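
As a minimal sketch of that step (SIGNLIB is a hypothetical library you would create for this purpose), copying the standard password version of the source would look something like this:

CRTSRCPF FILE(SIGNLIB/QAWTSSRC) TEXT('Modified signon display source')
CPYSRCF FROMFILE(QSYS/QAWTSSRC) TOFILE(SIGNLIB/QAWTSSRC) FROMMBR(QDSIGNON) TOMBR(QDSIGNON)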

Once you have the source moved into your own library, you can then use Screen Design Aid (SDA, PDM option #17) to make your changes.  When working on your screen, make sure that you observe the following:

  • Do not delete any of the input capable fields that are on the signon screen.
  • Do not change the sequence of any of the input capable fields.  You can move them around on the screen, but keep their sequence intact.
  • Do not change the characteristics, especially field lengths, for any of the input capable fields.
  • Do not attempt to use any DDS HELP capabilities for the signon screen.

Since one objective is to change the reference to “User” and “Password”, pick out suitable replacements for these and make sure to change the text for those areas.  I would suggest alternatives here, but that could just start a new default standard which would defeat the objective of this tip.

The second objective can be accomplished by removing the text field for the “Program/procedure” field and then changing the PROGRAM field so that it is non-display.  This will keep the integrity of the signon screen while preventing this field from being used.

When you are all done, compile the screen into a library other than QSYS.  To implement the new screen, you will need to update the subsystem description.  You can use the Change Subsystem Description (CHGSBSD) command; press the F10 key to display all parameters and you’ll find one that controls the signon screen in use.  Test your new screen in the QPGMR subsystem to make sure it works as desired before rolling it out to QINTER and other production subsystems.  I strongly recommend that you NOT use an alternate signon screen for your system console which is typically associated with the QCTL subsystem.
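
Assuming the modified source is in the hypothetical SIGNLIB library used above, the compile and subsystem change might look something like this; the new screen generally takes effect the next time the subsystem is started:

CRTDSPF FILE(SIGNLIB/QDSIGNON) SRCFILE(SIGNLIB/QAWTSSRC) SRCMBR(QDSIGNON)
CHGSBSD SBSD(QSYS/QPGMR) SGNDSPF(SIGNLIB/QDSIGNON)

Once you are satisfied with the testing in QPGMR, repeat the CHGSBSD for QINTER and your other interactive subsystems.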

If you have any questions about this topic, you can reach me at rich at kisco.com and I’ll give it my best shot.  All email messages will be answered.

Security and Performance Issues

By Rich Loeber

Normally, you would not think of system performance in terms of a security issue.  But, if someone with the right know-how is abusing privileges on your system, then it becomes a security issue.  This tip will help you to identify some performance issues that fall into this category.

A performance issue that has security implications can happen when someone with the right special authorities on their user profile abuses those and consumes excessive system resource in their own interest.  This can happen, for example, when programmers boost the execution priority for their jobs at the expense of interactive processing.  It can also happen when someone runs a batch job interactively, thereby bringing other interactive users to a crawl.  When this occurs, it is clearly a security issue as the user or users in question are abusing their assigned privileges.

Controlling the execution priority of a job is a function of the Job Priority.  This is set by the Job Description that is used for the job.  It can also be changed on the fly by someone with *JOBCTL special authority associated with their user profile.  If you see this happening, you might want to just remove *JOBCTL from their user profile.  Restricting access to the CHGJOB command can also help.  The CHGJOB command is shipped from IBM with public access set to *USE, so any user profile can use the command.  Restricting access could affect applications running on your system, so you should consider this change carefully.

To restrict access to the CHGJOB command, run the following command on your system:

GRTOBJAUT OBJ(CHGJOB) OBJTYPE(*CMD) USER(*PUBLIC) AUT(*EXCLUDE)

This will change the command so that only authorized user profiles can use it.  To add a user profile to those allowed access to this command, use the following command:

GRTOBJAUT OBJ(CHGJOB) OBJTYPE(*CMD) USER(MYUSRPRF) AUT(*USE)

This will allow the profile MYUSRPRF to use this command while excluding all others.  Of course, any user profile with All Object Authority (*ALLOBJ) will still have access, so that wrinkle also has to be allowed for.

Limiting access to command objects on your system is a good way to control who can do what.  Another command that you should consider for similar treatment is the Change Shared Storage Pool (CHGSHRPOOL) command.  This command can be used to control performance characteristics for jobs running on your system through the allocation of memory resources and processing time slices.

If you still have problems with performance issues preventing production from getting done efficiently, there may be a problem of users running batch jobs interactively.  If your applications are run from IBM i OS commands, you can change the commands so that they will not function when called in an interactive environment.  You can do this using the Change Command (CHGCMD) command, setting the ALLOW parameter to remove the *INTERACT, *IPGM and *IREXX options.
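
As a hedged example, using a hypothetical application command named RUNBILLING in library APPLIB, the idea is to leave only the batch oriented values in the ALLOW list:

CHGCMD CMD(APPLIB/RUNBILLING) ALLOW(*BATCH *BPGM *BMOD *BREXX *EXEC)

After this change, the command can still be submitted to batch and called from batch programs, but an attempt to run it from an interactive command line will fail.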

If you make changes to any IBM i OS commands, you should keep a list of the commands changed and the specific changes made.  Installing PTFs or OS upgrades from IBM could change them back, so you should keep your list with your IBM i OS documentation to serve as a reminder to check the commands following a PTF install or OS upgrade.

If you have any questions about this topic, you can reach me at rich at kisco.com and I’ll give it my best shot.  All email messages will be answered.

Annual Checkup

By Rich Loeber

A few years ago, when I passed the age when I thought I might live forever and started maturing (a little), I decided that it would be a good idea to go see my doctor once a year for an annual checkup.  It was paid for by insurance and there was just no good reason not to go.  That first checkup (after many years of neglect, I might add) turned out OK.  The doctor told me a few things that I already knew (lose weight, get more exercise) and generally thought that I was doing OK.

After that first checkup, I got the annual appointment into my schedule and started going faithfully.  Then, after we moved up to the mountains of Northern New York, the doctor at our new home came back with a different response to my checkup.  He saw some things that didn’t look right and wanted to schedule some additional tests.  To make a long story short, he found a blocked cardiac artery and we were able to deal with it well before the onset of a heart attack.

What, you ask, does this have to do with computer security on your IBM i system?  Just this: you need to do a full system checkup at least once a year just to see if there are any surprises.  I have done dozens of these checkups over the years on systems under my responsibility and I ALWAYS find something that needs attention.  If you’re responsible for system security, you need to do this, and year end is a good time to be thinking about it.  Nobody gets much work done during the last couple of weeks of the year and it’s a good time to go tinkering around in your system.

So, what should you include in your checkup?  Here’s a list of things to start with.  It is by no means comprehensive but will probably get you started and lead you into the areas where you need to be concerned:

●    Check the security settings in your system values using the Print System Security Attributes [PRTSYSSECA] command and reconcile differences on your system from the recommended settings.
●    List the user profiles on your system and check for employees who have left or changed their job assignment.
●    Create a database of your user profiles using the DSPUSRPRF command with the *OUTFILE option, then run a series of query reports to search for expired passwords, profiles with *ALLOBJ authority, and so on as appropriate for your installation (a sample of the commands follows this list).
●    Run the Security Wizard in IBM i Navigator (or Access Client Solutions) and check any differences on your system from the recommendations suggested.
●    Using the user profile database already created, list your user profiles by group to make sure that the groups are set up as you expect to see them.
●    Create a database of all *FILE objects on your system using the DSPOBJD command with the *OUTFILE option.  Then generate a report using your favorite query tool of new files created since your last audit and make sure that security on these new objects complies with established policies.
●    Run the Analyze Default Passwords [ANZDFTPWD] command to make sure that no default passwords exist on your system.
●    Check *FILE objects on your system with *PUBLIC access authority using the Print Publicly Auth Objects [PRTPUBAUT] command.  Make sure that the objects with public access all comply with established policies.
●    Go to the SECTOOLS menu and see if any of the options available can be of specific help to your audit efforts.
●    Review your backup process and offsite storage arrangements.  Do a physical inspection of the offsite location and make sure you can quickly and easily identify and retrieve backup sets.
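
As a small sketch of the outfile steps mentioned in the list above (AUDITLIB is a hypothetical library you would create to hold the audit data):

DSPUSRPRF USRPRF(*ALL) TYPE(*BASIC) OUTPUT(*OUTFILE) OUTFILE(AUDITLIB/USRPRFS)
DSPOBJD OBJ(*ALLUSR/*ALL) OBJTYPE(*FILE) OUTPUT(*OUTFILE) OUTFILE(AUDITLIB/ALLFILES)

You can then run your favorite query tool or SQL over these files to pick out profiles with *ALLOBJ special authority, files created since the last audit, and so on.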

Due to space constraints, this is not a comprehensive list but is intended to get you started on the audit process.  As you go through it, document both what you are doing and your findings.  That way, when next year end rolls around, you’ll be better prepared for the process and you’ll have a baseline to compare your results with.  Good luck, and I hope you don’t find any clogged arteries!

If you have any questions about this topic, you can reach me at rich at kisco.com and I’ll give it my best shot.  All email messages will be answered.