Snooping On Critical Files

By Rich Loeber

If you’re a regular reader, you’ll know that last time around, I presented a method where you can use IBM i/OS object auditing to keep track of who is doing what with selected files and objects.  That’s a good way to keep a record of what’s happening with these critical resources on your system, but no matter how often you check results, it is always after the fact.  This time, I’ll give you a way to track and report on file access for read, change and/or delete with immediate, real time notification.  Using this method, you’ll know right away when someone is in the critical file you want to keep an eye on.  And, nobody will know about it except you, if you can keep quiet about something this cool.

To accomplish this trick, we’ll create a very simple trigger program and then associate it with the file you want to track.  Keeping the trigger program simple is a key to success for this method of object tracking control.  Keep in mind that every time the file is accessed for the method you choose (which you will see as you read on), the trigger program will be run by the system and it will run in line with the application that is accessing your file.  I’ll put in some cautions along the way to point out where this might be an issue.

A trigger program is nothing more than a standard IBM i/OS *PGM object.  When it is associated with a *FILE, the OS will call the program according to the instructions you provide with the trigger file registration.  Your trigger program must have two parameters, one for information about the event and the other a simple two-byte binary number that gives you the length of the first parameter.  The first parm can be variable length, but for a simple application like this one, you can code it at a fixed length.  The variable length is there to support files with different record lengths, since the actual record contents are passed to the trigger program, but for our purposes, we won’t be using that part of the information.  I code the first parameter with a length of 136 and the second parameter with a length of 2.  The first 30 positions of parameter 1 contain the file name, library name and member name in that order.  Position 31 will have an indicator as to the trigger event and position 32 has the trigger time indicator.
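To make that layout concrete, here is a minimal CLLE sketch along those lines.  The program, variable and user profile names are all illustrative, and you should verify the buffer positions against the trigger buffer layout in the IBM documentation for your release:

```cl
/* TRGWATCH - minimal trigger program sketch (names illustrative).  */
PGM        PARM(&TRGBUF &TRGLEN)
  DCL      VAR(&TRGBUF) TYPE(*CHAR) LEN(136) /* trigger buffer      */
  DCL      VAR(&TRGLEN) TYPE(*CHAR) LEN(2)   /* binary buffer length*/
  DCL      VAR(&FILE)   TYPE(*CHAR) LEN(10)
  DCL      VAR(&LIB)    TYPE(*CHAR) LEN(10)
  DCL      VAR(&MBR)    TYPE(*CHAR) LEN(10)
  DCL      VAR(&EVENT)  TYPE(*CHAR) LEN(1)

  /* Positions 1-30 carry the file, library and member names;       */
  /* position 31 is the event ('1'=insert, '2'=delete, '3'=update,  */
  /* '4'=read) and position 32 is the trigger time.                 */
  CHGVAR   VAR(&FILE)  VALUE(%SST(&TRGBUF 1 10))
  CHGVAR   VAR(&LIB)   VALUE(%SST(&TRGBUF 11 10))
  CHGVAR   VAR(&MBR)   VALUE(%SST(&TRGBUF 21 10))
  CHGVAR   VAR(&EVENT) VALUE(%SST(&TRGBUF 31 1))

  /* Keep this light - it runs in line with the application.        */
  SNDMSG   MSG('Trigger event' *BCAT &EVENT *BCAT 'on file' +
               *BCAT &FILE) TOUSR(RICH)
ENDPGM
```

In a real version you would add your own qualifying tests (user profile, time of day) before the SNDMSG, as discussed below.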

You associate the trigger program with the file by using the Add Physical File Trigger (ADDPFTRG) command.  The parameters on this are fairly self explanatory.  Since you don’t want to hold up production, use the *AFTER option for your trigger time setting.
The trigger event will indicate when you want the trigger to be called and the values are:

  • *INSERT – whenever a record is added to the file
  • *DELETE – whenever a record is deleted from the file
  • *UPDATE – whenever an existing record is changed in the file
  • *READ – whenever an existing record in the file is read

A word of warning about the *READ option: it can generate a huge number of calls to your trigger program, and it is probably best to avoid it.  If you want to track multiple events, you will have to register each one with a separate use of the ADDPFTRG command.
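As an example, assuming a file PAYROLL in library PRODLIB and a trigger program TRGWATCH in MYLIB (all illustrative names), an after-delete registration might look like this, repeated with a different TRGEVENT value for each event you want to watch:

```cl
ADDPFTRG   FILE(PRODLIB/PAYROLL) TRGTIME(*AFTER) +
             TRGEVENT(*DELETE) PGM(MYLIB/TRGWATCH)
```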

When you’re all done with your tracking project, remember to clear your trigger file registrations.  This is done using the Remove Physical File Trigger (RMVPFTRG) command.  Just use the defaults once the file is specified, and all of the trigger registrations will be removed at once.
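With the same illustrative file name, the cleanup is a one-liner; taking the command defaults removes all trigger times and events in one shot:

```cl
RMVPFTRG   FILE(PRODLIB/PAYROLL)  /* defaults: TRGTIME(*ALL) TRGEVENT(*ALL) */
```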

How you code your trigger program itself depends on what you want to find out.  If you’re looking for a specific user profile, then check for it.  If you’re looking for a specific time of day or day of the week, check for that.  When you’ve found something that qualifies, or if you just want to report on everything, use the SNDMSG to send a message to your user profile (or a list of user profiles that you store in a data area) and you’re done.  If you use the SNDDST to send an email notification, it would be best to do this via a SBMJOB so that the application processing is not held up while you get this sent.
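A sketch of that submitted email notification; the job name, description text and address below are illustrative placeholders, not anything from a real setup:

```cl
SBMJOB     CMD(SNDDST TYPE(*LMSG) +
             TOINTNET(('security@example.com')) +
             DSTD('File access alert') +
             LONGMSG('Critical file was accessed')) +
             JOB(FILEALERT)
```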

To better explain this technique, I’ve written a simple CLLE program that can be used with any file and contains comments along the way to show different options that you might want to implement.  If you’d like a copy of this trigger file shell program for free, just let me know and I’ll send you a copy via Email.

If you don’t want to bother with coding your own solution, Kisco’s iFileAudit software has this capability built in along with a lot of other neat ways to keep track of what’s going on with your files.  It is available for a free trial on your system.

You can reach me at rich at; I’ll do my best to answer.  All email messages will be answered.

Tracking Use On Critical Files/Objects

By Rich Loeber

Most shops have at least one, and probably more than one, mission critical information asset stored on the IBM i system.  If you’re doing your job as security officer, that asset is locked up tight to make sure that only authorized user profiles can get to it.  But, do you know for a fact who is actually accessing that critical data?

Here is one way that you can review who is reading, and even who is changing, data on an individual object-by-object basis on your system: object auditing, a built-in feature of the IBM i OS.

For starters, you have to have Security Auditing active on your system.  You can do a quick double check for this using the Display Security Auditing (DSPSECAUD) command.  If security auditing is not active, you will need to get it up and running on your system.  That is a process for a different tip.  If you need help getting this started, send me an email (see below).

With security auditing active, you can set up access tracking on an object-by-object basis using the Change Object Auditing (CHGOBJAUD) command.  Depending on what your objective is, you can set the OBJAUD parameter to a number of values.  Check the HELP text for more information.  If you want to check everything, just set it to *ALL.  If you are only tracking usage for a limited time period, be sure to change this value back to *NONE when you’re finished as this will reduce some system overhead.
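For example, to watch everything that happens to an illustrative file PAYROLL in PRODLIB, and to turn tracking back off afterwards:

```cl
CHGOBJAUD  OBJ(PRODLIB/PAYROLL) OBJTYPE(*FILE) OBJAUD(*ALL)
/* ...and when the tracking period is over...                       */
CHGOBJAUD  OBJ(PRODLIB/PAYROLL) OBJTYPE(*FILE) OBJAUD(*NONE)
```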

Once object auditing has been activated, the system will start adding entries to the system audit journal whenever any activity happens on the object you have activated.

To view the journal information, you use the Display Audit Journal Entries (DSPAUDJRNE) command.  The first parameter, ENTTYP, selects the specific information that you want to see.  Setting this value to ‘ZC’ will produce a listing of all of the times that the tracked object was changed.  If any applications are deleting the object, running the report for value ‘DO’ will show those events.  Using the value of ‘ZR’ will produce a larger listing showing all of the times that the tracked object was read.  Depending on how your object is used, you might find that the ZR report is just too huge without filtering it down ... read on.
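For instance, to list the change entries, or to narrow the read entries down to a single user (BOB is an illustrative profile name):

```cl
DSPAUDJRNE ENTTYP(ZC)                /* all change entries          */
DSPAUDJRNE ENTTYP(ZR) USRPRF(BOB)    /* reads by one user profile   */
```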

The generated reports are simple Query listings.  The reports are generated from a file that the DSPAUDJRNE command creates in your QTEMP library.  The database file is named QASYxxJ4 where “xx” is the value you used on the ENTTYP parameter.  Once this database file has been created, you can use it to generate your own reports.  This way, you can slice and dice the data for your own unique needs.  For example, if you are looking for specific user profiles, you can add that as a selection criterion.  Or, if you want to analyze access by time-of-day or day-of-the-week, you can do that too.  The possibilities are quite open at this point.
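A sketch of working with that outfile for the ZR (read) entries; check the actual field names first, since they vary by entry type and OS release:

```cl
DSPAUDJRNE ENTTYP(ZR)                           /* builds QTEMP/QASYZRJ4 */
DSPFFD     FILE(QTEMP/QASYZRJ4)                 /* list the field names  */
RUNQRY     QRY(*NONE) QRYFILE((QTEMP/QASYZRJ4)) /* quick look at the data*/
```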

I set this up on my test system to track accesses to an obscure data area that I was quite sure was only rarely used.  I set the tracking and left it for a few hours, then went back to it.  Even on this test system, I was surprised by the number of times the data area was used, and I’m the only user on the system!  Who knows what surprises you will turn up.

If you have any questions about anything in this tip, just ask me and I’ll give you my best shot.  My email address is rich at  All email will be answered.

Monitoring For Security Events

By Rich Loeber

Your system is in use by your user community all day long.  Depending on the size of your shop and the number of users, there could be hundreds or even thousands of security decisions being made by your security setup on a minute by minute, hour by hour, day by day basis.  If you’ve done your homework well, those security arrangements will all work to protect your data from being used incorrectly.

But, how do you know when a security violation has been made?

One way is to keep security auditing active on your system and run regular reports from the security audit journal.  In fact, that is a good practice to implement, but it is not going to give you quick feedback when a serious security violation occurs.

When a critical security violation happens, an error notice is posted to the system operator message queue (QSYSOPR).  The problem, however, is that LOADS of messages in most shops go to the system operator message queue and it is easy to lose one in the haze of all that activity.

To address this problem of security messages getting lost in the system operator message queue, the IBM i OS has an alternate message queue capability.  Check your system to see if the QSYSMSG message queue exists in the QSYS library.  If you don’t see one, just create it using the CRTMSGQ command.
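If you do need to create it, something like this should do (the text is just descriptive):

```cl
CRTMSGQ    MSGQ(QSYS/QSYSMSG) TEXT('Critical system messages')
```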

Once the QSYSMSG message queue is on your system, all critical security related messages will also be posted to this message queue along with your system operator queue.  Now, all you need to do is make sure that you end up knowing when a message has been posted.

The quick and easy way is to log on to the system and run the following command:

    CHGMSGQ MSGQ(QSYS/QSYSMSG) DLVRY(*BREAK)

Once this is done, whenever a message is posted to the QSYSMSG message queue, it will be displayed on your terminal session as a break message.

But, this approach has a couple of problems.  It requires that you always be logged on, and it limits the number of people who can monitor for security events to one.  A different solution is to create a little CL program to “watch” the message queue for you and then forward the messages on to your user profile (or a series of user profiles) when they happen.
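A minimal sketch of such a watcher, submitted as a never-ending batch job.  The user profile RICH is an illustrative name, and a production version would pull its list of profiles from a data area as described below:

```cl
/* QSYSMSG watcher sketch - submit with SBMJOB and leave it running.*/
PGM
  DCL      VAR(&MSG) TYPE(*CHAR) LEN(512)
LOOP:
  /* Wait indefinitely for the next message, leaving it on the      */
  /* queue (as an old message) so the operator can still see it.    */
  RCVMSG   MSGQ(QSYS/QSYSMSG) WAIT(*MAX) MSG(&MSG) RMV(*NO)
  SNDMSG   MSG(&MSG) TOUSR(RICH)   /* RICH is illustrative          */
  GOTO     CMDLBL(LOOP)
ENDPGM
```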

This way, you and your security team can find out about security problems in real time and won’t have to wait for audit journal analysis to see that serious security violations are happening.

I have put together a simple little message monitor CL program that works with a set of up to 5 user profiles stored in a simple data area.  If you’re interested in getting a copy of this code, or if you have any questions about this tip, send me an email (rich at

An even better solution is to implement a flexible message queue monitoring software tool such as Kisco Information Systems’ iEventMonitor software.  This will add email and text notification for you and you can implement many of the other features to monitor your system.

If you have any questions about anything in this tip, just ask me and I’ll give you my best shot.  My email address is rich at  All email will be answered.

Terminal Session Security

By Rich Loeber

Like all modern systems, the IBM i requires a user profile and password before you can log on and use the system.  You might think that this simple requirement would always ensure that only authorized users will have access to your system.  But, with the proliferation of devices that can connect to the system, it is not always that simple.

In the old days, we used to have devices that are now called “dumb terminals”.  To use the system, you’d log on to the sign on screen and when you were done, you’d log off.  You could tell by looking at the screen whether the session was active or not.  If the signon screen was displayed, then the session was inactive.

Today, with a proliferation of PCs, tablets and cell phones and with easy access to Telnet based terminal emulation software, it is not always that clear.  On a PC using IBM i Access, the first time you log into the system for the day, there is an IBM i Access logon that establishes connection from the PC to your host system.  Then, there may or may not be another logon for your terminal session.  If you have your PC set up to bypass terminal sign on to the host, then there will be no second signon process.  Once your connection to the host system has been established, the only way to break it is to either log off from Windows altogether or reboot your system.

There are a couple of potential problems with this configuration.  It makes working with your system a lot easier, but so does leaving the keys in your car, and you wouldn’t want to do that on a regular basis.

If you are using bypass signon, once your initial connection has been established, anyone can come by and start up your terminal emulation session and gain access to your system without knowing either your user profile or your password.  If you’re a programmer or a systems administrator, that could be a significant exposure to your system as you will probably have very generous access rights to objects on your system.  If your PC is located in a public or semi-public setting, you should think twice about having this setup.

Another exposure, which can happen when you leave a terminal session active, is that anyone can come along and use the Client Access upload or download functions to gain access to your system, again without knowing your user profile or password.  If you have any virtual drives mapped to your host, those could also be compromised by someone using your PC without your knowledge or approval.

One simple solution is to activate your PC’s screen saver with a password requirement to unlock the keyboard when it goes into screen saver mode.  That way, if you go for coffee and get delayed by a dumb question from the boss, the screen saver will kick in and protect your system in your absence.  The problem comes with the user systems that you, as security officer, are responsible for.  Each user can probably reset their screen saver settings on their own, thereby defeating this important additional security measure.  A periodic inspection of all PCs installed in public and semi-public settings for these exposures would probably be a good idea.

Most terminal emulation software for use on tablets allows you to build in a macro for the signon process.  So, anyone picking up your tablet might be able to establish a connection to your system.  If tablets are available in public areas, then disabling the signon macro function would be a good idea.

If you have questions about any of the details in this tip, feel free to contact me directly by email (rich at

Is The Light On, but the Door Unlocked?

By Rich Loeber

IBM i owners regularly boast about the security built into their systems, and rightly so, but if you don’t implement and use the features, they’re not going to do anything for you.

I have mentioned before that I live in upstate New York, in the heart of the Adirondack Mountains.  In our neck of the woods (literally), security is not much of an issue for most people.  In fact, most of our neighbors never lock their homes or cars since theft is just not a problem.  At our house, we have extensive outdoor “security” lighting installed, and we use it whenever we go out at night.  We even have one light on a motion detector that comes on automatically in case we forget the other lighting.  But, even with the lighting on, we usually leave the door unlocked just because it is easier to get back in when we return home.  If we ever get ripped off, we shouldn’t be surprised as to how it happens.

I’m surprised, however, when I hear about and work with IBM i shops that have this same approach to computer security.  An alarming number of shops just do not pay attention to security issues and are surprised when a problem develops.  The IBM i OS provides robust security capabilities and tools, but too often they go unused just because it is easier without them.

I remember an IT director whose company I did some consulting work for.  I encouraged him to move up to security level 30 and implement object level controls on several mission critical files on their system.  He gave it a try and, without any planning, moved the security level from 20 to 30 and IPL’d their system.  When nobody could sign on except the security officer from the console, he backed the system down to level 20 and never tried it again.  It would still be running at level 20 today if the company had not gone out of business.

My company sells a number of security solutions for the IBM i market.  I am always amazed at the number of customers who buy our solutions and then never fully implement them.  Some of these, it turns out, purchased our software just to satisfy an audit recommendation or someone else’s concern.  For others, they probably just don’t have the time or the people resources to do the implementation correctly, so they shelve it or put it on the back burner.

The same is true for the shop that never bothers to set up the IBM i OS security.  They’ve made a significant investment in IBM i, but are not bothering to use what they’ve paid for.  Security is just as much of an investment as the computer hardware that it runs on.

You would probably never think of leaving the front door of the building open all night with the lights on.  By that same measure, you should not leave your system exposed to intentional or even accidental abuse when you have it within your grasp to correct the situation and you have all the tools to do so at your disposal.

If you’re reading this and see your own shop (or even yourself), don’t worry.  It’s not too late to do something.  Take an incremental approach and develop a plan.  Don’t rush into it, like my friend above, and do something you’ll regret, but don’t just sit there leaving your system exposed.  The important thing is to get started and stop putting this off or waiting for enough resources or budget support.

If you have questions about any of the details in this tip, feel free to contact me directly by email (rich at

Securing the Save/Restore Function

By Rich Loeber

On today’s open IBM i system, the save/restore function can be used to transfer critical files and programs between systems.  With this comes the specter of data and program theft, so it is important to make sure that this avenue of data transfer is secure.

Most IBM i installations run a scheduled save of their system to transfer files, programs and other objects to tape or other media for safe keeping.  Saving the system on a scheduled basis is just good practice since the catastrophic loss of a single disk unit can mean that you lose everything on your system.  Since its introduction, IBM has hounded its customers (and rightly so) to do regular saves of their systems.  Once objects have been saved, you can also use IBM i/OS functions to restore objects from the saved media.  This can be part of either a full system restore or a recovery of objects that get damaged or corrupted in any number of ways.

The problem, from a security viewpoint, is that once data and programs have been saved to offline media, they can then be transported to another system and restored.  Mission critical information must be guarded to prevent theft from occurring.  Physical security on the media is critical for this, but I want to talk more about securing the save/restore function on your system.

Today, with an open TCP/IP world in which we work, you can save a critical file to a save file and then FTP or SNDNETF it to any system in the world.  IBM i software developers regularly use the FTP save file to transfer program updates onto customer systems.  With the ease of data transfer that this provides, restricting the use of the save/restore functions on your system is more critical than ever.

The first line of defense for this is found in the user profile.  Before someone can use the Save/Restore commands in the IBM i/OS, they must have *SAVSYS special authority in their user profile.  If you have not done so, I recommend that you review your user profile base to find out which user profiles have *SAVSYS configured and make sure that they have a real business reason for it.  Certainly, your operator(s) will need this authority, as will anyone who runs the scheduled backup or restore functions.  But, I would be hard pressed to think of any other users in a regular environment (including programmers) who really need to have this ability.  I know some programmers are going to howl at this, but they will have to be able to make their business case before you give them these keys to the kingdom, so to speak.  You can always look at granting them this authority on an as-needed basis, revoking it once the task to be done has been completed.  Some shops even keep a special user profile around for this use.  When needed, they activate it temporarily and then deactivate it with full documentation kept on how it was used.
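One way to run that review, if I recall the security tools parameters correctly, is the Print User Profile report, which can select on a single special authority:

```cl
PRTUSRPRF  TYPE(*AUTINFO) SELECT(*SPCAUT) SPCAUT(*SAVSYS)
```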

The next place to look for restricting Save/Restore use on your system is the IBM i/OS authority setup for the RSTxxx and SAVxxx commands.  The RSTxxx commands are shipped from IBM with public *EXCLUDE, but the SAVxxx commands have a public setting of *USE.  You might want to consider setting up an authorization list for these commands and then listing the users that you want to be able to use them in the list.  Once the list is built, associate it to the commands and then change the SAVxxx commands to be public *EXCLUDE.  (You can also do this with direct authority, but having the authorization list will make IBM i/OS upgrades easier to implement.)
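A sketch of that setup for one command, with SAVCMDS as an illustrative list name; you would repeat the last two commands for each SAVxxx command you want to lock down:

```cl
CRTAUTL    AUTL(SAVCMDS) TEXT('Users allowed to save')
ADDAUTLE   AUTL(SAVCMDS) USER(QSYSOPR) AUT(*USE)
GRTOBJAUT  OBJ(QSYS/SAVLIB) OBJTYPE(*CMD) AUTL(SAVCMDS)
GRTOBJAUT  OBJ(QSYS/SAVLIB) OBJTYPE(*CMD) USER(*PUBLIC) AUT(*EXCLUDE)
```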

There are several system values that you should take a look at too.  The QALWOBJRST value lets you restrict certain objects at restore time.  These include system-state programs, programs that adopt authority and objects with validation errors.  QVFYOBJRST controls restoring signed objects.  QFRCCVNRST will force object recreation on certain objects at restore time.  Lastly, you can specify *SAVRST in the QAUDLVL system value to audit save and restore operations on your system.
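To review those settings, and to add *SAVRST to the audit level, something like the following should work; note that CHGSYSVAL replaces the whole QAUDLVL list, so include your existing values (the ones shown are illustrative):

```cl
WRKSYSVAL  SYSVAL(QALWOBJRST)
WRKSYSVAL  SYSVAL(QVFYOBJRST)
WRKSYSVAL  SYSVAL(QFRCCVNRST)
CHGSYSVAL  SYSVAL(QAUDLVL) VALUE('*AUTFAIL *SAVRST')
```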

If you have any questions about anything in this tip, just ask me and I’ll give you my best shot.  My email address is rich at  All email will be answered.

Hiding Places for Malicious Code

By Rich Loeber

The last time I wrote, it was about tracking down hidden programs on your system that you might not know about (see article).  That time, it was trigger programs that could be sitting on your system just waiting for a specific event.  But, as I’ve thought about this issue since then, there are other places where someone could “hide” a call to a malicious program and easily get overlooked.

This time, we’ll look at two other areas for concern.  These are the system job scheduler and exit programs.  Both are ways that someone intent on doing harm to your system could hide some malicious program waiting for something to happen so it can jump out and cause problems.  In each case, the IBM i OS contains a way to review the programs that are sitting there and you should take a look periodically to see how each is being used on your system.

The IBM i OS has had a nice, easy to use job scheduler built into it for a long time now.  Most shops where I’ve done consulting work seem to know about it and use it for regularly scheduled jobs.  But, that also means that the programming staff is aware of it and could misuse or abuse it.

To review the current contents of the system job scheduler, use the i/OS command Work with Job Schedule Entries (WRKJOBSCDE).  This command will display information about every job in the system job scheduler.  It will tell you what the job is, how it is invoked and when it is next scheduled to run.  You should review each entry to make sure that you know what it is doing and when it is next scheduled to run.  A suspicious job, to me, would be one that is not set to run for quite a while in the future.  Most scheduled jobs happen frequently, either on a daily, weekly or monthly basis.  If you see something on a different schedule than one of these, I’d pay particular attention.
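For a periodic review, a printed copy is handy to file with your audit notes; if memory serves, the command supports printed output:

```cl
WRKJOBSCDE JOB(*ALL) OUTPUT(*PRINT)
```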

Another place you need to periodically review is the set of registered exit point programs on your system.  Exit points are hooks into i/OS processes.  These are provided in the i/OS so customers can add their own customized processing called from the OS during normal operations.  For example, many of the third party network security products now available on the market (including our own SafeNet/i) use exit points to add security checking to the various network operations in i/OS.  The potential problem is that a rogue program could get registered to an exit point just waiting for a specific OS event to occur before it jumps up and gets noticed.

To review exit programs registered on your system, use the i/OS command Work with Registration Information (WRKREGINF).  This will display a list of the i/OS exit points on your system, and remember that they are different for every level of the OS.  For each exit point, use option 8 to see if there is a registered exit program for the exit point.  If you find any, make sure that you know what they are there for.  Don’t be surprised to find some exit programs already registered.  If you are using a network security system, you should find many programs registered.  Also, some come registered with i/OS.  For example, you will find that the IBM product Service Director uses some of the exit points, as does the i/OS Mail Server Framework (MSF).  Just make sure that you can identify each program that shows up as a result of your review.

If you have any questions about anything in this tip, just ask me and I’ll give you my best shot.  My email address is rich at  All email will be answered.

Can You Trust All Those Trigger Programs?

By Rich Loeber

If you’ve seen the movie “Troy”, or if you were paying attention in history when you were in school, you know that the Greeks brought down the city of Troy with the gift of a large wooden horse.  Of course, unbeknownst to the Trojans, the horse was filled with soldiers and as soon as things settled down in Troy, the soldiers broke out of the horse and took the city.

In these days of rampant computer viruses, worms and other malicious programs moving their way around the Internet, the concept of the Trojan Horse is alive and well.  But, you say, I’m sitting here working on my completely safe and secure IBM i system running the most secure operating system in town!  These things can’t affect me.

Think again.  A malicious program can still get written and installed on your system, hidden away waiting for the right event to come out and strike.  How, you ask?  As a trigger program.

When I heard about this, I was much like most of you, thinking that this just didn’t apply to me.  But then I ran the audit report that IBM includes in the IBM i OS.  Boy, was I surprised at the number of trigger programs installed on our closed development system.  I thought that the report would come out empty.  Lo and behold, I got an eleven page report with information on more than 110 trigger programs in place that I had no idea were there.  Granted, most of them appear to be parts of the IBM i OS, but I did find some application triggers that I did not know were there.  Fortunately, I found that none of these were malicious, but I had my doubts for a while since one of them was written by a programmer who left our employ under somewhat of a cloud.

The IBM i OS includes a command that lets you keep track of the trigger programs that are installed on your system.  The command lets you run a master list of all trigger programs and then, periodically, just list the trigger programs that are new or have changed.

To get started on understanding the trigger programs on your system, run the following command:

    PRTTRGPGM CHGRPTONLY(*NO)

This will produce a baseline report of all the trigger programs on your system.  Review the listing closely and make sure that you know what each of these programs does.  If you see a program that you suspect, track down the source code and make sure you know what it is about.  If it is from a third party software provider, get a statement from the software vendor that describes what the program is doing.  Since trigger programs react to events, they are good candidates for malicious actions just waiting for the right action to happen on your system.  Be aware that the command may take a long time to run; you might want to consider running it in batch.

Once you have your baseline report, you can then periodically run the same command just changing the CHGRPTONLY parameter to *YES.  This version of the report will list changes and new trigger programs on your system.

If you have any questions about anything in this tip, just ask me and I’ll give you my best shot.  My email address is rich at  All email will be answered.

Password Levels

By Rich Loeber

Ever since the introduction of IBM i/OS V5R1, a system value for “Password Level” has been available (QPWDLVL).  This value lets you have control over the kinds of passwords you use on your system and how the system treats them.  Using the features provided through this value, you can implement passwords of up to 128 characters in length.

Why would anyone ever want to have a password that long?  I asked myself that very question, but when I started looking into the issue, some things jumped out at me that make perfect sense.  With a long password, you can implement a “pass phrase” rather than a password.  The implementation of the long passwords allows for case sensitive passwords and will accept embedded blanks and every character on the keyboard.  This complexity in your password can easily increase the difficulty for people trying to break into your system.

The system value that controls this is QPWDLVL and it can have the following settings:

"0" – the default setting which sets up 10 character passwords and is what you are probably used to now if you’ve been working with the IBM i system for some time.

"1" – uses the same 10 character passwords, but the IBM i/OS NetServer passwords for Windows Network Neighborhood access are not kept on the system.  If your system does not communicate with any Win-X machines using Network Neighborhood, you might want to consider this.

"2" – allows you to have passwords of up to 128 characters that are comprised of any character on the keyboard, are case sensitive and may contain blanks (but not all blanks).

"3" – implements the same level "2" passwords but adds the restriction on Windows Network Neighborhood that level "1" includes.

If I were implementing a new system, I’d seriously consider adopting level "2" as a standard right from the get go.  But, most of you out there in IBMiLand have an embedded culture of 10 character passwords with specific rules in place that you have your users well trained for.  The good news is that you can move to a new password level as long as you do a little planning in advance.

Moving from level "0" to level "1" is pretty simple and does not require much planning.  This will simply eliminate the storage of another set of encrypted passwords on your system.  Moving from level "0" or "1" to a higher level should take some planning before you take the plunge.

One of the nice things is that whenever you create a new profile, the IBM i/OS creates the associated level "2" and level "3" passwords just in case you want to move to the higher password level.  So, the codes are already there on your system.  The possible downside is that embedded code and certain client software may not get along with the longer passwords.  Consequently, if you decide to make this change, you really should get a full backup of your current security profiles and passwords using the SAVSECDTA command.  This way, if things go south on you, you can recover back to where you are now quite easily.  You can use the DSPAUTUSR command to check your profiles for users with passwords that will not work at the higher levels.  There is a good, comprehensive discussion on how to move to a higher password level in the IBM i/OS manuals “Planning and setting up system security” and “Security Reference” that you should also take a close look at.
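A sketch of those preparation steps; the tape device name is illustrative, and the QPWDLVL change only takes effect at the next IPL:

```cl
SAVSECDTA  DEV(TAP01)                     /* fallback copy first     */
DSPAUTUSR  SEQ(*USRPRF) OUTPUT(*PRINT)    /* check for problem pwds  */
CHGSYSVAL  SYSVAL(QPWDLVL) VALUE('2')     /* effective at next IPL   */
```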

If you have any questions about anything in this tip, just ask me and I’ll give you my best shot.  My email address is rich at  All email will be answered.

Testing Resource Security

By Rich Loeber

Last month, I talked about the need to test your security setup on a regular periodic basis.  That article focused in on testing user profiles.  Today, I want to take a look at how you can go about testing your resource security setup.

There are two things that you need to test and evaluate on your system.  First, you have to make sure that users have sufficient authority to get all of their work done without a problem.  Once that has been established, you then need to go back and make sure that users don’t have too much authority, thereby compromising the confidentiality issues that prompted you to secure specific resources in the first place.

After publishing my previous tip about testing user profiles, I heard from one reader who offered an excellent suggestion.  In their shop, for user profile testing, they maintain a special user profile for each group on the system that is used just for testing purposes.  If you don’t have this set up on your system, I strongly recommend this approach.  Before testing, you can enable your test profile and then, as soon as you’re done with your testing, you can disable it again.  This idea applies both when testing profiles and when testing resources on your system.
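Assuming a hypothetical test profile named TESTPAY for a payroll group, the enable/disable cycle is just two commands:

```cl
/* Enable the group's test profile just before a testing session */
CHGUSRPRF USRPRF(TESTPAY) STATUS(*ENABLED)

/* ... sign on as TESTPAY and run through the tests ... */

/* Disable it again the moment testing is finished */
CHGUSRPRF USRPRF(TESTPAY) STATUS(*DISABLED)
```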

To test that a user profile has sufficient authority, you will have to log on with that profile or with your test profile for the group.  Make sure the right menu comes up and then try exercising various menu options.  Remember, resource security does not get checked until a file is opened, so just displaying menus is not going to get the testing done.  Keep track of the operations that you perform, as some of them may have to be reversed within the application files before you end your session.  Make sure that the person who owns the application knows about your testing so they can be on the lookout for any unusual transactions that come up in their system.  Your testing should verify that the user can add records where they need to create new data and delete records where they should be able to remove data.  If you come up with any security problems, note them, make adjustments to your resource security setup and then repeat the testing until it comes up clean.

If a user has access to batch processes, those will need to be tested as well.  Great care must be taken in this area as some batch processes are not easily undone in a production environment.  You might consider setting up a test environment for these purposes.  When running batch testing, review the system operator message queue and the system history log for security error messages.  These messages will be in the 2200 and 4A00 ranges for CPF, CPI, CPC and CPD errors.
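One plausible way to sweep for those messages after a batch test run is to check the operator message queue and filter the history log.  In DSPLOG, trailing zeros in a message ID act as a generic match, so the IDs below are one way to catch the 2200 and 4A00 ranges:

```cl
/* Check the operator message queue for authority failures */
DSPMSG MSGQ(QSYSOPR)

/* Scan the history log for security-related messages; CPF2200 is a
   generic ID that matches the CPF22xx range, and so on              */
DSPLOG LOG(QHST) MSGID(CPF2200 CPF4A00 CPI2200) OUTPUT(*PRINT)
```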

Testing for too much authority is also very important and probably a little more fun in the process.  After all, you have to have a little fun while you’re working and pretending to be a hacker is great.

While you are signed on under the profile being tested, check some of the following items:

  • Can you use menu options to gain access to a menu where you don’t belong?
  • Do you have access to the command line?
  • Are you able to key in and run CL commands?
  • Can you use the CPYF command to create a printout of a data file that you are not authorized for?
  • Are you able to run a query tool on your system to get to files that you are not authorized for?

If you are checking resource security for a specific application, you should also log on with a typical profile that should NOT have access to that application and then repeat the above checks.  You should specifically be looking to make sure that access to critical and confidential files is denied to users who should not have access.  This is particularly important as it applies to query tools since they can, by virtue of adopted program authority, thwart your resource security arrangements.
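As one concrete instance of these checks, the CPYF test might look like the sketch below (the file and library names are hypothetical).  If your resource security is set up correctly, the command should fail with a “not authorized” message instead of producing a listing:

```cl
/* While signed on as the test profile, attempt to print a file this
   user should NOT be authorized to read                             */
CPYF FROMFILE(PAYLIB/PAYMAST) TOFILE(*PRINT)
/* Expect an authority failure (a CPF22xx-range message) here if
   resource security is holding                                      */
```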

If you have any questions about anything in this tip, just ask me and I’ll give you my best shot.  My email address is rich at  All email will be answered.