Annual Checkup

By Rich Loeber

A few years ago, when I passed the age when I thought I might live forever and started maturing (a little), I decided that it would be a good idea to go see my doctor once a year for an annual checkup.  It was paid for by insurance and there was just no good reason not to go.  That first checkup (after many years of neglect, I might add) turned out OK.  The doctor told me a few things that I already knew (lose weight, get more exercise) and generally thought that I was doing OK.

After that first checkup, I got the annual appointment into my schedule and started going faithfully.  Then, after we moved up to the mountains of Northern New York, the doctor at our new home came back with a different response to my checkup.  He saw some things that didn’t look right and wanted to schedule some additional tests.  To make a long story short, he found a blocked cardiac artery and we were able to deal with it well before the onset of a heart attack.

What, you ask, does this have to do with computer security on your IBM i system?  Just this: you need to do a full system checkup at least once a year just to see if there are any surprises.  I have done dozens of these checkups over the years on systems under my responsibility and I ALWAYS find something that needs attention.  If you’re responsible for system security, you need to do this, and year end is a good time to be thinking about it.  Nobody gets much work done during the last couple of weeks of the year and it’s a good time to go tinkering around in your system.

So, what should you include in your checkup?  Here’s a list of things to start with.  It is by no means comprehensive but will probably get you started and lead you into the areas where you need to be concerned:

●    Check the security settings in your system values using the Print System Security Attributes [PRTSYSSECA] command and reconcile any differences between your system and the recommended settings.
●    List the user profiles on your system and check for employees who have left or changed their job assignment.
●    Create a database of your user profiles using the DSPUSRPRF command with the *OUTFILE option, then run a series of query reports to search for expired passwords, profiles with *ALLOBJ authority, and so on as appropriate for your installation.
●    Run the Security Wizard in IBM i Navigator (or Access Client Solutions) and check any differences between your system and the recommendations suggested.
●    Using the user profile database already created, list your user profiles by group to make sure that the groups are set up as you expect to see them.
●    Create a database of all *FILE objects on your system using the DSPOBJD command with the *OUTFILE option.  Then generate a report using your favorite query tool of new files created since your last audit and make sure that security on these new objects complies with established policies.
●    Run the Analyze Default Passwords [ANZDFTPWD] command to make sure that no default passwords exist on your system.
●    Check *FILE objects on your system with *PUBLIC access authority using the Print Publicly Authorized Objects [PRTPUBAUT] command.  Make sure that the objects with public access all comply with established policies.
●    Go to the SECTOOLS menu and see if any of the options available can be of specific help to your audit efforts.
●    Review your backup process and offsite storage arrangements.  Do a physical inspection of the offsite location and make sure you can quickly and easily identify and retrieve backup sets.
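
Several of the items above boil down to just a command or two.  As a sketch only, here is how the user profile database and a couple of the reports might be built.  The SECAUDIT library and MYAPPLIB names are examples; substitute your own:

CRTLIB     LIB(SECAUDIT)    /* work library for audit outfiles - name is an example */
PRTSYSSECA                  /* system security attributes report                    */
DSPUSRPRF  USRPRF(*ALL) TYPE(*BASIC) OUTPUT(*OUTFILE) +
             OUTFILE(SECAUDIT/USRPRFS)   /* user profile database                   */
ANZDFTPWD                   /* profiles still using default passwords               */
PRTPUBAUT  OBJTYPE(*FILE) LIB(MYAPPLIB)  /* publicly authorized files               */

From there, you can run ad hoc reports over the profile database with RUNQRY QRY(*NONE) QRYFILE((SECAUDIT/USRPRFS)) or with SQL.  Use DSPFFD on the outfile to see the field names available for selecting on special authorities, expired passwords and so on.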

Due to space constraints, this is not a comprehensive list but is intended to get you started on the audit process.  As you go through it, document both what you are doing and your findings.  That way, when next year end rolls around, you’ll be better prepared for the process and you’ll have a baseline to compare your results with.  Good luck, and I hope you don’t find any clogged arteries!

If you have any questions about this topic, you can reach me at rich at kisco.com and I’ll give it my best shot.  All email messages will be answered.

Snooping On Critical Files

By Rich Loeber

If you’re a regular reader, you’ll know that last time around, I presented a method where you can use IBM i OS object auditing to keep track of who is doing what with selected files and objects.  That’s a good way to keep a record of what’s happening with these critical resources on your system, but no matter how often you check results, it is always after the fact.  This time, I’ll give you a way to track and report on file access for read, change and/or delete with immediate, real time notification.  Using this method, you’ll know right away when someone is in the critical file you want to keep an eye on.  And, nobody will know about it except you, if you can keep quiet about something this cool.

To accomplish this trick, we’ll create a very simple trigger program and then associate it with the file you want to track.  Keeping the trigger program simple is a key to success for this method of object tracking control.  Keep in mind that every time the file is accessed for the method you choose (which you will see as you read on), the trigger program will be run by the system and it will run in line with the application that is accessing your file.  I’ll put in some cautions along the way to point out where this might be an issue.

A trigger program is nothing more than a standard IBM i OS *PGM object.  When it is associated with a *FILE, the OS will call the program according to the instructions you provide with the trigger registration.  Your trigger program must have two parameters: one for information about the event, and the other a simple two-byte binary number that gives you the length of the first parameter.  The first parm can be variable length, but for a simple application like this one, you can code it at a fixed length.  The variable length is there to support files with different record lengths, since the actual record contents are passed to the trigger program, but for our purposes we won’t be using that part of the information.  I code the first parameter with a length of 136 and the second parameter with a length of 2.  The first 30 positions of parameter 1 contain the file name, library name and member name, in that order.  Position 31 holds an indicator for the trigger event and position 32 holds the trigger time indicator.

You associate the trigger program with the file by using the Add Physical File Trigger (ADDPFTRG) command.  The parameters on this are fairly self-explanatory.  Since you don’t want to hold up production, use the *AFTER option for your trigger time setting.
The trigger event will indicate when you want the trigger to be called and the values are:

  • *INSERT – whenever a record is added to the file
  • *DELETE – whenever a record is deleted from the file
  • *UPDATE – whenever an existing record is changed in the file
  • *READ – whenever an existing record in the file is read

A word of warning about using the *READ option: it can generate a huge number of calls to your trigger program, and it is probably best to avoid using it.  If you want to track multiple events, you will have to register each one with a separate use of the ADDPFTRG command.
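
For example, to watch changes and deletes (but not reads) on a hypothetical file CRITFILE in library MYLIB, with a trigger program named FILEWATCH, the registrations might look like this:

ADDPFTRG   FILE(MYLIB/CRITFILE) TRGTIME(*AFTER) TRGEVENT(*UPDATE) +
             PGM(MYLIB/FILEWATCH)
ADDPFTRG   FILE(MYLIB/CRITFILE) TRGTIME(*AFTER) TRGEVENT(*DELETE) +
             PGM(MYLIB/FILEWATCH)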

When you’re all done with your tracking project, remember to clear your trigger file registrations.  This is done using the Remove Physical File Trigger (RMVPFTRG) command.  Just use the defaults once the file is specified and all of the trigger registrations can be removed at once.
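
Using the same hypothetical file, removal is a single command; the defaults on the trigger time and event parameters remove every registration for the file at once:

RMVPFTRG   FILE(MYLIB/CRITFILE)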

How you code your trigger program itself depends on what you want to find out.  If you’re looking for a specific user profile, then check for it.  If you’re looking for a specific time of day or day of the week, check for that.  When you’ve found something that qualifies, or if you just want to report on everything, use SNDMSG to send a message to your user profile (or a list of user profiles that you store in a data area) and you’re done.  If you use SNDDST to send an email notification, it is best to do so via SBMJOB so that application processing is not held up while the email is sent.
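
To illustrate, here is a bare-bones sketch of such a trigger program in CL.  The names (MYLIB, SECOFR1) are examples only, the qualification test is just a placeholder for your own rules, and the event codes noted are the single-character values the system places in position 31 of the buffer:

             PGM        PARM(&TRGBUF &TRGLEN)
             DCL        VAR(&TRGBUF) TYPE(*CHAR) LEN(136) /* trigger buffer         */
             DCL        VAR(&TRGLEN) TYPE(*CHAR) LEN(2)   /* buffer length, binary  */
             DCL        VAR(&FILE)   TYPE(*CHAR) LEN(10)
             DCL        VAR(&LIB)    TYPE(*CHAR) LEN(10)
             DCL        VAR(&EVENT)  TYPE(*CHAR) LEN(1)   /* 1=add 2=delete +
                                                             3=update 4=read        */
             DCL        VAR(&USER)   TYPE(*CHAR) LEN(10)
             CHGVAR     VAR(&FILE)  VALUE(%SST(&TRGBUF 1 10))
             CHGVAR     VAR(&LIB)   VALUE(%SST(&TRGBUF 11 10))
             CHGVAR     VAR(&EVENT) VALUE(%SST(&TRGBUF 31 1))
             RTVJOBA    USER(&USER)            /* who is touching the file          */
/* Add your own qualification tests here - specific users, time of day, etc.        */
             IF         COND(&USER *NE 'QSECOFR') THEN(DO)
                SNDMSG     MSG('File' *BCAT &FILE *BCAT 'in' *BCAT &LIB *BCAT +
                             'accessed by' *BCAT &USER) TOUSR(SECOFR1)
             ENDDO
             ENDPGM

Remember the caution above: this program runs in line with the application, so keep the logic this lean and push anything slow (like email) out through SBMJOB.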

To better explain this technique, I’ve written a simple CLLE program that can be used with any file and contains comments along the way to show different options that you might want to implement.  If you’d like a copy of this trigger file shell program for free, just let me know and I’ll send you a copy via Email.

If you don’t want to bother with coding your own solution, Kisco’s iFileAudit software has this capability built in along with a lot of other neat ways to keep track of what’s going on with your files.  It is available for a free trial on your system.

You can reach me at rich at kisco.com and I’ll do my best to answer.  All email messages will be answered.

Tracking Use On Critical Files/Objects

By Rich Loeber

Most shops have at least one, and probably more than one, mission-critical information asset stored on the IBM i system.  If you’re doing your job as security officer, that asset is locked up tight to make sure that only authorized user profiles can get to it.  But, do you know for a fact who is actually accessing that critical data?

Here is one way to review who is reading, and even who is changing, data on an individual object-by-object basis on your system: object auditing, a built-in feature of the IBM i OS.

For starters, you have to have Security Auditing active on your system.  You can do a quick double check for this using the Display Security Auditing (DSPSECAUD) command.  If security auditing is not active, you will need to get it up and active on your system.  That is a process for a different tip.  If you need help getting this started, send me an email (see below).

With security auditing active, you can set up access tracking on an object-by-object basis using the Change Object Auditing (CHGOBJAUD) command.  Depending on what your objective is, you can set the OBJAUD parameter to a number of values.  Check the HELP text for more information.  If you want to check everything, just set it to *ALL.  If you are only tracking usage for a limited time period, be sure to change this value back to *NONE when you’re finished, as this will reduce some system overhead.
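
For example, to track everything that happens to a hypothetical file CRITFILE in library MYLIB, and then to turn tracking off again afterwards:

CHGOBJAUD  OBJ(MYLIB/CRITFILE) OBJTYPE(*FILE) OBJAUD(*ALL)   /* start tracking */
/* ... after the tracking period ... */
CHGOBJAUD  OBJ(MYLIB/CRITFILE) OBJTYPE(*FILE) OBJAUD(*NONE)  /* stop tracking  */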

Once object auditing has been activated, the system will start adding entries to the system audit journal whenever any activity happens on the object you have activated.

To view the journal information, you use the Display Audit Journal Entries (DSPAUDJRNE) command.  The first parameter, ENTTYP, selects the specific information that you want to see.  Setting this value to ‘ZC’ will produce a listing of all of the times that the tracked object was changed.  If any applications are deleting the object, the report for value ‘DO’ will show those events.  Using the value of ‘ZR’ will produce a larger listing showing all of the times that the tracked object was read.  Depending on how your object is used, you might find that the ZR report is just too huge without filtering it down; read on.

The generated reports are simple Query listings.  The reports are generated from a file that the DSPAUDJRNE command creates in your QTEMP library.  The database file is named QASYxxJ4 where “xx” is the value you used on the ENTTYP parameter.  Once this database file has been created, you can use it to generate your own reports.  This way, you can slice and dice the data for your own unique needs.  For example, if you are looking for specific user profiles, you can add that as a selection criterion.  Or, if you want to analyze access by time-of-day or day-of-the-week, you can do that too.  The possibilities are quite open at this point.
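
As a sketch, the following pulls the read entries into QTEMP and opens them up for your own reporting.  The field names in the outfile vary by entry type and release, so check them with DSPFFD before building your query:

DSPAUDJRNE ENTTYP(ZR)                            /* builds QTEMP/QASYZRJ4 + report */
DSPFFD     FILE(QTEMP/QASYZRJ4)                  /* see the fields you can select  */
RUNQRY     QRY(*NONE) QRYFILE((QTEMP/QASYZRJ4))  /* quick ad hoc query over it     */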

I set this up on my test system to track accesses to an obscure data area that I was quite sure is only rarely used.  I set the tracking and left it for a few hours, then went back to it.  Even on this test system, I was surprised by the number of times the data area was used, and I’m the only user on the system!  Who knows what surprises you will turn up.

If you have any questions about anything in this tip, just ask me and I’ll give you my best shot.  My email address is rich at kisco.com.  All email will be answered.

Monitoring For Security Events

By Rich Loeber

Your system is in use by your user community all day long.  Depending on the size of your shop and the number of users, there could be hundreds or even thousands of security decisions being made by your security setup on a minute by minute, hour by hour, day by day basis.  If you’ve done your homework well, those security arrangements will all work to protect your data from being used incorrectly.

But, how do you know when a security violation has been made?

One way is to keep security auditing active on your system and run regular reports from the security audit journal.  In fact, that is a good practice to implement, but it is not going to give you quick feedback when a serious security violation occurs.

When a critical security violation happens, an error notice is posted to the system operator message queue (QSYSOPR).  The problem, however, is that LOADS of messages in most shops go to the system operator message queue and it is easy to lose one in the haze of all that activity.

To address this problem of the security messages getting lost in the system operator message queue, the IBM i OS has an alternate message queue capability set up.  Check your system to see if the QSYSMSG message queue exists in the QSYS library.  If you don’t see one, just create it using the CRTMSGQ command.
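
Creating it is a one-liner; the descriptive text is just a suggestion:

CRTMSGQ    MSGQ(QSYS/QSYSMSG) TEXT('Critical security event messages')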

Once the QSYSMSG message queue is on your system, all critical security related messages will also be posted to this message queue along with your system operator queue.  Now, all you need to do is make sure that you end up knowing when a message has been posted.

The quick and easy way is to log on to the system and run the following command:

CHGMSGQ MSGQ(QSYS/QSYSMSG) DLVRY(*BREAK)

Once this is done, whenever a message is posted to the QSYSMSG message queue, it will be displayed on your terminal session as a break message.

But, this approach has its problems.  First, it requires that you always be logged on; second, it limits the number of people who can monitor for security events to one.  A different solution is to create a little CL program to “watch” the message queue for you and then forward each message on to your user profile (or a series of user profiles) as it happens.
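
As a sketch of the idea (the MYLIB/SECUSERS data area and program name are examples; the data area is assumed to hold up to five 10-character profile names back to back), such a watcher could look like this:

             PGM
             DCL        VAR(&MSG)   TYPE(*CHAR) LEN(512)
             DCL        VAR(&USERS) TYPE(*CHAR) LEN(50)  /* 5 profiles, 10 chars each */
             DCL        VAR(&USER)  TYPE(*CHAR) LEN(10)
             DCL        VAR(&POS)   TYPE(*INT)
 LOOP:       RCVMSG     MSGQ(QSYS/QSYSMSG) WAIT(*MAX) MSG(&MSG) /* wait for next msg */
             RTVDTAARA  DTAARA(MYLIB/SECUSERS) RTNVAR(&USERS)   /* notify list       */
             CHGVAR     VAR(&POS) VALUE(1)
 NEXT:       CHGVAR     VAR(&USER) VALUE(%SST(&USERS &POS 10))
             IF         COND(&USER *NE ' ') THEN(SNDMSG MSG(&MSG) TOUSR(&USER))
             CHGVAR     VAR(&POS) VALUE(&POS + 10)
             IF         COND(&POS *LE 41) THEN(GOTO CMDLBL(NEXT))
             GOTO       CMDLBL(LOOP)
             ENDPGM

Since it sits and waits on the queue, run it in the background with something like SBMJOB CMD(CALL PGM(MYLIB/MSGWATCH)) JOB(MSGWATCH) rather than from your own session.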

This way, you and your security team can find out about security problems in real time and won’t have to wait for audit journal analysis to see that serious security violations are happening.

I have put together a simple little message monitor CL program that works with a set of up to 5 user profiles stored in a simple data area.  If you’re interested in getting a copy of this code, or if you have any questions about this tip, send me an email (rich at kisco.com).

An even better solution is to implement a flexible message queue monitoring software tool such as Kisco Information Systems’ iEventMonitor software.  This will add email and text notification for you and you can implement many of the other features to monitor your system.

If you have any questions about anything in this tip, just ask me and I’ll give you my best shot.  My email address is rich at kisco.com.  All email will be answered.