*********************NEWSLETTER FORTY-FIVE******************************
       MXG NEWSLETTER NUMBER FORTY-FIVE, August 25, 2004.               
Technical Newsletter for Users of MXG :  Merrill's Expanded Guide to CPE
                         TABLE OF CONTENTS                              
I.    MXG Software Version.                                             
II.   MXG Technical Notes                                               
 8. MXG 22.08 or later required for full support of SAS V9.1.2/V9.1.3.  
 7. MXG 22.04 or later is required to support all CICS/TS 2.3 SMF data. 
 6. MXG 22.06 is required for DB2 Parallel                              
 5. If you are using IRD, you must install MXG 22.02 or later:          
 4. Still OS/390?                                                       
 3. MXG Default Media is now changed to CD-ROM.                         
III.  MVS Technical Notes                                               
33. What are IFA's?                                                     
32. Where does IFA CPU time show up:                                    
IV.   DB2 Technical Notes.                                              
 1. DB2 Accounting Discoveries, Jun 5, 2004:                            
V.    IMS Technical Notes.                                              
VI.   SAS Technical Notes.                                              
 1.            Impact of BUFNO option for SAS data sets                 
VII.  CICS Technical Notes.                                             
VIII. Windows NT Technical Notes.                                       
IX.   Incompatibilities and Installation of MXG.                        
         See member CHANGES and member INSTALL.                         
X.    Online Documentation of MXG Software.                             
         See member DOCUMENT.                                           
XI. Changes Log                                                         
     Alphabetical list of important changes                             
     Highlights of Changes  - See Member CHANGES.                       
I.   MXG Software Version 22.09 is now available upon request.          
 1. Major enhancements added in MXG 22.09:                              
    See the CHANGES member of the MXG Source Library.
II.  MXG Technical Notes                                                
 8. MXG 22.08 or later required for full support of SAS V9.1.2/V9.1.3.  
    The good news:                                                      
    Most importantly, there are no data incompatibilities between V8 and
    V9.  Data libraries and catalogs built with V8 can be read and      
    written with SAS V9, and libraries and catalogs built with V9 can be
    read and written with SAS V8.                                       
    The bad news:                                                       
    MXG 22.08+ is required for safe execution with SAS V9.1.2 or V9.1.3.
    While MXG 22.07 had critical revisions for SAS 9.1.2, more design   
    changes were discovered in V9.1.2 that required more MXG changes.   
     Plus, using the PROC SYNCSORT add-on product caused fatal errors   
      in BUILDPDB, because INFORMAT names were truncated; see MXG
      Newsletter FORTY-FIVE, SAS Technical Note 12 for the error text.
     While SYNCSORT and SAS examined the error, MXG Change 22.192       
     removed all MXG INFORMAT $NOTRAN statements, so MXG 22.08+ will not
     fail even without the ultimate fix from PROC SYNCSORT for SAS V9.  
     - This error was only with the add-on product (it prints the text  
       "PROC SYNCSORT" on SAS log);                                     
      - There were no errors with Syncsort's SORT product itself.
     - SAS Institute has determined the error was in the SAS Toolkit and
       not actually in PROC SYNCSORT, see SAS Usage Note XX-yyyyyyy.    
     - A new PROC SYNCSORT patch will ultimately be required, after they
       get the revised SAS Toolkit; Syncsort ticket number # SR387805   
       refers to the problem.                                           
     Install MXG 22.08, use MXGSASV9 and CONFIGV9 from 22.08, and       
     Run UTILS2ER utility against all of your SAS programs to see       
     if any lines conflict with S2=72 option that replaced S2=0         
     option set by MXG previously.                                      
    Details on Changes related to SAS V9.1.2 and MXG execution:         
    a. CONFIGV9:                                                        
    NOTHREADS required for SAS V9.1.2, fixed in 9.1.3. Change 22.207.   
     SAS Note SN-12943 reports incorrect results, no error message:     
     PROC MEANS, SUMMARY, REPORT, TABULATE                              
     Only on "MVS", only if threading is used. V9 default is THREADS    
     While fixed in 9.1.3, I chose to force NOTHREADS in CONFIGV9.      
     Can use OPTIONS=THREADS with 9.1.3 on // EXEC to change.           
    b. CONFIGV9:                                                        
     NLSCOMPATMODE is required for MXG code to execute worldwide;       
     in V9, SAS changed their default to NONLSCOMPATMODE, but MXG       
     overrides that in CONFIGV9 to the NLSCOMPATMODE that's required.   
     Only on "MVS" (thus far!), doesn't fail if LOCALE is ENGLISH/blank 
     But with LOCALE=GERMAN_GERMANY or other non-blank, or non-ENGLISH  
       every @ symbol causes an error at compile time.                  
     Extensive discussion in text of Change 22.129 for NLS and LOCALE.  
     Once MXG has built its SAS datasets, these text-handling options   
     can be changed: in one case, exclamation marks and CRLF were not   
     produced in ods output with MXG's NLSCOMPATMODE, but changing it   
     back to SAS's NONLSCOMPATMODE default produced desired results.    
    c. CONFIGV9:                                                        
    S2=0 option now required; MXG previously used S2=72 in CONFIGxx.    
     Only on "MVS".  Extensive discussion in Change 22.123.             
     S2 sets line size of source read by %INCLUDE or AUTOCALL.          
     V9 MVS SASMACRO library was changed from RECFM=FB to RECFM=VB      
       -no standard for line size of SAS-provided %macro text           
       -new macros were written by ASCII folks, line length 255         
       -Rather than make the authors correct, RECFM changed to VB.      
     BUT: RECFM VB has entirely different meaning for S2 than FB.       
       S2=72  FB ==> read only first 72.  VB: ==> START IN 72!!!!       
     MXG had always specified S2=72 to protect you from line numbers    
       S2=0  ==> look at last 8 columns to see if line numbers exist    
     All MXG code is only 72 positions, so S2=0 is no-risk to MXG code. 
     BUT: If you have SAS code with mixed blanks and numbers in 73-80,  
          option S2=0 will cause your code to syntax error.             
     So: New MXG UTILS2ER utility will read all of your source libraries
          and identify any exposures in your SAS programs.              
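     The actual UTILS2ER utility is SAS code in the MXG library; as a
     rough illustration only (the function name and logic below are my
     own, not MXG's), the exposure it looks for can be sketched in
     Python as "any source line with non-blank content in columns
     73-80", since those are the positions the S2=0 scan examines:

```python
def s2_exposures(source_lines):
    """Report line numbers with anything in columns 73-80.

    With option S2=0, SAS inspects the last 8 columns of fixed-format
    source to decide whether sequence numbers are present; lines with
    mixed blanks and numbers there can be misread and syntax error.
    Flagging every line whose columns 73-80 are not all blank gives a
    conservative list for manual review."""
    flagged = []
    for lineno, line in enumerate(source_lines, start=1):
        padded = line.rstrip("\n").ljust(80)
        if padded[72:80].strip():   # columns 73-80, zero-based slice
            flagged.append(lineno)
    return flagged
```

     A line like "DATA A;" padded to column 72 and followed by the
     sequence number "00010000" would be flagged; a line that is blank
     past column 72 would not.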
    d. CONFIGV9:                                                        
    V6SEQ may still be required with SAS V9.1, V9.1.2                   
     Only on "MVS".  SN-012437 and Change 22.108 discuss.               
     SAS V9.1 and V9.1.2 create corrupted and unreadable datasets       
      with no error at create time, and data is unrecoverable,          
      if V7SEQ, V8SEQ, or V9SEQ are used.                               
     SAS Hot Fix in SN-012437 does correct the error for V9.1/9.1.2     
     BUT: I can't guarantee you have that hot-fix installed, so         
     MXG SEQENGINE default was again set back to V6SEQ in 22.05.        
        But: V6SEQ failed with long-length variables, so                
             Change 22.108 shortened all MVS variables.                 
             MXG has had numerous iterations on SEQENGINE!              
              Mostly because unnecessary compression was done to tape.
    e. MXGSASV9:                                                        
    MVS JCL Example has new symbolics for new SAS NLS/LOCALE options.   
     XX='EN' - Default Language Value (ENGLISH)                         
     YY='W0' - Default Encode Value (USA)                               
       'DEW3' is for most GERMAN, but 'DEWB' is for SWITZERLAND.
     You must look at the SAS JCL proc built by your SAS installer      
     to find the correct XX and YY values, and then set them as         
     your MXGSASV9 JCL Procedure defaults.  The symbolics in MXGSASV9:  
       //CONFIG   DD  DSN= ... CNTL(BAT&YY.)                            
       //SASAUTOS DD  DSN= ...&YY..AUTOLIB                              
       //SASHELP  DD  DSN= ...&XX.&YY..SASHELP                          
       //         DD  DSN= ...EN&YY..SASHELP                            
       //SASMSG   DD  DSN= ...&XX.&YY..SASMSG                           
       //         DD  DSN= ...EN&YY..SASMSG                             
       New DD statements for TRNSHLP, ENCDHLP and TMKVSENV were added.  
    f. ASCII-execution code change:                                     
     EBCDIC character variables INPUT with $VARYING had hex zeros       
       where they should have had blanks                                
       because of a SAS V9 Design Change in $VARYING informat.          
     $VARYING always has returned a "raw" $CHAR string that must        
       be converted if the string is EBCDIC text, using:                
        INPUT VARIABLE $VARYINGnn. LENTEXT @;                           
      but when LENTEXT was less than nn, the "pad" of '80'x was         
      found on SAS ASCII platforms, so the statement                    
        VARIABLE=TRANSLATE(VARIABLE,' ','80'x);                         
      was added to translate the unexpected/undocumented '80'x.         
      Now, also undocumented, in V9, the "pad" of '00'x is returned!    
      So an additional                                                  
        VARIABLE=TRANSLATE(VARIABLE,' ','00'x);                         
      had to be added 511 times in 55 members.                          
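       The effect of the two TRANSLATE statements can be mimicked
       outside SAS.  This Python sketch (my illustration, not MXG
       code) maps both the V8 '80'x pad and the undocumented V9 '00'x
       pad to blanks, which is what the paired TRANSLATE statements
       accomplish:

```python
def clean_varying(field: str, width: int) -> str:
    """After an EBCDIC string is INPUT with $VARYING and converted,
    positions past the actual text length were found padded with
    '80'x under SAS V8 and, undocumented, with '00'x under V9 on
    ASCII platforms.  Translate both pad bytes to blanks, mirroring
    MXG's two TRANSLATE(VARIABLE,' ',...) statements."""
    return field.ljust(width).replace("\x80", " ").replace("\x00", " ")
```

       Either pad byte in the trailing positions comes out as blanks,
       so downstream comparisons against blank-padded values work on
       both SAS versions.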
 7. MXG 22.04 or later is required to support all CICS/TS 2.3 SMF data. 
     MXG 21.04 supported UTILEXCL/IMACEXCL to read CICS/TS 2.3 SMF 110s,
      so if you used UTILEXCL to read your CICS/TS 2.3 Dictionary to    
      create IMACEXCL, that IMACEXCL correctly read SMF 110 data.       
     And if you had EXCLUDEd fields, you HAD to use UTILEXCL.           
     But if you read SMF 110 records with MXG 21.04 thru 22.03, and all 
      CICS fields were present, TASCPUTM and many other variables were  
      completely wrong, and there were no error messages!               
     MXG 22.04 or later supported full CICS/TS 2.3 records.             
     You are still advised to use UTILEXCL/IMACEXCL, even with all data,
      as it only outputs CICSTRAN variables that exist in your CICS.    
     My original CICS/TS 2.3 test data had EXCLUDEd fields!             
 6. MXG 22.06 is required for DB2 Parallel                              
     CPU time for DB2 Parallel Trans was not output                     
      (i.e., lost, could be very large) in DB2ACCT.                     
     Code in MXG Exit Members EXDB2ACC/EXDB2ACP/EXDB2ACB                
      deleted all obs with DB2PARTY='P', which was wrong                
      because those obs contain the DB2TCBTM for parallel events.       
     New DB2PARTY='R' for the Roll-Up observations also added.          
     Extensive DB2 Technical Note in Newsletter FORTY-FIVE and          
     additional documentation in Change 22.121 text.                    
 5. If you are using IRD, you must install MXG 22.02 or later:          
    Full Support for IRD (Intelligent Resource Director) in all         
    CPU-related datasets.  IRD support was incremental in MXG:          
          Datasets            When        MXG Version  Change           
        ASUM70PR/ASUMCEC   Sep 22, 2003     21.05      21.170           
        TYPE70PR           Mar 11, 2004     22.01      22.011           
         TYPE70,RMFINTRV    Mar 22, 2004     22.02      22.050
    PCTCPUBY in TYPE70 and RMFINTRV were wrong in any interval          
     when IRD varied CPUs offline.                                      
    I'm embarrassed, since PCTCPUBY is the second most important        
    of all of the variables in MXG                                      
      (CPUTM for billing is the most important);                        
    This is the first PCTCPUBY error in MXG's TWENTY-YEAR history!      
    When all engines remained online, however, there was no error.      
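     The arithmetic behind the fix can be illustrated outside of SAS.
     This Python sketch is my own simplification, not MXG's actual code:
     when IRD varies engines offline mid-interval, the busy percentage
     must be computed against the engines' actual online seconds, not
     against engine_count times the interval length.

```python
def pct_cpu_busy(busy_secs, online_secs):
    """Percent-busy across engines whose online time can differ within
    an RMF interval (IRD may vary CPUs offline mid-interval).

    The safe denominator is the sum of each engine's actual online
    seconds; using engine_count * interval_length instead overstates
    capacity and understates the busy percentage whenever an engine
    was offline for part of the interval."""
    total_online = sum(online_secs)
    if total_online == 0:
        return 0.0
    return 100.0 * sum(busy_secs) / total_online
```

     For example, two engines each 50% busy while online, but the second
     online for only half the interval, are 50% busy by this weighted
     calculation; an unweighted denominator would report only 37.5%.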
 4. Still OS/390?                                                       
  MXG 22.07 is required for z890 and 21.04 for z990s.                   
  IBM changed CPUTYPE value                                             
    z990 - 2084x                                                        
    z890 - 2086x                                                        
  Only impacts MSU variables that MXG had to set via                    
    a table lookup based on CPU TYPE for OS/390.                        
  With z/OS, MSU fields are in the SMF records so there                 
    is no table lookup required.                                        
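   As a rough illustration of why a new machine type forces an MXG
   change under OS/390, here is a hypothetical table lookup in Python;
   the function, keys, and MSU values are all made up for illustration
   and are not MXG's code nor real IBM ratings:

```python
# Placeholder table: keys are (CPUTYPE, model), values are MSU ratings.
# All entries here are invented for illustration, not real ratings.
MSU_TABLE = {
    ("2084", "301"): 59,   # hypothetical z990 model rating
    ("2086", "110"): 26,   # hypothetical z890 model rating
}

def msu_for(cputype: str, model: str) -> int:
    """Under OS/390 the MSU rating is not in the SMF record, so a
    table keyed on CPU type/model must supply it; a newly announced
    machine type (like the 2086 z890) fails the lookup until the
    table is updated, which is why new hardware requires new MXG."""
    try:
        return MSU_TABLE[(cputype, model)]
    except KeyError:
        raise LookupError(f"no MSU entry for {cputype}-{model}; "
                          "the table must be updated for new processors")
```

   Under z/OS the MSU values arrive in the SMF record itself, so the
   lookup, and the maintenance exposure, disappear.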
 3. MXG Default Media is now changed to CD-ROM.                         
    I have changed the default media for MXG Software Shipments from    
    a 3480 tape cart (which contains a single 120MB EBCDIC text file),  
    to a CD-ROM (contains same EBCDIC file; upload-as-binary to MVS, and
    read it with IEBUPDTE like a tape to create your MXG SOURCLIB PDS). 
    And the CD-ROM also has MXG Source in ASCII files, so for ASCII MXG 
    execution, or to view the MXG source text on your workstation, you  
    only have to copy the CD-ROM to your workstation's hard drive.      
     However, still better than any media shipment is for you to use ftp
    to download the 120 MB unzipped EBCDIC file for MVS execution, or   
    the 15 MB zipped ASCII directory for workstation execution of MXG.  
    The 120 MB unzipped EBCDIC file download takes about 15 minutes on  
    a typical mainframe connection with a T1 line.                      
    But, if it is simply impossible for you to download via ftp, and you
    still cannot read/upload from a CD-ROM, I will continue to send 3480
    tapes until your site can upgrade its technology and media support. 
     Annual Shipment (typically in February of each year):
    - Since you never know when the Annual Version will arrive, I plan  
      to automatically ship a CD-ROM to all MXG sites (and to ITSV sites
      that have requested the Annual Shipment), even to sites that can  
      ftp download, so I know everyone has the current version.  Sites  
      that still have to have a 3480 tape will need to request a tape,  
      but with that CD in hand next February, they may discover they    
      really don't need to wait for a tape to install the new version!  
    - Concurrent with the Annual Shipment, I post to MXG-L and ITSV-L   
      that ftp-capable sites can request ftp instructions via email, as 
      I'm not yet prepared to database email addresses for auto send!   
    Current Version Shipment requests:                                  
    - Use the form  to request
      the current version via ftp and you will receive ftp instructions 
      in a reply email (example instructions are in that page), or, use 
      it to request shipment on CD-rom, or you can use the form even to 
      request 3480 shipment, if your site is both ftp and cd challenged.
    Problems solved by CD-ROM:                                          
    - Firewall issues that prevent ftp download, internet restrictions. 
      You really should fix that internal issue to get download access  
       to our ftp site (text-only files!), so you don't have to wait and
       I don't have to create, package, and send ad hoc packages.
    - Onerous Tape Mount Procedures, like Security and LABEL=NL issues: 
        One site's "IEBUPDTE" job was cancelled after 17 hours, because 
         the tape librarian had copied the NL tape to a Standard Label
        tape, but changed BLKSIZE from 32720 to 80!  When correct 32720 
        blocksize was used, the IEBUPDTE job took less than 1 minute.   
    - Rare tape read errors, which cause additional delays.  The annual 
      shipment outsourcer's old 3480/3490 drives built three carts with 
      Data Checks (DCK error) and two blank carts (NCA error).          
        The 3480's created for ad hoc shipments on my Overland drive    
        have never caused a Data Check, but I have carelessly sent out  
        one or two blanks every year.  Overland no longer supports their
        auto-loader, so I only have a single-loader as backup for the   
        auto-loader's eventual failure.                                 
    - Cannot create 3590 media.  Overland doesn't manufacture one; even 
      if they did, I'd never buy one, with ftp and CD-ROM alternatives. 
    - Character translation, like dollar-signs versus pound-signs in UK.
      CD-ROM has separate EBCDIC and ASCII files for binary transfer.   
    - Shipment advantages when media is required.                       
      - Thinner Package, CD much cheaper than 3480 cart (unless you buy 
        "new" tapes from e-Bay, but those may be "lifted" new tapes)    
      - Annual Shipment outsourcer is really primarily a CD maker.      
       - Customs advantage - fewer hassles on CDs containing printed
          matter, as carts have caused customs delays in some countries.
      - CD-ROM has MXG Logo, tapes have a paper label.                  
    Potential Negative Advantages of CD-ROM.                            
    - Scratched CD:                                                     
       We've been shipping MXG on CD for four years, and all CDs have
       been read without error.  Even one CD that was run over by a
       fork-lift (tire print on the package, shattered jewel case) was
       read without an error.
    - Blank CD:                                                         
      Like carts, I screw up; I have sent 4 blanks in 4 years.  CD's are
      packaged inside a thin jewel case, wrapped in an envelope in a    
      White Stayflat cardboard shipping envelope.                       
    - But CDs do eliminate the opportunity to meet/date your company's  
      tape librarian and/or the tape apes.                              
 2. FLASH: Do NOT use SEQENGINE=V7SEQ, V8SEQ, or V9SEQ under SAS V9.1,  
    until you have installed V9.1 Hot Fix for SAS Note SN-012437, which 
    should be available May 24, 2004.  And this is NOT just an MXG/ITRM 
    problem: all SAS datasets built by those SEQENGINEs under V9.1, both
    those written/copied to tape, or to DASD in sequential format, can  
    be corrupted and unreadable and unrecoverable, with no error message
    when the dataset was created: ONLY when you attempt to read the data
    will you discover it is unreadable and the data lost.               
    ONLY SEQENGINE=V6SEQ can be used under V9.1 on MVS, without hot fix.
    But, using V6SEQ with MXG under SAS V9.1 may create error messages, 
    if you copy or write any of these datasets to tape or to sequential 
    format on DASD (which is exactly what MONTHBLD/WEEKBLD may do):     
        Product     Datasets                                            
        TYPE80A     TYPE80EK  TYPE8XEK                                  
        TYPE110     CICSJR    CICPGR                                    
         TYPE120     TYP120JI  TYP120WA  TYP120WI
         IBM  SMSX   IGDACSSC
        TMON TMTC   TMTCIS                                              
        CANDLE OMDB DB0035/0630/1920/2060/2080/2360/2571/3120/3140/3190 
    because those datasets contain long-length character variables, but 
    you do get error messages that prevent the creation of bad data, and
    none of those datasets are built/copied in the default BUILDPDB or  
    WEEKBLD or MONTHBLD programs, so this should have minimal impact.   
     However, MXG Version 22.05 revised those datasets so their
    variables are all 200 bytes or less, and you can install that 22.05 
    and safely use SAS V9.1 with V6SEQ.  Complete details will be in    
    Change 22.108 on or before May 24.                                  
    The V7/V8/V9SEQ engines create damaged SAS datasets that can't be   
    read, if the dataset has a "large" number of formatted variables.   
      One case had 128 variables with the SAS-supplied hhmm  format,    
      so it was not "MXG-created" formats that cause the corruption.    
    And there is NO clue when the dataset was created that it can't be  
    read later.  Only these read-time error messages tell you your data 
    was destroyed:                                                      
      ERROR: The format xxxxx  was not found or could not be loaded.    
        and that  xxxxx text contains strange, unprintable characters.  
    And even if you copy the dataset with OPTIONS NOFMTERR, the copy is 
    still unreadable; you then get these different error messages:      
      ERROR:  Format xxxxx was not found or couldn't be loaded for      
              variable vvvvvvvv.                                        
      ERROR:  Format name for variable vvvvvvv was not a valid SAS      
              name.  Perhaps the input dataset is corrupted.            
    You MUST change CONFIGV9 so that it specifies SEQENGINE=V6SEQ.      
    In MXG Source Library:                                              
      - Thru MXG 20.07, CONFIGV9 had V6SEQ specified.                   
      - MXG 21.21 dated Feb 6 still had V6SEQ in CONFIGV9               
                 (this was the Annual 3480 tape shipment build)         
      - MXG 21.21 dated Feb 11 changed CONFIGV9 to specify V9SEQ,       
                 due to Change 21.320, added after tape copy started;   
                 (this was the CD-ROM shipment, and all ftp downloads). 
      - MXG 22.01 thru MXG 22.04 still had SEQENGINE=V9SEQ.             
      - MXG 22.05 and later now has V6SEQ, and that will remain for some
          time, to protect sites that did not install the Hot Fix.      
    To the credit of SAS Institute Technical Support and Developers,    
    it took only two weeks from discovery to a tested hot fix!          
    And please note:  THERE IS NO ERROR USING V8SEQ WITH SAS V8.2.      
    It is only execution under SAS V9 that has this error.              
    In the discussion in Newsletter FORTY-FOUR, you can see I did do    
    extensive benchmarking of the V9SEQ engine, but unfortunately, I    
    was concerned only with the CPU costs of compression, and never     
    attempted to read the datasets that were created.                   
    This FLASH was originally sent May 21, based on my read of an early 
    copy of the SAS Note, and I concluded V8SEQ was safe for V9.1, but  
    today SAS Technical Support (working on Saturday!) corrected me, so 
    this note, dated May 22, 2004, does replace the earlier note; the   
    error occurs with V7SEQ, V8SEQ, or V9SEQ under V9, without the fix. 
 1. FLASH: BUILDPDB (only under "MVS") runtime elongation can be
    significant IF any output libraries (like CICSTRAN or DB2ACCT) are  
    on tape or in sequential format on DASD.                            
      This problem does NOT affect ITRM's normal job, because           
      %CMPROCES/%CPPROCES puts everything in //WORK, and ITRM doesn't   
      use tape libraries during its "BUILD".                            
    This problem was introduced in MXG 19.19, when %VGETENG was added in
    VMXGRMFI, to test if a //SPIN DD existed.  VMXGRMFI is called by    
    RMFINTRV, which is automatically included in MXG's BUILDPDB.  If you
    do NOT use tapes for output in your BUILDPDB job, you don't have    
    this problem.                                                       
     The %VGETENG macro's PROC SQL queried DICTIONARY.MEMBERS to test the
     existence of //SPIN, but no WHERE LIBNAME=SPIN clause was used, so the PROC
    SQL had to read all LIBNAMEs to populate the DICTIONARY.MEMBERS     
    table.  If your CICSTRAN and DB2ACCT are both multi-volume on tape, 
    you would have these mount messages on the BUILDPDB's job log:      
       SMF Opened, read started               14:25                     
       CICSTRAN  Mount-Dismount 5 vols        14:24-16:06               
       DB2ACCT   Mount-Dismount 2 vols        14:24-15:25               
       SMF Closed, read completed                   16:12               
       VGETENG-remount/read 2 DB2ACCT vols    16:17-16:30               
       VGETENG-remount/read 5 CICSTRAN vols   16:40-17:09               
         Total Elapsed time:  164 minutes with re-read.                 
    VGETENG wasted 52 minutes, mounting and reading the seven tapes! For
    DASD data libraries, PROC SQL only has to read the directory of SAS 
    datasets in that library, but for sequential format libraries, PROC 
    SQL has to read the entire sequential file, because tape format     
    libraries do not have a directory of datasets.                      
    And PROC SQL doesn't print any message on the SAS log to tell you it
     was the cause of the extra CICSTRAN & DB2ACCT mounts.  The only
     clue to the elongation is those extra, unexpected tape mount
     messages on the job's SYSOUT!
    Elapsed and CPU time, and EXCPs of your daily job can be reduced    
    significantly, if you use tape output on MVS in your BUILDPDB job,  
    with either circumvention:                                          
    Immediate circumvention, any of the three fixes:                    
      1. Replace MXG 21.21 with MXG 22.01, now available.               
         See text of Change 22.017 for lots of details.                 
      2. Change the JCL:                                                
         Add ,FREE=CLOSE to the //CICSTRAN and //DB2ACCT and to any     
         other output DDs that are on tape/seq format.                  
      3. Or, change the MXG source code:                                
         a. EDIT/CREATE member EXPDBOUT in USERID.SOURCLIB tailoring    
            library, and add a statement:                               
                 LIBNAME CICSTRAN CLEAR;                                
                 LIBNAME DB2ACCT  CLEAR;                                
            for each tape (or sequential format) DDNAME.                
         b. Or do the same in your //SYSIN stream, using:               
              //SYSIN DD *                                              
                %LET MACKEEP=                                           
                   %QUOTE(     LIBNAME CICSTRAN CLEAR;                  
                                LIBNAME DB2ACCT  CLEAR;  );
                 %INCLUDE SOURCLIB(BUILDPDB);
               ... rest of job ....
    Discussion of the circumventions:                                   
    - Using FREE=CLOSE.  This appears to always be safe.  The FREE=CLOSE
      occurs when CICSTRAN/DB2ACCT/etc are closed, at the end of reading
      the SMF file, so PROC SQL in VMXGRMFI won't see those sequential  
      LIBREFs so they won't be read.  But even if your job does then    
      %INCLUDE any of the ASUMCICx, ASUMDB2A, or ASUMUOW members that   
      read those data libraries, SAS is still able to re-open the LIBREF
      that was FREE=CLOSEd, without any error.  Like magic! And using   
      FREE=CLOSE on //SMF DD seems always wise and courteous, so the    
      device can be available to another.                               
    - Using LIBNAME xxxxxxxx CLEAR; in EXPDBOUT also appears to be      
      completely safe.  EXPDBOUT is a little later than the close of the
      SMF file, after a few PROC SORTs, but the CLEAR closes the LIBNAME
      so PROC SQL in VMXGRMFI never sees those LIBNAMES to read, but the
      allocation can be re-opened, if, for example, you %INCLUDE any of 
      the ASUMxxxx's that read DB2ACCT or CICSTRAN.                     
    - With either circumvention, harmless messages are on the SASLOG for
      each DDNAME, at deallocation time:                                
               IT WAS ALLOCATED EXTERNALLY                              
         NOTE: LIBREF XYZ HAS BEEN DEASSIGNED                           
    - The circumventions do not have to be removed when you install an  
      MXG Version with this change.                                     
    - The choice of changing JCL or MXG source depends only on whichever
      is easier to do within your production change control system for  
      your MXG daily jobs!                                              
III. MVS Technical Notes.                                               
44. APAR PQ92957 documents illegal User ABEND U5320 from BMC Backup     
    Utility; the maximum valid User ABEND value is 4095.
43. APAR PQ92769 changes the format of SMF 118 records for the reply    
    type; MXG change 22.212 supported that internal format change in the
    VMACTCP support.  The APAR also corrects blank value for data set   
    type to the 'S' documented value, and changes the offset to HFS when
    there is no HFS data, which has no impact on anything I can see!    
42. APAR PQ91647 corrects SMF 118 and 119 records so that they are now  
    written when FILETYPE=JES and a DELE command is issued to the FTP
    server.
41. APAR OW08447 reports that PrintWay Extended Mode does not write type
    6 SMF records unless an SMF6 exit (ANFUXSMF) was installed!  The PTF
    for this APAR seems to eliminate the need for the exit.             
40. APAR OA08769 reports incorrect SMF 83 relocation section 01 after   
    changing a SECLABEL on a data set profile.                          
39. APAR OA03292 corrects IEBGENER (!!!!!): when used to copy a PS SMF  
    dataset with RECFM=VBS,LRECL=32767,BLKSIZE=27998                    
      (which, in my opinion, is flat out wrong; SMF data can only be    
       validly written with LRECL=32760, and if the user had used that  
       value, the APAR might not even exist; IBM accepts and corrects   
       any case in which their records have LRECL GT 32760)             
    to a PS-E striped output, the output dataset was twice as large as  
    it should have been.  The APAR changes IEBGENER so that it will not 
    copy the blocksize from the input dataset when the RECFM is NOT     
    undefined or default, which will cause SDB to be tried.             
38. APAR OA08719 corrects IFA CPU times that were incorrectly zeroed in 
    SMF 30 step termination records.                                    
37. Search for IFA and you will find Integrated Fax Adapter 
    for iSeries, and references to the IFA trade show, but no zAAPs!    
36. Support for VSAM IMBED, REPLICATE and KEYRANGE attributes will be   
    withdrawn beyond z/OS 1.7.  No supported release of OS/390 nor z/OS 
    currently allows you to define new VSAM datasets with these         
    attributes, and using them for existing data sets can waste DASD    
    space and can often degrade performance; when support is withdrawn, 
    you will not be able to process VSAM files with these attributes.   
35. Effective with z/OS 1.7, support for ISAM datasets will be withdrawn
    and you will no longer be able to process ISAM datasets.  The ISAM  
    Compatibility Interface remains available to help you migrate to    
    VSAM without application changes.                                   
34. IBM will remove support for STEPCAT and JOBCAT JCL statements in    
    z/OS 1.7 (expected in September 2005), claiming there are other     
    facilities in DFSMSdfp that allow catalog requests to be directed   
    to specific catalogs, and the utility of these two JCL statements   
    has been drastically reduced by the implementation of System Managed
    Storage and the placement of Unit Control Blocks UCBs above the 16MB
    line.  When STEPCAT/JOBCAT support is withdrawn, any remaining JCL  
    using those two statements will have to be changed, or JCL errors   
    will occur.                                                         
33. What are IFA's?                                                     
    For z/OS on z890s and z990s, IFA was the internal name of special   
    purpose processors ("engines") that are now called zAAPs, for       
    zSeries Application Assist Processors, but all of the RMF/SMF field 
    names use IFA; zAAP is the marketing name.  IFAs are engines        
    that execute only Java code, and are not included in MSU capacity,  
    so if you add new Java applications, or can offload current Java to 
    an IFA, you can increase hardware capacity without any increase in  
    software costs.  If you purchase IFAs, you can choose to force all  
    of your Java workload to execute only on the IFAs, thus maximizing  
    the offload of work from your CP engines, or you can let Java work  
    execute on your CPs when they are not busy: if you let Java execute 
    on your CPs, you can let it run at the priority of the Service Class
    or it can be run even lower than discretionary.  New keywords of    
    IFACROSSOVER and IFAHONORPRIORITY let you choose how you use IFAs.  
32. Where does IFA CPU time show up:                                    
    - What's really important about this preliminary discussion is that 
      it exists at all; IBM has been extremely pro-active, especially at
      the SHARE meeting in NYC in August, to provide early documentation
      of what's captured where, well in advance of the z/OS 1.6 delivery
      that is a prerequisite to use these new engines.  The following   
      notes are based heavily on those SHARE presentations and follow up
      from IBMers, with additional valuable input from Cheryl Watson.   
  a. Service Units.  In all records that contain Service Units, the TCB 
     Service Units that formerly had only the service units due to TCB  
      time on CP engines now includes SUs from both IFAs and CPs.       
     If you use Service Units for billing, your billing units may be    
     increased when Java work runs on IFAs.                             
     Why did IBM include IFA service in TCB service?                    
     Because WLM manages based on service units; an all-Java workload   
     would have zero TCB service units, could be using 100% of all IFAs,
     and WLM wouldn't know that, unless IFA service is included.        
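     That inclusion can be sketched arithmetically; this is only an     
     illustration, and the su_sec and cpu_coeff values below are        
     hypothetical, not taken from any real Service Definition:          

```python
# Illustrative only: how TCB service units now relate to CPU time.
# su_sec (service units per CPU second) and cpu_coeff (the CPU
# Service Definition Coefficient) are hypothetical values here.

def tcb_service_units(cp_seconds, ifa_seconds, su_sec, cpu_coeff):
    """TCB SUs include both CP and IFA CPU time, so an all-Java
    workload (cp_seconds=0) still shows nonzero service to WLM."""
    return (cp_seconds + ifa_seconds) * su_sec * cpu_coeff

# All-Java address space: zero CP time, 120 IFA CPU seconds:
sus = tcb_service_units(0.0, 120.0, su_sec=5000.0, cpu_coeff=1.0)
```

     With zero CP seconds the workload still accrues service, which is  
     exactly why WLM needs IFA service folded into the TCB number.      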
  b. SMF 30. Existing CPUTCBTM/SMF30CPT CPU time does NOT include IFA   
     CPU time, so existing billing based on TCB CPU time will not change
     (but CPU time on IFAs will not be recovered unless you change your 
     billing code).  New variable CPUIFATM is the CPU time on IFAs, so  
     you can choose to charge IFA CPU time at a different rate than the 
     CP CPU time, or you could add the CPUIFATM to the total CP CPU time
     in MXG's CPUTM variable to charge both CPU times at the same rate. 
       While on the z990s the CPs and IFAs run at the same speed, on the
       z890s, the IFAs run at full speed (365MIPS) while the CPs have 28
       different "speeds", so the IFA CPU time are normalized back to   
       the CP speed of the specific z890 model.                         
     MXG will NOT add the CPUIFATM into the existing CPUTM variable.    
     MXG creates new variable CPUIFETM ('Eligible') that contains the   
     part of CP CPU time in CPUTCBTM that was executed on a CP, but that
     was eligible to run on an IFA.  So even before you install an IFA, 
     with z/OS 1.6 and Java SDK V1.4 you can measure exactly how much   
     of your CP CPU time is Java time that could be offloaded to an IFA.
       The zAAP Projection Tool (WSC White Paper WP100431) can be used  
       now on z/OS 1.4+ to estimate current CP CPU time for Java apps.  
      These IFA-related CPU variables are created in type 30 data:      
       CPUIFATM='TOTAL*EQUIVALENT*CPU TIME*ON*IFA'                      
       CPUDFATM='DEPENDENT*ENCLAVE*CPU TIME*ON IFA'                     
        CPUEFATM='INDEPENDENT*ENCLAVE*CPU TIME*ON IFA'                  
       CPUIFETM='IFA-ELIGIBLE*CPU TIME*ON*CP'                           
     The original bad news in this Note (prior to MXG 22.11) stated:    
        While IFA CPU time is available in SMF 30 data, the CPUCOEFF,   
        SU_SEC, and R723NFFI (normalization) factors are NOT, so without
        a table lookup for those factors, you can't back out the IFA    
        Service Units from the total IFA+CP Service Units in type 30s.  
      However: IBM APAR OA09118 added the Service Definition Coefficient
      values, and Change 22.265 uses those values to back out the IFA   
      CPU Service Units from the CP Service Units in CPUUNITS variable. 
     Maybe bad news: There are concerns as to the repeatability of CPU  
     metrics, especially since Java work can run on either IFAs or CPs  
     when IFACROSSOVER=YES is specified, and additional concerns for the
     repeatability of the normalization factors on z890s.  Only when we 
     have data from real sites running real Java work will we know the  
     magnitude of these concerns.                                       
     The good news:  since so few sites are actually running much Java  
     on their current z/OS systems, these future concerns won't impact  
     your current chargeback, and you will be able to benchmark and     
     measure your work's repeatability before IFAs are in production.   
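      The billing choice described above can be sketched as follows; a  
      minimal illustration, assuming hypothetical per-second rates (this
      is not anything MXG itself computes):                             

```python
# A chargeback sketch: CPUTCBTM (CP seconds) excludes IFA time,
# so CPUIFATM must be billed explicitly or it is never recovered.
# The cp_rate and ifa_rate values are hypothetical.

def step_charge(cputcbtm, cpuifatm, cp_rate, ifa_rate):
    """Charge CP and IFA CPU seconds at separate rates."""
    return cputcbtm * cp_rate + cpuifatm * ifa_rate

# 300 CP CPU seconds at 0.02/sec plus 100 IFA seconds at 0.01/sec:
cost = step_charge(300.0, 100.0, cp_rate=0.02, ifa_rate=0.01)
```

      Setting ifa_rate equal to cp_rate reproduces the "add CPUIFATM to 
      CPUTM and bill at one rate" alternative mentioned above.          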
  c. SMF 70.  TYPE70 MVS System data flags each CPU, so new IFATYP0-X   
      variables indicate whether an engine is a CP or an IFA.  The MXG  
     PCTCPUBY calculation and related capacity metrics will still be    
     based ONLY on the CP engines, as has always been the case.  New    
     PCTIFABY calculation and related IFA capacity metrics are added    
     to TYPE70 dataset so IFA capacity can also be tracked, separately. 
     The IBM RMF CPU Activity report shows separate lines and averages  
     for the CPs and the IFAs.                                          
       SMF70IFA='NUMBER OF*IFA PROCESSORS*ONLINE' (IBM provided)        
       NRIFAS  ='NUMBER OF*IFA*ENGINES*AVAILABLE' (MXG count)           
       IFAUPTM ='IFA ENGINE*AVAILABLE*(UP) TIME'                        
       PCTIFABY='ALL IFAS*PERCENT*BUSY (DISPATCHED)'                    
       PCTIFBY0-PCTIFBYX='IFA 0*PERCENT*BUSY' (each engine thru IFA 32) 
       IFATYP0-IFATYPX='IFA OR CP*TYPE*CPU 0' (each engine thru IFA 32) 
       IFAWAIT0-IFAWAITX='IFA WAIT*DURATION*IFA 0' (each thru IFA 32)   
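      The percent-busy arithmetic those variables support can be        
      sketched as below, assuming per-engine available (up) and wait    
      times as IFAUPTM/IFAWAIT0-X suggest; the interval values are      
      hypothetical:                                                     

```python
# Sketch of an all-IFAs percent-busy calculation, assuming each
# engine reports available (up) time and wait time per interval,
# as the IFAUPTM and IFAWAIT0-IFAWAITX variables suggest.

def pct_ifa_busy(up_times, wait_times):
    """Dispatched percent across all IFA engines:
    100 * (total up - total wait) / total up."""
    up = sum(up_times)
    wait = sum(wait_times)
    return 100.0 * (up - wait) / up if up else 0.0

# Two IFAs up for a 900-second interval; one waits 450s, the
# other waits the full 900s -> 25% busy across the pair.
pct = pct_ifa_busy([900.0, 900.0], [450.0, 900.0])
```

      Keeping this separate from PCTCPUBY mirrors MXG's design: CP      
      capacity metrics stay CP-only, and IFA capacity is tracked apart. 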
  d. SMF 70 for PR/SM.                                                  
       TYPE70PR PR/SM LPAR dataset contains observations for all engines
       including CPs, ICFs, IFLs, and IFAs, and starting with the z9,   
       variable SMF70CIN identifies the type of engine.  The LCPUPDTM is
       the "CPU" time (Partition Dispatch Time) on that type of engine. 
       For analysis of Linux IFLs and Integrated Coupling Facility ICFs,
       you must use the detail TYPE70PR dataset, and select based either
       on SMF70CIN if it is populated or LPARNAME.  However, Dedicated  
       IFLs always report 100% utilization in RMF TYPE70PR/ASUM70PR.    
       See NEWSLTRS article in Newsletter FIFTY-EIGHT titled            
        "1. MONWRITE data from 2 z/VM" since MONWRITE must be used.     
       But for your "MVS" PR/SM metrics, I strongly recommend that you  
       use the summary datasets, ASUM70PR, ASUM70LP, ASUMCEC, ASUMCELP, 
       which have always used the "CP" engine LCPUPDTM to populate the  
       "MVS" CPU variables, and now has new "IFA" variables from their  
       "IFA" engine observations in TYPE70PR.   And using these ASUMs,  
       rather than your own TYPE70PR summarization means you won't have 
       to modify it for frequent changes in these data; let MXG do it!  
  e. SMF 72.  CPU TCB Service Units, raw R723CCPU/CPUUNITS contains sum 
     of IFA + CP Service Units, and CPUUNITS is used to calculate the   
     CPUTCBTM.  HOWEVER: MXG variable CPUTCBTM will contain ONLY the CP 
     CPU time, and CPUUNITS will contain ONLY the CP Service Units, so  
     existing CP capacity metrics and Capture Ratio will be accurately  
      preserved (after CPUTCBTM is calculated from CPUUNITS, MXG        
      subtracts the new CPUIFATM variable, the IFA CPU time, from it,   
      and subtracts the new IFAUNITS, the IFA Service Units, from       
      CPUUNITS).  The IFA timings                                       
     of CPUIFATM (CPU time on IFA) and CPUIFETM (CP CPU time that was   
     eligible to run on an IFA), and the corresponding IFA service units
     in IFAUNITS and IFEUNITS exist so IFA use and eligible capacity can
     be measured and tracked for each service class period.             
     The bad news: The RMF Workload Report now shows CP+IFA times in the
     "TCB" line, which will no longer match the CPUTCBTM in MXG; the RMF
     report does have a new line with "IFA" CPU time R723IFAT/CPUIFATM, 
     and APPL% CP, APPL% IFACP, and APPL% IFA percentages are reported. 
       Note: The CPUIFATM and CPUIFETM times are actually converted from
             R723IFAT(IFAUNITS) and R723IFCT(IFEUNITS) service units    
             using the new R723NFFI normalization factor.               
     These IFA-related variables are created in the TYPE72GO dataset:   
       R723NFFI='*NORMALIZATION*FACTOR*FOR IFA*TIMES'                   
       R723IFEU='IFA-ELIGIBLE*ON CP*USING SAMPLES'                      
       CPUIFATM='IFA*CPU TIME*ON IFA'                                   
       CPUIFETM='IFA-ELIGIBLE*ON CP*CPU TIME'                           
     The calculation of Velocity may need to be changed in MXG, but only
     when data exists so that the using/delay samples are better known. 
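      The back-out described above can be sketched as follows; the      
      su_sec value is hypothetical, and nffi (R723NFFI) is treated here 
      as a plain multiplier, ignoring the real field's encoding:        

```python
# Sketch of the back-out: subtract IFA service units from the raw
# total, derive CP-only TCB time, and convert IFA units to time.
# su_sec and nffi are hypothetical inputs; nffi (R723NFFI) is
# treated as a simple multiplier here.

def back_out_ifa(r723ccpu, ifaunits, su_sec, nffi):
    """Return (cp_only_units, cp_tcb_seconds, ifa_cpu_seconds)."""
    cpuunits = r723ccpu - ifaunits        # CP-only service units
    cputcbtm = cpuunits / su_sec          # CP-only TCB CPU seconds
    cpuifatm = ifaunits * nffi / su_sec   # normalized IFA seconds
    return cpuunits, cputcbtm, cpuifatm

units, cp_tm, ifa_tm = back_out_ifa(1000000, 250000,
                                    su_sec=5000.0, nffi=1.0)
```

      This is why MXG's CPUTCBTM and CPUUNITS stay comparable to their  
      pre-IFA values while the RMF report's "TCB" line does not.        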
  f. SDSF display.  JOB-level data for CPU% includes both CP and IFA CPU
     percent, but a new IFA CPU column will eventually be added.        
     The Bad News:  The CPU percent busy field in the top line of the   
     display DOES (incorrectly?) include IFAs in the denominator, so a  
     two-CP one-IFA system with 100% in both CPs and 0% in the IFA      
     displays CPU Busy of 66% on SDSF, because the capacity was based on
     three total engines, not the two CP engines.                       
       Unrelated to IFA, but it was just observed that the JOB %CPU     
       includes the "Initiator" CPU time (CPITCBTM/CPISRBTM see ADOC30),
       the CPU time before your program actually starts executing, so   
       you might see CPU time while watching a job start to run, but the
       CPU time in the IEF374I message could be zero; only "standard"   
       program TCB/SRB time is shown in that message.  The "initiator"  
       TCB/SRB time is only captured in the SMF 30 subtype 4 records.   
       -In z/OS 1.12, messages IEF032I/IEF033I replaced the             
        IEF374I/IEF376I messages.                                       
  g. IEF374I step term and IEF376I job term log messages contain the sum
     of CP and IFA CPU time, but IBM intends to either add a separate   
     IFA CPU time value, or more likely a new IEFnnnI log message.      
  h. CICS TASCPUTM already includes the J9 Java TCB's CPU time, which is
     also separately available in the J9CPUTTM variable, but that is the
     total Java CPU time, so we can't tell how much was on a CP and how 
     much was on an IFA at the transaction level.  You will have the SMF
     30s for each CICS region, especially the interval records, to see  
     how much time is CP, IFA, and/or IFA eligible, at the region level,
     and you may have to use those percentages in your chargeback code. 
  i. IMS Message CPU time - unconfirmed, but it is expected that the IMS
     07 log record contains both TCB and IFA CPU time.                  
31. APAR PQ89007 appears to state that SMF120NCS never should have been 
30. APAR PQ91625 reports SMF 119 can have a blank member name if a PDS  
    member is renamed during the ftp transfer.                          
29. APAR PQ91912 reports SMF 120 subtype 3 incorrectly reports a large  
    number of Heap Allocation Failures in SM120HIC.                     
28. APAR PQ92496 reports SMF 120 subtype 1 bytes-transferred fields are 
    too small: a transfer of over 2 GigaBytes in a 2 minute interval    
    (which is only 17MB/sec!) will cause the fields to wrap, producing  
    very large or truncated numeric values.                             
27. APAR OA08065 corrects SMF6UIF (MXG variable LOCLINFO) to contain the
    value from JMRUSEID for PrintWay-created SMF type 6 records; the    
    previous value was from the PrintWay JOB's userid.                  
26. IFA CPU measurements are corrected by APAR OA08606, and APAR OA07320
    corrects WLM handling of IFAs.                                      
25. IBM "Driver 55" for z990s and z890s has both a Hardware Model and   
    a Software Model Number, but that driver change has no impact on RMF
    data nor on MXG software.  The CPUTYPE remains 2084x, the CPCMODEL  
    is still 309 (the Software Model Number), and the Serial 0AC4DCx is 
    unchanged; using the D M=CPU command shows both model numbers:      
      CPC ND = 002084.B16.IBM.02.0000000AC4DC                           
      CPC SI = 2084.309.IBM.02.00000000000AC4DC                         
24. Differences between VSAM RLS Architecture and Performance and RLS   
    are discussed in IBM Item RTA000172667, in answers to ten questions;
    IBM requests that you ask only one question per WWQ&A record,       
    because "our funding is based on the number of records we respond   
    to."!                                                               
23. Uncataloged datasets on SMS volumes can occur, as discussed in IBM  
    Item RTA000039164, as a result of system failures, or an authorized 
    program could have bypassed security and uncataloged them, or if you
    did a physical dump and restore of the volume: DFDSS/SDFSMSdss will 
    restore a VSAM dataset in that case, but cannot catalog it and will 
    give an ADR318I message to indicate that the data set "may" need to 
    be cataloged (physical dumps do not preserve enough information     
    about VSAM datasets to reconstruct the catalog entries).  The last  
    reference date should be the uncatalog date, since on an SMS managed
    volume the catalog should be consulted every time the data set is   
    accessed; MXG dataset TYPE6156 will identify who uncataloged the    
    dataset.                                                            
22. APAR PQ86871 for Tivoli reports it incorrectly interpreted values   
    greater than 2,147,483,647 as a negative value in SMF 100 records.  
21. APAR PQ88991 reports WebSphere SMF 120 records are inaccurate,      
    sometimes, in subtype 5 and 6 J2EE interval and container subtypes, 
    with incorrect average response time values, usually too high.      
20. APAR PQ88997 reports SMF 117 and SMF 118 records fail to get the    
    HOSTNAME filled in, sometimes.                                      
19. APAR OA04877 reports new subtype 8 of RMF type 74 record with link  
    performance statistics.                                             
18. APAR OA06367 reports that SMF 89 usage segments will be consolidated
    during interval processing, for each registered product, to correct 
    the growth of Subpool 230, key 0, caused by the prior algorithm of  
    keeping each and every segment for the life of the job.             
17. APAR OA07251 reports that when specifying a RANGE of SMF records in 
    SMFPRMxx, the subtypes can be misinterpreted so that you don't get  
    the subtypes you expected: SYS(TYPE(1,28:30(6))) was specified in   
    SMFPRMxx, but D SMF,O shows SYS(TYPE(1,28(6),30)), and the display  
    is correct; all subtypes of SMF 30 are created, not just the        
    subtype 6 expected.                                                 
    Circumvention: enumerate IDs with subtypes: SYS(TYPE(1,28:29,30(6)))
16. APAR OA07396 reports ABACKUP creates SMF Type 1 records (not seen   
    since MVT, when type 1 recorded system wait time, and poorly at     
    that!), even though the site had specified SETSYS NOSMF.            
15. APAR OA06258 reports incorrect Switch ID in RMF FCD Report, and     
    creates new bit in R747SPFL to identify a Cascaded Switch.          
14. APAR OA07664 reports a large number of SMF 80 records when running  
    an HTTP server, after the apply of OA04416; the massive increase in 
    records was for the "PUBLIC" (surrogate) userid.  The only          
    circumvention now is to use the IEFU83/IEFU84 exits to exclude      
    records created by a surrogate.                                     
13. APAR OA07737 reports that you cannot have an MVS SYSNAME in your    
    SMFPRMxx member that has a first character numeric that is followed 
    by any of these characters H,M,S,P,T in any of the remaining        
    positions of the name, e.g., SYSNAME(9H99) is invalid.              
12. APAR OA07706 subtly indicates IBM has always been wrong in SMF 30   
    records' SMF30TRR, MXG variable DIVRREAD, in that the value passed  
    by DIV (Data In Virtual) to SMF is accumulated, and not a DELTA     
    value.                                                              
11. OA06932 reports LSPACE ABEND738 if SMF 19 records are turned on, as 
    LSPACE is issued for each DASD volume, one at a time, at SMF INIT.  
    Long ago, we had problems with SMF 19; with DCOLLECT or similar     
    DASD management tools available, there is no real value, and lots of
    real exposure, if you permit SMF to write SMF 19 records.           
    See also APAR OW45020 re both SMF 19 and SMF 69 LSPACE problems;    
    SMFPRM of NOTYPE(19,69) is recommended.                             
10. APAR OA07693 documents an ABEND90A in IEASMFDP, the SMF dump program
    if there are more than 8 output DD statements, "because the updated 
    compiler requires updates on how some of the GETMAINs and FREEMAINs 
    services are invoked".                                              
9.  APAR OA06073 corrects error in IBM's DISPLAY STOR command, which had
    a slightly incorrect value for installed storage; the command showed
    a value of 384MB, but the sum of PVTPOOL and PVTFPFN was 386.9MB.   
8.  APAR OW57651 caused extremely large values in CPU time for OMVS     
    or USS tasks in Type 30 records; APAR OA06407 corrects those data   
    as well as other SMF30 fields (like CPUUNITS, etc).                 
7.  APAR OA07041 corrects SMF30SQT, which was not populated correctly   
    in the job termination record when the last step was flushed; the   
    field was incorrectly zeroed in the subtype 4 and subtype 5 records.
6.  Discretionary work swapped out for an hour with low CPU utilization 
    may be addressed in APAR OA07058 or OW50378, per MXG-L posts.       
5.  Type 42 Subtype 21 (DELETE ALIAS) SMF records are only written if   
    SMF type 14 and 15 are collected, and this is not documented!       
4.  For z990s, APAR OA06346 is critical; without it installed, RMF will 
    CPU-loop if CRYPTO is specified in ERBRMF00, and of course CRYPTO   
    is the default.  You can specify NOCRYPTO and re-cycle RMF to       
    circumvent the RMF loop, but the APAR apparently is a full fix, with
    several PTFs to install. Mar 17, 2004.                              
3.  Yet another FICON APAR, OA04856, corrects DASD I/O calculations for 
    connect and disconnect times in RMF 74s, and using and delay data in
    RMF 72s.                                                            
2.  APAR OW57679, June 2003, reported incorrect processing of channel   
    subsystem for control unit queueing for FICON connections caused    
    negative values for disconnect time.                                
1.  A second, unrelated APAR that messes with SMF 30 record fields, and 
    in particular with SMF30SRV, is fixed in APAR OA06407.  This is in  
    addition to APAR OA05542, reported in the previous MXG Newsletter.  
    Blank values in SRVCLASS were corrected by this APAR.               
IV.  DB2 Technical Notes.                                               
2. DB2 Trace IFCIDs 316, 317, and 318 are documented in SG24-6418-00:   
   "The detailed information about the statement cache is only available
   to online monitor programs.  The information cannot be externalized  
   to SMF or GTF, so it cannot be processed by a DB2 PM batch job."!!!  
 1. DB2 Accounting Discoveries, Jun 5, 2004:                            
   DB2ACCT observations from DB2 Parallel "Rollup" Records, QWACPARR='Y'
   have all been deleted since Change 19.027, by code in EXDB2ACC/ACP   
   exits, back when duplicate DB2ACCT data was discovered and deleted.  
   But that has changed in the last three years, and detailed           
   examination of current DB2 V6 and V7 data proves my decision to      
   delete them was wrong.                                               
      There are a number of IBM APARs, listed at the end of this note,  
      that changed the contents of DB2ACCT parallel data records.       
   a. A single DDF Unit of Work (NETSNAME UOWTIME) created 45 DB2ACCT   
      records with the same values.  This UOW started at 21May 08:59 and
      ran until                                                         
      24May 18:07, three days later.  There were 11 groups of ACEs, with
      43 of 45 records written on the first day, from 08:59 to 12:48    
      (and they included nine pairs of DB2 Parallel Origin/Parent "O"   
      and Rollup "R" records, below).  Then at 14:11 on the 21st, the   
      "serious" DB2 parallel event started, ran for 75 elapsed hours,   
      ending at 18:07 on the 24th, and wrote one pair of parallel       
      records, observations 44 and 45 in the below details of all 45    
      DB2ACCT records from that UOW:                                    
                                   REPORT ONE                           
                               UNIT OF WORK SUMMARY                     
       NETSNAME =GA0400FD..A508  UOWTIME=27APR2004:23:37:15.959288      
  (columns: Obs, ACE, QWACBSC, QWHSSTCK, DB2TCBTM, DB2PARTY, QWACPCNT,  
   QWACPACE, QXMAXDEG, ELAPSTM, QWHSWSEQ; the QWACBSC and QWHSSTCK     
   times are on 21MAY2004 unless a date is shown in the row)            
 1 141CC1C8 08:59:39 08:59:39.972   0:00:00 S  0 00000000  0  0:00:00 73
 2 141CC1C8 09:02:07 09:02:08.120   0:00:00 S  0 00000000  0  0:00:00 74
 3 141CA708 09:13:06 09:13:07.129   0:00:00 S  0 00000000  0  0:00:00 08
 4 141CA708 09:13:54 09:13:55.783   0:00:00 S  0 00000000  0  0:00:01 09
 5 0F7F3E38 09:51:38 09:51:38.338   0:00:00 S  0 00000000  0  0:00:00 90
 6 0F7F3E38 09:51:38 09:57:16.717   0:00:05 O 13 00000000 11  0:05:38 22
 7 0F7F3E38 09:57:16 09:57:16.717   0:00:14 R 13 0F7F3E38  .          23
 8 0F7F3E38 10:01:09 10:01:10.096   0:00:00 S  0 00000000  0  0:00:00 03
 9 141CB8C8 10:06:11 10:06:11.114   0:00:00 S  0 00000000  0  0:00:00 16
10 141CB8C8 10:06:11 10:10:01.715   0:00:03 O 13 00000000 11  0:03:50 33
11 141CB8C8 10:10:01 10:10:01.715   0:00:00 R 13 141CB8C8  .          34
12 141CB8C8 10:17:06 10:17:06.675   0:00:00 S  0 00000000  0  0:00:00 17
13 141CB8C8 10:17:06 10:18:19.075   0:00:02 O 13 00000000 11  0:01:12 19
14 141CB8C8 10:18:19 10:18:19.075   0:00:00 R 13 141CB8C8  .          20
15 141CB8C8 10:21:29 10:21:29.106   0:00:00 S  0 00000000  0  0:00:00 27
16 141CB8C8 10:21:29 10:23:02.009   0:00:02 O 15 00000000 11  0:01:32 28
17 141CB8C8 10:23:02 10:23:02.009   0:00:00 R 15 141CB8C8  .          29
18 0F7F6388 10:49:31 10:49:31.675   0:00:00 S  0 00000000  0  0:00:00 82
19 0F7F6388 10:50:17 10:53:21.605   0:00:02 O 13 00000000 11  0:03:03 86
20 0F7F6388 10:53:21 10:53:21.605   0:00:19 R 13 0F7F6388  .          87
21 0F7F6388 11:00:01 11:00:01.705   0:00:00 S  0 00000000  0  0:00:00 95
22 0F7F6388 11:00:07 11:01:07.192   0:00:15 S  0 00000000  0  0:00:59 98
23 0F7F6388 11:02:06 11:02:06.929   0:00:00 S  0 00000000  0  0:00:00 99
24 0F7F6388 11:02:17 11:04:08.340   0:00:23 S  0 00000000  0  0:01:50 02
25 0F7F6388 11:11:56 11:11:57.058   0:00:00 S  0 00000000  0  0:00:00 25
26 141C93B8 11:12:27 11:14:32.848   0:00:01 O 13 00000000 11  0:02:05 33
27 141C93B8 11:14:32 11:14:32.858   0:00:15 R 13 141C93B8  .          34
28 0F7F6388 11:14:57 11:14:57.769   0:00:00 S  0 00000000  0  0:00:00 35
29 141CC548 11:19:36 11:19:36.259   0:00:00 S  0 00000000  0  0:00:00 50
30 141CC548 11:19:52 11:20:33.852   0:00:00 O  2 00000000 11  0:00:41 53
31 141CC548 11:20:33 11:20:33.851   0:00:12 R  2 141CC548  .          54
32 141CC548 11:20:39 11:20:40.348   0:00:00 S  0 00000000  0  0:00:00 55
33 141CC708 11:21:52 11:21:53.536   0:00:00 S  0 00000000  0  0:00:00 58
34 141CC708 11:22:09 11:23:22.843   0:00:00 O 13 00000000 11  0:01:13 62
35 141CC708 11:23:22 11:23:22.841   0:00:12 R 13 141CC708  .          63
36 141CC708 11:23:31 11:23:32.136   0:00:00 S  0 00000000  0  0:00:00 64
37 141CC388 12:35:17 12:35:17.945   0:00:00 S  0 00000000  0  0:00:00 79
38 141CC388 12:37:50 12:37:51.791   0:00:00 S  0 00000000  0  0:00:00 92
39 141CC388 12:38:24 12:39:08.285   0:00:00 O 13 00000000 11  0:00:43 93
40 141CC388 12:39:08 12:39:08.285   0:00:07 R 13 141CC388  .          94
41 141CC388 12:54:03 12:54:03.458   0:00:00 S  0 00000000  0  0:00:00 41
42 141CC388 12:54:44 12:56:41.133   0:00:08 S  0 00000000  0  0:01:56 49
43 141CC388 12:58:40 13:00:55.932   0:00:08 S  0 00000000  0  0:02:15 73
44 10800A88 21MAY2004 24MAY2004                                         
            14:11:41 18:07:45.923   0:00:00 O  6 00000000  6 75:56:04 91
45 10800A88 24MAY2004 24MAY2004                                         
            18:07:45 18:07:45.922 188:40:38 R  6 10800A88  .          92
   Notes on this data, created with MXG 22.05 (prior to 22.121 fix):    
    1. Each pair of DB2 Parallel records is now identified by new       
       variable DB2PARTY:                                               
             DB2PARTY  Description     Set by                           
               'R'      Rollup         QWACPARR EQ 'Y'                  
               'P'      Parallel       QWACPACE GT '00000000'X, NOT PARR
               'O'      Parent/Orig    QWACPCNT GT 0 OR QXMAXDEG GT 0   
               'S'      Sequential     None of the above                
      and the data shows the "O" Parent record is written first in each 
      pair and that is followed by the "R" Rollup record.               
   2. The QWACBSC (Begin Datetime) in the Rollup record is not the start
      of the parallel event, but rather is the ending datetime value.   
   3. The QWACESC (End Datetime) in the Rollup record is not even a date
      time, but in that record, is the total children elapsed time.     
   4. The below examples show QWACESC from the raw record, but now, MXG 
      sets QWACESC=QWHSSTCK, sets ELAPSTM=., and puts the children's    
      elapsed time in new variable CHIELPTM.                            
   5. Essentially ALL of the DB2 Parallel CPU time (DB2TCBTM) is only in
      the rollup record of each pair (188 CPU hours in 75 elapsed hours 
       in obs 44 and 45). And, yes, it was that QWACPARR='Y' record that
      was the one that was NOT output in DB2ACCT until Change 22.121.   
   6. The Rollup record is written after the Parent, in QWHSWMSG order, 
       but the Rollup's QWHSSTCK can be slightly earlier (see OBS 44/45).
    7. The Rollup record DOES NOT contain "rolled up" values, as can be 
       seen by comparing the Parent and Rollup records, especially obs  
       44 and 45, for the fields that are documented in DSNWMSGS as     
       supposedly "rolled-up".  For example, the Buffer Pool counts in  
       REPORT TWO show values for Buffer Pool 1 in the Parent, but no   
       BP 1 counts in the Rollup record, and the BP 2,3 counts are only 
       in the Rollup, so it appears all records are needed to account   
       for the event:                                                   
                   REPORT TWO - ROLLUP FIELDS                           
  44     O          6       302       7         .             .     .   
  45     R          .         .       .        29    1905773502     0   
  44        .         .           .     .         .        .       .    
  45  1947039  63565784  1847086248     7        89       64       0    
  44      .       .        1         13       109       73        3     
  45    117       0        6        460        18        0        6     
  44      1       0:00:00.04        14         0         0      2       
  45      0     188:40:55.65   3894206  37604434  41181854      0       
  44      0      75:56:04.46    0:00:03.29    0:00:00.06    0:00:00.00  
  45      0     404:58:20.00    0:00:00.00    6:30:57.64    3:39:22.35  
 Obs      QWACAWTR        QWACAWTW    QWACCOMM    QXMIAP                
  44    0:00:00.00      0:00:00.00        0          0                  
  45   28:38:08.40      0:00:00.00        6          .                  
    8. Similarly, for the fields that are not documented as rolled up,  
      different values exist in both records:                           
               REPORT THREE -  NON-ROLLUP FIELDS                        
  44    O         .        .         0         0         0        0     
  45    R         1        2         0         0         0        0     
  44    0         0        0      0:00:00.00     0        0        0    
  45    0         0        0      0:00:00.00     0        0        0    
  44     2        0       47        0       0        4      0:00:00.00  
  45     2        0       47        0       0        5      0:00:00.00  
  44            .    0:00:16.68      .                .    0:00:16.72   
  45            .    0:00:16.68      .                .  188:40:55.65   
  44 24MAY2004:18:07:45.92     0          .                .      .     
  45 18JAN1900:06:58:20.00     0          .                .      .     
 Obs     QWACOTSE                  QWACRINV                  QWACSLNS   
  44            .    12:NORM:DEALLOCATE, NORMAL PROG TERM        .      
  45            .    12:NORM:DEALLOCATE, NORMAL PROG TERM        .      
  44            .   0:00:00.00  8217.77     .        .               .  
  45            .   0:00:00.00  8217.77     .        .               .  
  44            .     .               .     .               .     .     
  45            .     .               .     .               .     .     
  44            .     .               . 8:REMOTE UOW    302      7      
  45            .     .               . 8:REMOTE UOW      .      .      
  44    0       0       0       0      0       0       0        0       
  45    .       .       .       .      .       .       .        .       
  44     0       0       0       0       0      283      0       0      
  45     .       .       .       .       .        .      .       .      
  44    0       0        1       1      1       0        0        0     
  45    .       .        .       .      .       .        .        .     
  44     0         0        0         0         0        0         1    
  45     .         .        .         .         .        .         .    
 Obs QXTOTGRP    QXUPDTE                                                
  44     1          0                                                   
  45     .          .                                                   
  8. And you can see from REPORT FOUR that most character variables in
     the two parallel observations are the same:
                   REPORT FOUR - CHARACTER VARIABLES                    
  44  BIGDB2JB DISTSERV   MVPQ    D:DBAT         225421404040           
  45  BIGDB2JB DISTSERV   MVPQ    D:DBAT         225421404040           
  44                        BIGDB2JB  SERVER   CVODBC32.EXE   bigdb2jb  
  45                        BIGDB2JB  SERVER   CVODBC32.EXE   bigdb2jb  
 Obs    QWHCEUTX      QWHCEUWN    QWHCOPID    QWHCPLAN                  
  44  CVODBC32.EXE                BIGDB2JB    DISTSERV                  
  45  CVODBC32.EXE                BIGDB2JB    DISTSERV                  
 Obs  QWHCTOKN                                        QWHDCCNT          
  44  47413034303046442E41353038000900225421404040      2020            
  45  47413034303046442E41353038000900225421404040      2020            
  44                      SQL07020            202020202020  
  45                      SQL07020            202020202020  
  44     Y                                 SQL07020 
  45     Y                                 SQL07020 
  44  CVODBC32.EXE  bigdb2jb                                            
  45  CVODBC32.EXE  bigdb2jb                                            
  44                                                 NT         0       
  45                                                 NT         0       
  44     02      SQL:OS/2      07                    N          N       
  45     02      SQL:OS/2      07                    N          N       
  44               0003                MIMSVU   SQL07020  
  45               0043                MIMSVU   SQL07020  
  44  CVODBC32.EXE    bigdb2jg                                          
  45  CVODBC32.EXE    bigdb2jb                                          
  44 10800A88  3:ACCOUNTING    59326   DCBDBR1     0006      A508       
  45 10800A88  3:ACCOUNTING    59327   DCBDBR1     0006      A508       
  44 040520225421 00000006 GA0400FD    6.1   1AX:ACCOUNT AND STATISTIC  
  45 040520225421 00000006 GA0400FD    6.1   1AX:ACCOUNT AND STATISTIC  
  44 3:ACCOUNTING    DBR1    24MAY2004:18:07:45.923    46091       6    
  45 3:ACCOUNTING    DBR1    24MAY2004:18:07:45.922    46092       6    
  9. The bottom line:
     MXG Change 22.121, in MXG 22.06, has made these corrections to
     the MXG logic for processing DB2 parallel transactions:
     a. Variable DB2PARTY was previously set to "S" instead of "O", and
        the Rollup and Parallel records both had "P" instead of being
        set to separate values.  These are the DB2PARTY definitions now:
           DB2PARTY  Description     Set by                             
             'O'      Parent/Orig    QWACPCNT GT 0 OR QXMAXDEG GT 0     
             'R'      Rollup         QWACPARR EQ 'Y'                    
             'P'      Parallel       QWACPACE GT '00000000'X, NOT PARR  
             'S'      Sequential     None of the above                  
          (Prior to this change, "O" obs were "S", both "R" and "P" obs
           were "P", and the "R" obs were deleted in the Exit members.)
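        The DB2PARTY definitions above can be sketched as DATA step
        logic (a simplified illustration, not the actual MXG source;
        see the MXG source members for the authoritative code):

```sas
/* Simplified sketch of the Change 22.121 DB2PARTY classification.  */
/* The QWAC/QX variables are from the SMF type 101 record.          */
LENGTH DB2PARTY $1;
IF QWACPCNT GT 0 OR QXMAXDEG GT 0 THEN DB2PARTY='O'; /* Parent/Orig */
ELSE IF QWACPARR EQ 'Y'           THEN DB2PARTY='R'; /* Rollup      */
ELSE IF QWACPACE GT '00000000'X   THEN DB2PARTY='P'; /* Parallel    */
ELSE                                   DB2PARTY='S'; /* Sequential  */
```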
     b. The exit members EXDB2ACC and EXDB2ACP no longer DELETE any
        observations, so datasets DB2ACCT/DB2ACCTP/DB2ACCTB will now
        contain one observation from each SMF type 101 record.
            To see how much CPU time was lost from DB2ACCT, create your 
            PDB.DB2ACCT with this change, and then use:                 
               PROC FREQ DATA=PDB.DB2ACCT;                              
               WEIGHT DB2TCBTM;                                         
               FORMAT QWACBSC DATETIME9.;                               
            to tabulate the CPU seconds for each start date; note that  
            the WEIGHT statement uses integer seconds, so any fractional
            DB2CPUTM won't be included, but this is a fast tabulation;  
            you could PROC SORT and PROC MEANS if you need absolutes.   
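           The PROC MEANS alternative mentioned above might look like
           this sketch (the output dataset name CPUBYDAY is just an
           illustration; CLASS avoids the need for a prior PROC SORT):

```sas
/* Sketch: exact DB2 CPU totals, including fractional seconds.   */
PROC MEANS DATA=PDB.DB2ACCT NOPRINT NWAY;
  CLASS QWACBSC;
  FORMAT QWACBSC DATETIME9.;     /* group by start date           */
  VAR DB2TCBTM;
  OUTPUT OUT=CPUBYDAY SUM=;      /* one obs per start date        */
RUN;
PROC PRINT DATA=CPUBYDAY; RUN;
```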
     c. ELAPSTM was always calculated only when the QWACESC end time was
        later than the QWACBSC begin time and both were nonzero.  In
        the preceding example, QWACESC is a duration, not a datetime,
        and APAR PQ41012 documents that, in the rollup record, it is
        now the elapsed time of the children.  So now, for DB2PARTY='R',
        QWACESC is set equal to QWHSSTCK, the record-write timestamp,
        the original value of QWACESC is stored in CHIELPTM, and
        ELAPSTM is set missing, as it could be misinterpreted.
         This does mean there will be an overlap of the BSC/ESC values  
         in the two records written for the parallel event.             
      d. There are numerous IBM PTFs that correct DB2 Parallel data:    
             APAR    Last Change Date    Date Closed                    
                        yyyy mo dd        yyyy mo dd                    
             PQ22451    1999/03/02        1999/02/01                    
             PQ41012    2000/12/01        2000/11/14                    
             PQ45519    2001/05/02        2001/04/02                    
             PQ50538    2002/05/16        2001/10/03                    
             PQ78546    2003/12/02        2003/10/25                    
             PQ85650    2004/04/05        2004/03/23                    
V.   IMS Technical Notes.                                               
VI.  SAS Technical Notes.                                               
14. WRITE ACCESS DENIED can be caused, on "MVS", if you attempt to
    write to a SAS Data Library but you have DISP=SHR; you must
    have exclusive DISP to write to a SAS Data Library (except that
    the SAS/SHARE product does permit writes).  The message can also
    occur if the submitter of the job (RACFUSER) does not have the
    correct RACF/ACF2 access authority to write to the dataset, and
    this condition does NOT produce any other RACF/ACF2 messages; you
    only get the SAS 999 ABEND and the DENIED message on the SAS log.
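    For example, this hedged JCL sketch (dataset names are placeholders)
    shows the difference:

```jcl
//* DISP=OLD provides the exclusive control needed to WRITE:
//PDB    DD DSN=YOUR.PDB.LIBRARY,DISP=OLD
//* DISP=SHR is sufficient only for READ access:
//OLDPDB DD DSN=YOUR.OLDPDB.LIBRARY,DISP=SHR
```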
13. While MVS-S/390-z/OS and Windows platforms no longer use MEMSIZE,   
    for unix and Linux platforms, there still is a MEMSIZE parameter    
    that can cause out-of-memory errors.  The MEMSIZE value is normally 
    set in the sasv8.cfg or sasv9.cfg configuration file, but it can    
    also be set on the sas command (look at properties of the command to
    see if your SAS installer has put the limit there!).  SAS note
    SN-010731 documents that unix operating systems have limits on how
    large a process can grow, and while you may have several gigabytes
    of RAM on the system, increasing the value of the -MEMSIZE option
    might not be able to make additional memory available.  That note
    shows limits of 1, 2, or 3 gigabytes, but MXG rarely needs more
    than 64 MegaBytes.
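    On those platforms the limit can be set in the configuration file
    or on the sas command itself; a sketch (the 256M value is only an
    illustration, since MXG rarely needs more than 64 MegaBytes):

```
-MEMSIZE 256M                          (line in sasv8.cfg/sasv9.cfg)
sas -memsize 256M buildpdb.sas         (or on the sas command)
```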
12. An INFORMAT-truncation error under SAS V9.1.2 occurs only with
    PROC SYNCSORT R2.3A BATCH 0423, the patch from Syncsort for that
    add-on for SAS Version 9.1.2; the error was actually caused by an
    error in the SAS Toolkit product that Syncsort uses to create
    their product.  The error truncates INFORMAT names; the $NOTRAN
    informat was truncated to $NOT.  PROC SYNCSORT finishes with no
    error, but when the sorted dataset is read, the error occurs.
      Note: this error is ONLY with the add-on 'PROC SYNCSORT' product  
      and does NOT occur when SyncSort is used as the Host Sort.        
      To bypass PROC SYNCSORT add-on, remove the SyncSort DD statement  
      in the //STEPLIB concatenation in your JCL procedure.             
      (The Host Sort part of Syncsort is normally in the LINKLIST.)     
      (Specifying SORTPGM=SAS is not sufficient to bypass PROC SYNCSORT;
       if the PROC SYNCSORT library is in the //STEPLIB, it will be used
       even if SORTPGM=SAS is specified.)
    However, MXG Change 22.192 removed all INFORMAT $NOTRAN statements, 
    so MXG 22.08 or later can be used with PROC SYNCSORT, even without  
    the ultimate fix that will have to come from Syncsort, after they   
    get the fix to the SAS Toolkit software from SAS Institute.         
    Syncsort ticket number SR387805 refers to this error.
11. An error locating the SAS/C Transient Library can occur under SAS
    Version 8, as the Transient Library for C Programs is only required
    in V8 if your job uses TCP/IP functions (like ftp, email sockets).
    The SAS CONFIG member (named BATCH in SAS V8) option SASCTRANSLOC
    is probably pointing to an incorrect DSNAME, causing this error.
    Under SAS V9, the C Transient Library is required, but the Install
    of V9 forces it to be located correctly as part of the install.
    Since it is a SAS file specified thru their CONFIG, the option is
    NOT used in the MXG CONFIGV8/CONFIGV9 configuration overrides.
    Jul 28, 2004.
10. OPTIONS ERRORCHECK=STRICT; can cause an ABEND, if you want one,
    when you %INCLUDE SOURCLIB(xxx); and member xxx does not exist.
    I was unaware of the ERRORCHECK option.
9.  If you are using ODS HTML with the WIN device driver, your SAS      
    session may hang.  Changing the device to GIF seems to correct it.
8.  SAS Hot Fix C9BA050S addresses issues in SAS 9.1.2 on z/OS where SAS
    components on z/OS fail communications with TCP/IP; the particular  
    failure occurred when an MVS job was trying to send a file via email
    and the EMAIL step would hang until timeout; the Hot Fix corrected
    the problem.
7.  SAS note SN-010122 discusses "No MKLEs found ERROR: VM1319: The PCE 
    address= and MEMORY= address= ....".  In the past, VM1319 was due   
    to a virtual memory restriction (a MEMSIZE too small, or REGION= too
    small), but this error is also generated in Release 8.2 and 9.1 if  
    you have a FILENAME statement with only one dataset name inside the 
    parens:  FILENAME  TEST  ('xxxxx.yyyyyy.zzzz') DISP=SHR;            
    You must remove the parens when there's only one dataset, but must  
    have the parens when two or more datasets are to be concatenated.   
    Under SAS 9.1 you also get additional clues:                        
         ERROR: Logical name assigned but not in current scope.         
         ERROR: Error in the FILENAME statement.                        
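    To summarize the two valid forms (dataset names are placeholders):

```sas
/* One dataset: no parentheses allowed: */
FILENAME TEST 'xxxxx.yyyyyy.zzzz' DISP=SHR;
/* Two or more concatenated datasets: parentheses required: */
FILENAME TEST ('xxxxx.yyyyyy.zzzz' 'aaaaa.bbbbbb.cccc') DISP=SHR;
```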
6.  SAS Version 9 System EC6 ABEND can result when an OE segment is not 
    defined for the user; all users in V9 must have an individual OE    
    segment or a site default OE segment as noted in SAS Note SN-011960.
    SAS Note SN-001616 documents that the SAS/C transient library now
    requires an Open Edition (a/k/a OMVS) segment, so that each user has
    permission to use unix System Services, and how to define one.      
5.  A SAS 0C1 ABEND, with TRACEBACK pointing to the SASSORT module,
    resulted when the JCL had LOAD='OLD.SYNCSORT.LOADLIB', but the site
    no longer had SYNCSORT.  Removing the JCL reference eliminated the
    0C1 ABEND.  11Jun2004.
4.  SAS under unix still has a MEMSIZE parameter in the CONFIG file that
    limits memory; if there is no MEMSIZE parameter, it defaults to 64M,
    but that is NOT a guarantee; it is a request, but if there are many 
    processes running, your SAS job may not be able to get the MEMSIZE  
    that you requested, and there is no clue that the OUT OF MEMORY was 
    due to too small a MEMSIZE, or because there was not enough memory  
    available at the time your job started.  And unix processes hold on 
    to all of their memory, so even though, in MXG's BUILDPDB, it is only
    the first DATA step that needs 64M, that memory is not freed for the
    many smaller steps (DATA and PROC) that follow.                     
3.  SAS 9.1 introduced an error if you have an eight-byte member name in
    a DD statement that is then %INCLUDEd:                              
       //SYSIN DD *                                                     
        %INCLUDE YOURSTUF;                                              
    SAS fails with these errors in the SAS log:                         
       ERROR: CANNOT OPEN %INCLUDE FILE DDNAME                          
    You can remove the member name from your JCL, and change your input 
    source to use  %INCLUDE YOURSTUF(MEMBER00);, but the error is fixed 
    in SAS 9.1 TSLEVEL 1M2, and a hot fix for 9.1 is available for
    download (sbcs prod list.html#012075; for sites using Asian
    Language (DBCS) Support, download from: dbcs prod list.html#012075).
2.  SAS Version 9.1 requires ENTRY=SAS instead of ENTRY=SASHOST, which  
    was the entry point for SAS Version 8.  If you use the old MXGSASV8 
    JCL procedure and/or ENTRY=SASHOST with SAS Version 9.1+, your job  
    will die with an 0C4 ABEND with Reason Code 00000011.  Instead, use 
    the MXGSASV9 JCL Procedure example with V9+. 27Apr2004.             
    Apr 2007: Also, the V9 equivalent of V8 entry SASXA1 is SASB
    (Non-LPA bundle configuration), and the V9 equivalent of V8
    SASXAL is SASLPA (LPA bundle configuration).
 1.            Impact of BUFNO option for SAS data sets                 
Diane Eppestine and Jack Hudnall of SBC read the recommendation in a    
book published by SAS Institute, "Tuning SAS Applications in the OS/390 
and z/OS Environments" by Michael A. Raithel.  That book recommended    
BUFNO=10, so they ran BUILDPDB to measure the impact of changing BUFNO.
MXG Defaults have been BUFNO=2 and BLKSIZE=27648 (Half Track), as that  
moves one track of data per SSCH, which my 1984 paper in ACHAP42 found  
was optimal or near-optimal for all sequential I/O operations.          
While SAS defaults are now BUFNO=3 and BLKSIZE=6144, they chose to compare
the MXG Default BUFNO=2 with the proposed BUFNO=10, with BLKSIZEs of    
both 6144 and 27648 (and found one run with BLKSIZE=MAX ended up with   
actual BLKSIZE=4096, because SAS does not yet support superblocking):   
JOB    BLKSIZE  BUFNO    EXCP     IOTM     VIRT      CPU      ELAPS     
                         DASD     DASD     SIZE       SU      TIME      
                                 mm:ss                        MM:SS     
B        27648   10       8672   02:22.4    89M    4581623    10:31     
A        27648    2      14102   02:30.0    79M    4604366    10:08     
D        6144    10      16327   02:50.4    78M    4550875    10:25     
E        4096    10      18228   02:55.6    77M    4567742    11:21     
C        6144     2      29079   03:05.4    71M    4553833    11:04     
          PDB    BLKS    CYLS      WORK    BLKS   CYLS     DATA BYTES   
B        27648    2      394       27648    2      599        276,480   
A        27648    2      394       27648    2      592         52,296   
D         6144    8      444        6144    8      663         61,440   
E         4096   12      444        4096   12      661         40,960   
C         6144    8      444        6144    8      657         12,288   
The comparisons above are sorted by IOTMDASD (I/O Connect Time).
Note that this also sorts them by EXCPDASD, Region Size, and
almost by elapsed time; but especially, they are also sorted by the
number of data-bytes per SSCH, which I still believe is the key.
The half-track runs (2:22 and 2:30) took 28-43 secs (20%-30%) less I/O  
time than the runs with smaller blksize (2:50,2:55,3:05).  And between  
half-track runs, increasing BUFNO from 2 to 10 saved only 8 I/O seconds;
it did also reduce the EXCP count from 14,102 to 8,672.  All of the
smaller BLKSIZE runs had appreciably more EXCP counts.                  
I was apprehensive that increasing BUFNO from 2 to 10 with half-track   
would increase the Virtual Storage (REGION) size, and it did, from 79MB 
with MXG's default to 89MB with BUFNO=10, as every output SAS dataset   
required additional virtual storage for those extra buffers.  However,  
10MB more REGION is not really a resource of concern at most sites, and 
if REGION=0 is specified, you don't have to change your Region Size!
The CPU Service Units are not quite in descending order, but the total  
difference from maximum to minimum (4604366-4550875) is only 1 percent, 
so changing BUFNO or BLKSIZE has minimal impact on the CPU time.
The last table shows the Size of the Output PDB and WORK libraries, and 
reaffirms that half-track blksize reduces the size of those libraries.  
In summary, the MXG Half-Track BLKSIZE with BUFNO=2 still seems a wise
choice; using BUFNO=10 saved very little I/O Connect Time, with no CPU
time savings.  It did reduce the EXCP count by 40%, but, especially
with changing BLKSIZEs, reduced EXCP counts result from reduced numbers
of blocks in the SAS data sets; the absence of any IOTM savings shows
there was no real saving in the amount of I/O that used your channels
and disk devices.
But then Chuck Hopf benchmarked a larger SMF file (3275MB); his results 
show similar modest reductions, except for EXCP counts:                 
       BUFNO   Elapsed    CPU     EXCPTOTL    IOTMTOTL  REGION          
                mm:ss     mm:ss                mm:ss      MB            
         2      38:52     18:40    113,824     12:36     79M            
        10      34:59     18:27     78,331     12:24     88M            
Because the 10MB increase in REGION size could cause a perfectly running
BUILDPDB to unnecessarily ABEND, I chose to leave the MXG default of
BUFNO=2 in the CONFIGV8 member, but you are free to change it!
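If you do want to experiment at your site, the default can be
overridden for a single job without editing CONFIGV8; a sketch:

```sas
/* Override the MXG defaults for this execution only: */
OPTIONS BUFNO=10 BLKSIZE=27648;
```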
VII. CICS Technical Notes.                                              
VIII. Windows NT Technical Notes.                                       
IX.   Incompatibilities and Installation of MXG 22.02.                  
 1. Incompatibilities introduced in MXG 22.01 (since MXG 21.21):        
    See CHANGES.                                                        
 2. Installation and re-installation procedures are described in detail 
    in member INSTALL (which also lists common Error/Warning messages a 
    new user might encounter), and sample JCL is in member JCLINSTL.    
X.    Online Documentation of MXG Software.                             
    MXG Documentation is now described in member DOCUMENT.              
XI.   Changes Log                                                       
--------------------------Changes Log---------------------------------  
 You MUST read each Change description to determine if a Change will    
 impact your site. All changes have been made in this MXG Library.      
 Member CHANGES always identifies the actual version and release of     
 MXG Software that is contained in that library.                        
 The CHANGES selection on our homepage
 is always the most current information on MXG Software status,         
 and is frequently updated.                                             
 Important changes are also posted to the MXG-L ListServer, which is    
 also described by a selection on the homepage.  Please subscribe.      
 The actual code implementation of some changes in MXG SOURCLIB may be  
 different than described in the change text (which might have printed  
 only the critical part of the correction that needs to be made by users).
 Scan each source member named in any impacting change for any comments 
 at the beginning of the member for additional documentation, since the 
 documentation of new datasets, variables, validation status, and notes,
 are often found in comments in the source members.                     
Alphabetical list of important changes after MXG 20.20 now in MXG 21.xx:
  Member   Change    Description                                        
  See Member CHANGES or CHANGESS in your MXG Source Library, or         
  on the homepage                                          
Inverse chronological list of all Changes:                              
Changes 22.yyy thru 22.001 are contained in member CHANGES.