****************NEWSLETTER SIXTEEN**************************************
             MXG NEWSLETTER NUMBER SIXTEEN  February 14, 1990           
Technical Newsletter for Users of MXG :  Merrill's Expanded Guide to CPE
                         TABLE OF CONTENTS                              
I.   MXG SOFTWARE VERSION 7.7 ANNOUNCEMENT.           pages   2 thru   8
  1. Production MXG Version 7.7 enhancements.                   page   2
  2. Installation sizing and instructions for MXG 7.7.          page   6
  3. Enhancements planned for the next version of MXG.          page   7
  4. Future considerations for MXG enhancements.                page   8
II.  TECHNICAL NOTES                                  pages   9 thru  18
  1. MVS/ESA changes decrease the RMF CPU Capture Ratio.        page   9
  2. MVS PTFs                                                   page  10
  3. MVS Technical Notes                                        page  10
     a. CPU cost of page movement to/from ESTORE.                       
     b. Additional notes on TYPE72 variable ACTFRMTM.                   
     c. Impact on BUILDPDB of the PROC SYNCSORT product.                
     d. SRM control in LPAR PR/SM environment.                          
     e. Conjecture on PR/SM Dedicated Partition timings.                
     f. Memory measurement reference.                                   
     g. No EXCP data for type 30 subtypes 4 and 5 from STCs.            
     h. PR/SM and LPAR considerations.                                  
     i. ESTORE and MAXMPL notes.                                        
     j. Sequencing of task startup for RESOLVE and RMF.                 
     k. Upper limit on QSAM/BSAM buffers.                               
     l. Parallel mount impact on MXGTMNT measured duration.             
     m. 3480 tape cartridge statistics.                                 
     n. Cost of initializing MXG Trend data base                        
  4. VM/XA Performance Note on DSPSLICE value.                  page  15
  5. SAS System Technical Notes                                 page  16
     a. SAS technique to locate data value location in INPUT.           
     b. MXG Compatibility with SAS Version 6.06+.                       
     c. Timestamp truncation with incorrect LENGTH value.                
III. CHANGE LOG                                       pages  19 thru  42
     Changes through Change 7.243 to 7.166                              
 IV. SRM CPU parameters for MPL Control In LPAR       pages  43 thru  45
     Reprint (with permission of IBM Corporation) of this               
      fine article by Richard Armstrong, originally made                
      available as Washington Systems Center FLASH 8923,                
      June, 1989.                                                       
I.   MXG SOFTWARE VERSION 7.7 ANNOUNCEMENT.                             
  1. Production MXG Version 7.7 enhancements.                           
 The Production Version 7.7 of MXG was shipped during the last week of  
 February, 1990, to all supported MXG sites.                            
  MXG Version 7.7       dated February  14, 1990, thru Change 7.243.    
  (Newsletter SIXTEEN   dated February  14, 1990, thru Change 7.243).   
  (PRERELEASE MXG 7.6   dated January   29, 1990, thru Change 7.228).   
  (PRERELEASE MXG 7.5   dated December  22, 1989, thru Change 7.207).   
  (PRERELEASE MXG 7.4   dated November  25, 1989, only for an ESP).     
  (PRERELEASE MXG 7.3   dated November  25, 1989, thru Change 7.190).   
  (Newsletter FIFTEEN   dated November  11, 1989, thru Change 7.165).   
  (PRERELEASE MXG 7.2   dated October   19, 1989, thru Change 7.161).   
  (BETA test  MXG 7.1   dated September 14, 1989, thru Change 7.140).   
  (BETA test  MXG 7.0   dated May       31, 1989  thru Change 7.098).   
  (Newsletter FOURTEEN  dated April      1, 1989, thru Change 7.035).   
   Previous Version 6.6 dated January   20, 1989, had 206 changes.      
 Operating System and Subsystem Version.Releases supported in MXG 7.7:  
  MVS/ESA thru 3.1.3 and DFP thru 3.2.                                  
  CICS thru 2.1.                                                        
  DB2 thru 2.2.0.                                                       
  RACF thru 1.9.                                                        
  VM/370 thru Release 6 and HPO thru Release 5 (no known problems).     
  VM/XA SP thru 2.1 plus later PTFs.                                    
  IMS 2.1 and IMS 2.2 (see special note for 1.3).                       
 New Devices Recognized and/or supported in MXG 7.7:                    
  3490 and 9348 tape drives (cart and reel respectively) recognized.    
  3480/3490 Tape Compression (IDRC) recognized.                         
  3390 DASD devices recognized and counted in EXCP3390/IOTM3390.        
 Most important new enhancement in MXG Version 7.7:                     
  MXGTMNT, the long awaited MXG Tape Mount Monitor, captures how long   
   operators take to mount tapes, and identifies which job mounted what 
   tape on which tape drive when, with no system modifications. The     
   monitor wakes up every 2 seconds to scan the UCB chain, and writes   
   a User SMF record when each tape mount is satisfied.                 
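 The monitor's polling design can be sketched in a few lines of Python
 (a hypothetical simulation only; the real MXGTMNT is an MVS task that
 scans the UCB chain and writes a user SMF record, and names such as
 monitor and snapshots here are purely illustrative):

```python
# Sketch of the MXGTMNT polling approach: every POLL_INTERVAL the monitor
# scans all tape drives; a mount event begins when a drive first shows a
# pending mount and ends (one record written) when the mount is satisfied.
# The snapshot feed stands in for the real UCB-chain scan.

POLL_INTERVAL = 2  # seconds, as described above

def monitor(snapshots):
    """snapshots: list of dicts {drive: 'PENDING'|'MOUNTED'|'IDLE'},
    one dict per 2-second poll.  Returns (drive, duration) records."""
    pending_since = {}
    records = []
    for tick, snap in enumerate(snapshots):
        now = tick * POLL_INTERVAL
        for drive, state in snap.items():
            if state == 'PENDING' and drive not in pending_since:
                pending_since[drive] = now          # mount first seen
            elif state == 'MOUNTED' and drive in pending_since:
                records.append((drive, now - pending_since.pop(drive)))
    return records
```

 Because the poll is every 2 seconds, measured durations are quantized to
 2-second boundaries; a mount satisfied between polls is charged to the
 poll that observed it.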
 Alphabetic (by acronym, of course) list of major changes, enhancements,
 and new products/versions now included and supported in MXG 7.7:       
  ACF2 'ARE' data sets captured.                                        
  AION Development System SMF record from AION.                         
  VSAM activity included with non-VSAM in ANALDSET I/O analysis by job. 
  ARBITER SMF records from Tangram product.                             
  CMF and RMF records can now be differentiated.                        
  CPU Serials added to RMFINTRV.                                        
  DB2 Audit Class trace type 102 records.                               
  DB2 SQL text ("what he typed in") is captured.                        
  DB2PM-like reporting enhancements for DB2 2.2 in ANALDB2R.            
  DFP 3.2 TYPE42 SMF record.                                            
  DOS/VSE Power 3.1.2.                                                  
  FILEAID SMF record from COMPUWARE product.                            
  GTF format DB2 trace data supported.                                  
  ICF TYPE6156 VOLSER capture and enhancement.                          
  IDRC (Improved Data Recording Compression) for 3480/3490 tapes.       
  IMS 1.3 transaction processing: see Change 7.075.                     
  IMS 2.0 and IMS 2.1 response measurement corrections.                 
  JES2 mods to capture SYSOUT release timestamp in type 6 SMF record.   
  MDFTRACK SMF record from Amdahl MDF environment.                      
  MVS/ESA 3.1.3 SMF and RMF records.                                    
  MVS/ESA CPU timings in step records.                                  
  NAF SMF record from Candle's Network Accounting Facility.             
  NETSPY Release 3.2 SMF record.                                        
  NETVIEW TYPE37 Release 2 Hardware Monitor External Log Record.        
  NETVIEW TYPE39 Release 2 Session Monitor External Log Record.         
  OMEGAMON Command Audit SMF record from Candle.                        
  PDB.STEPS contains STEP accounting fields (if any).                   
  PDSMAN/XP Release 6 SMF record.                                       
  RACF 1.9 (based on SMF 3.1.3 manual) SMF records.                     
  RMF Monitor II Type 79 SMF record (fourteen subtypes).                
  RMF Monitor III VSAM data set records and TYPE72MN enhancement.       
  ROSCOE 5.6 support for variable number of complexity levels.          
  STOPX37 Release 3.3 SMF record from Empact product.                   
  STX Release 1.0+ supported.                                           
  TLMS (Tape Library Management System) catalog records.                
  TMON/MVS data records (non-SMF) from Landmark's Monitor for MVS.      
  TMS (Tape Management System) catalog records.                         
  TPX Release 2.0.0 SMF supported.                                      
  TREND data base validation and enhanced report examples.              
  TSO/MON Version 5.2 SMF record from Legent product.                   
  VM/XA SP 2.1 plus PTFs, and protection for VCOLLECT environment.      
  VMXGVTOC enhanced for VSAM with 128 extents and DASD VTOC data.       
  VSAM TYPE64 PTF to add important data for HiperBatch aid.             
  Validation and cleanup of all reported MXG 6.6 errors.                
 Pre-release by pre-release delta of changes (for testers, thanks!):    
  Significant changes added in 7.7 that were not in 7.6:                
   Support for Release 3.3 of Empact's STOPX37 product.                 
   IMS 2.2 algorithm enhancements improve IMS log response captured.    
   ANALDB2R updated for DB2 2.2 (most reports, except DDF).             
   Lots of final cleanup from pre-release testing.                      
  Significant changes added in 7.6 that were not in 7.5:                
   Support for all 14 subtypes of the type 79 RMF Monitor II record.    
   Support for VM/XA SP 2.1 plus new PTFs, and integrity enhancements.  
   Enhanced decoding of TYPE6156 ICF Catalog record to add VOLSER(s).   
   Support for Candle's Network Accounting Facility SMF records.        
   Support for Legent's TSO/MON Version 5.2 added.                      
   Support for Landmark's TMON/MVS product.                             
  Significant changes added in 7.5 that were not in 7.3:                
   Support for MVS/ESA 3.1.3, including the major enhancements in type  
    6 SMF record and a new type 42 DFP and 83 RACF SMF records.         
   Support for the VSAM PTF changing SMF 64 record (for HiperBatch Aid) 
   Support for DB2 Release 2.2.0 changes to 100, 101, and 102 records.  
   Support for Netspy Release 3.2.                                      
   Support for PDSMAN/XP Release 6.                                     
   Validation of NETVIEW type 37 SMF record modem section.              
   Correction of JES3 type 6 (prerelease only) error.                   
  Significant changes added in 7.3 that were not in 7.2 or Newsletter 15
   DB2 I/O, Locking, and Record Trace reports added to ANALDB2R.        
   MVS/XA & ESA Pathing configuration and activity report in ANALPATH.  
   3390 DASD device support is official (support is already in 7.2).    
   RMDS pagecounts are fixed.                                           
   D3330DRV (mountable drive count) eliminated from 30s and PDB.        
   GTF format DB2 type 102 trace data now correct.                      
   VMXGVTOC cleanup (deletes to clean WORK, last track counting, etc.)  
   TPX Release 2.0.0 new release supported,                             
   STX Release 1.0+ new product supported.                              
 Prior changes added in 7.2 and earlier are listed in Newsletter 15.    
 Documentation of enhancements in MXG Version 7.7.                      
  Details on enhancements and their impact will usually be found in the 
 text of the actual Change description itself.                          
 While Changes can and should be read in the printed Newsletter, it is  
 very helpful to use SPF BROWSE/EDIT to scan the online documentation   
 available in member CHANGES of the MXG Source Library interactively.   
 Member CHANGES contains the current MXG version status and changes that
 have been installed in that software. Member(s) CHANGEnn are copies of 
 member CHANGES as it stood when MXG version nn was created.            
 In addition, the Change Number lists the member(s) affected by that    
 change. Browse those members, especially the ANAL...., IMAC...., and   
 VMAC.... members for further documentation and usage notes.            
 Member NEWSLTRS contains the text of all newsletters (including the    
 newsletter that accompanied that MXG release). You can usually search  
 on product name or acronym to find the MXG acronym and member names    
 that document, support, and process that product's data records.       
 Member DOCVER07 contains abbreviated Chapter FORTY style documentation 
 of just those variables and datasets that were added by MXG Version 7.7
 since MXG Version 6.6, the "delta-documentation".  For example, you    
  could browse DOCVER07 for the TYPE70 thru TYPE79 (RMF) dataset       
 descriptions, to learn what new information IBM has added to RMF data  
 records. There is a DOCVERnn member in the library for each version.   
 Penultimately, member DOCVER contains abbreviated Chapter FORTY format 
 documentation of ALL 22,058 or so variables from the 679 or so MXG data
 sets that can be created by MXG Software Version 7.7 (alphabetically by
 data set name and variable name).                                      
 Finally, MXG is a source distributed system, so you can often find your
 answer by BROWSE/EDIT of the source member, especially the VMAC...'s   
 that actually create the data set, or ANAL....'s that create the MXG   
  report.  In many instances, the MXG Variable is the IBM or Vendor's    
 documented field name. In other cases, the IBM field name is carried   
 as a comment beside the MXG variable that contains that information.   
   In looking thru MXG library members online, try these SPF techniques:
      Make a COPY of member CHANGES (don't use the actual member, because
      you will use an EDIT command, which can delete/change data).       
     For example:                                                       
     EDIT a non-existent new member and COPY in (for example) CHANGES.  
     On the command line, enter EXCLUDE ALL.                            
     On the command line, enter FIND "VM/XA" ALL , for example.         
      which will result in your display containing only those           
      lines that contain VM/XA.                                         
     You can then un-exclude the lines before and after each occurrence 
      by typing L5 or F3 on the line number of the excluded lines to now
      display or un-exclude the Last 5 or First 3 lines.                
     When done, CANCEL from the command line & nothing will be written. 
     The use of SPF commands EXCLUDE ALL and FIND ALL is a major tool in
       creating and maintaining MXG software.  It's especially useful in 
      scanning through a large number of lines of text, like MXG CHANGES
      and NEWSLTRS members. Unfortunately, it is only available as a    
      subcommand of EDIT; you cannot EXCLUDE under BROWSE.              
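   For readers without SPF handy, the effect of EXCLUDE ALL followed by
   FIND ... ALL, with the L5/F3 line commands un-excluding context lines,
   can be approximated in Python (a hypothetical sketch; the function and
   parameter names are ours, not SPF's):

```python
def find_with_context(lines, target, last=0, first=0):
    """Mimic SPF EXCLUDE ALL then FIND target ALL: keep only lines that
    contain target, optionally un-excluding the 'last' lines before and
    the 'first' lines after each hit (the L5/F3 line commands)."""
    keep = set()
    for i, line in enumerate(lines):
        if target in line:
            # un-exclude the hit plus its requested context window
            keep.update(range(max(0, i - last), min(len(lines), i + first + 1)))
    return [lines[i] for i in sorted(keep)]
```

   For example, find_with_context(changes, 'VM/XA', last=5, first=3) shows
   each VM/XA line with the last 5 and first 3 surrounding lines, much as
   the SPF technique does.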
  2. Installation sizing and instructions for MXG 7.7.                  
  The MXG Installation instructions are found in Chapter 32 of the MXG  
  Supplement for version replacement as well as new install.            
  The MXG tape still is distributed as a Non-Labelled (NL) tape with a  
  single file, DCB=RECFM=FB,LRECL=80,BLKSIZE=6160, containing an        
  unloaded Partitioned Data Set containing 1100 members (and about      
   220,000 lines of source) in IEBUPDTE format. Under CMS this format    
   is processed by the TAPPDS command instead of the MVS IEBUPDTE program.
  At 303 feet of 6250BPI tape, MXG no longer fits on those little       
  half-full minireels with 300 feet of tape. (They were only $6.50 each 
  when a full 600 foot reel was $11.75, and in 1984 MXG Version 1 was   
  only 99 feet long.)  Now, for those of you who are still in those dark
  ages and require MXG on 3420 tape reels, MXG arrives on that same 7   
  inch mini reel, but now it's full (and 600 feet now costs $6.50!).    
    If you received a mini-reel instead of a 3480 cartridge, please     
     let us know as soon as you can accept 3480 tape cartridges.        
    We can create about 250-300 cartridges per hour, but only about 100 
     of the reels per hour, and they have more errors!  And cartridges  
     are only $5.75!                                                    
    Judy still holds the world land speed record of 11 seconds per 3420 
     tape mount building MXG Version 4.                                 
  The MXG Version 7.7 SOURCLIB requires SPACE=(CYL,(30,1,199)) with     
  a DCB attribute of DCB=RECFM=FB,LRECL=80,BLKSIZE=23440.               
  The MXG Version 7.7 SASLIB format library (built by the first step    
  of JCLTEST) requires SPACE=(CYL,(2,1,99)) and the blocksize is        
  set when JCLTEST's first step is run.                                 
  See the comments below in the Changes log (or in member CHANGES)      
   for compatibility issues with installation tailoring IMAC.... members.
  Pre-releases of MXG 7.7 have been installed by over 400 sites this    
  year, and no real problems in installation have been encountered.     
  The major portions of all the important code have been running in a   
  production status at many sites for months.  MXG 7.7 has been better  
  tested than any of the preceding 28 releases, but as it must always   
  be, it's up to you to validate with your own data.                    
  3. Enhancements planned for the next version(s) of MXG.               
  These items were under consideration but did not get completed in time
  for MXG Version 7.7.  We anticipate a pre-release by mid-summer 1990  
  (for Landmark's Release 8 and IBM's CICS/ESA 3.1 Version) by product  
  availability. The next newsletter will be in June or July, 1990.      
     IBM's CICS/ESA 3.1 promises major changes in CICS monitoring, with 
     announced availability in June of 1990.                            
     Landmark's Monitor for CICS Release 8. Data was not available for  
     testing (the product is not yet available), but will be soon.      
     VMACHSM SMF record is incomplete (Change 7.096).                   
     Oracle product SMF Record.                                         
     WSF product SMF Record.                                            
     LLA IBM-documented, User SMF Record.                               
     VAX/VMS Accounting and Performance data processing. This support   
     will require mainframe SAS Version 6.06 or later, and will be      
     supported for execution only in MVS, CMS or DOS versions of SAS,   
     operating on VAX/VMS data that has been ported to an IBM system.   
     Cray COS (and eventually UNICOS) accounting and performance data   
     processing. To be implemented as discussed for VAX, with the Cray  
     data to be ported to an IBM system.                                
     JES3 TYPE25 mounts merge with MXGTMNT mounts.                      
     Amdahl CPUID of 'C05157' (Character) instead of expected PK3.      
     BUILDPDB enhancement to add ACCOUNTn to TYPE30_V Interval data.    
     AIM enhancements.                                                  
     EPILOG CICS CL1000 Versions 430, 440, and 450 have not been        
       personally verified, but several users report that XXACEPIL does 
       not fail with either 440 or 450. (For 430, one site suggested to 
       set the offset to -4 and MXG processed the records.)  I simply   
       did not have time to install the Candle decompression algorithm  
       so that I could test the compressed data I received!             
     IMS/ESA 3.1 Validation.                                            
      IBM's incorrect storage of values in the TYPEVM VM/ACCOUNT variable
        PWCOUNT (hex values are F0F8, F0F9, F0C1, i.e., EBCDIC characters
        '08', '09', '0A', for actual data counts of 8, 9, and 10!) has   
        not been circumvented.                                           
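      The broken encoding above stores the count as the EBCDIC characters
      of its hexadecimal value (X'F0C1' is the characters '0A', i.e. 10).
      A possible circumvention, NOT in MXG 7.7, is to decode the two
      characters and interpret them as a hex number (hypothetical sketch;
      decode_pwcount is our name):

```python
# PWCOUNT arrives as two EBCDIC characters holding the hex value of the
# count: X'F0F8' = '08' = 8, X'F0F9' = '09' = 9, X'F0C1' = '0A' = 10.
def decode_pwcount(raw):
    """raw: two EBCDIC bytes, e.g. b'\xF0\xC1'."""
    chars = raw.decode('cp500')   # cp500 is a standard EBCDIC codec
    return int(chars, 16)         # treat the characters as a hex number
```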
  4. Future considerations for MXG enhancements.                        
      EPILOG/MVS product records: the code, documentation, etc., that was
        to be provided by Candle hasn't been.                            
      IMACACCT to control TASBFLDn variables (renamed to ACCOUNTn) and   
      the JES3 type 26 ACCOUNT1 lengths.                                
     Spool Transfer interaction (still) with BUILDPDB.                  
     ACCOUNTn added in ANALDSET analysis.                               
     TYPE30_6 deaccumulation.                                           
     VVDS analysis.                                                     
      Operational Product Control (OPC) data.                            
     Problem Change Management.                                         
     JES3 JMF JES Measurement Facility SMF type 84 record.              
     VM/XA VMPRF product reports may be replicated in MXG.              
     NETVIEW File Transfer Program Release 1.0 "User" SMF Record.       
     FACOM PDL record processing.                                       
     TPNS activity log.                                                 
     IMS Fastpath records.                                              
     Circumvent CMS limitation on VBS blocksize.                        
     FACOM PDL (not PDLF, which is supported) enhancement.              
     DAILY/WEEKLY/MONTHLY/TRENDING PDB sample JCL and code.             
     ASUMTSO for TSO/MON like ASUMCICS for CICS.                        
     LENGTH= protect for broken RMF Record.                             
     MXGMENU macro variable names.                                      
II.  TECHNICAL NOTES                                                    
  1. MVS/ESA changes decrease the RMF CPU Capture Ratio.                 
    a. The following chart was published in MXG NEWSLETTER FIFTEEN, but 
       the phrase "Reportedly to be captured in ESA 3.1.3" turns out to 
       be incorrect. The actual CPU captured in TYPE72 records with     
       MVS/ESA 3.1.3 looks like this:                                   
TYPE 70 (RMF-captured CPU measurement of real or logical CPUs):         
        Elapsed Interval (Duration multiplied by number of CPUs)        
     ______________________________________________________ ---------   
                          CPU                                  CPU      
                        Executing                            Waiting    
     ________ ________ _________ ________ ________ ________             
      CPU 1    CPU 2    CPU 3     CPU 4    CPU 5    CPU 6               
       Busy     Busy     Busy      Busy     Busy     Busy               
     Total Hardware CPU Busy Time (from Type 70 "non-wait")             
TYPE 72 (summing only the control performance group's data) contains:   
     _____ _____ ------------------------------------------             
      RMF   RMF                  Uncaptured                             
      TCB   SRB                  RMF CPU Busy                           
      CPU   CPU                  pre-ESA 3.1.3                          
     ___________ _____________________________ ------------             
        CPUTM       Not yet captured in         Uncaptured              
      pre 3.1.3     RMF under ESA 3.1.3          RMF CPU                
                    but will be (some day).     with 3.1.3              
TYPE 30 (if all work started and ended in the interval) contains:       
                              Never captured                            
                              before (IIP,RCT)                          
                              or new (HPT).                             
     _____ _____ _____ _____ _____ _____ _____ ------------             
      SMF   SMF   SMF   SMF   SMF   SMF   SMF   Uncaptured              
      TCB   SRB   TCB   SRB   IIP   RCT   HPT      SMF                  
       CPU   CPU   CPU   CPU   CPU   CPU   CPU      CPU                  
      CPUTM = Sum of all CPU times in TYPE30                            
       The facts are, CPU capture ratio has significantly declined in   
      MVS/ESA 3.1.3 because Hiperspace CPU, Second Level Interrupt      
      Handler CPU, and Region Control Task CPU time (all of which       
      ARE captured in type 30 data) are not captured in type 72 data.   
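       The capture ratio arithmetic behind these charts is simple
       division; a hypothetical Python sketch (the numbers below are
       illustrative, not measured values):

```python
def capture_ratio(type72_cpu_sec, type70_busy_sec):
    """CPU capture ratio: RMF-captured CPU (TYPE72 TCB+SRB) divided by
    total hardware CPU busy (TYPE70 non-wait time)."""
    return type72_cpu_sec / type70_busy_sec

# If Hiperspace, SLIH, and RCT CPU time is captured in type 30 but not
# in type 72, the ratio declines although hardware busy is unchanged.
before = capture_ratio(850.0, 1000.0)   # illustrative pre-3.1.3 interval
after  = capture_ratio(800.0, 1000.0)   # same busy, less TYPE72 capture
```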
  2. MVS PTFs which may affect what IBM puts in your data records:      
     APAR PL31497 (PTF UL37077) corrects an error in DB2 IMS transactions
     that are executed under threads having the 'reuse' option (the APAR 
     text describes exactly what is captured where).                     
    APAR OY26208 (PTF UY44574,575,576,757,758) correct invalid CPU      
     Time values in step records after 913 ABEND (and other 9xx abends). 
    APAR OY2m372 (no PTF as yet) reports invalid JOB and READTIME in    
    type 6 records under MVS/ESA 3.1.1 with JES 2.                      
    APAR OY21221 (PTF UY37733) corrects timestamps in the JES2 Type     
    26 SMF Purge record when Spool Transferred jobs are re-loaded.      
    APAR OY19327 (PTF UY33328) corrected a bug in RSM without ESTORE.   
    AFQ kept growing, no page steal, system go boom.                    
    APAR OY21181 is an open problem with no PCU in RMF I/O Report and   
    zeroes in many fields in type 78 records.                           
    APAR OY24606 corrects problem with type 32 SMF record.              
    APAR OY16896 corrects SMF6OUT/OUTDEVCE to now contain the ACF       
    VTAM LUNAME.                                                        
     APAR OY20265 reports that SMF6PGE/PAGECNT does not correctly        
     approximate the page count for an interrupted or restarted printer. 
     APAR OY12804 reports that SMF6ROUT/ROUTE is wrong if $TJ to multiple
     destinations is issued.                                             
    APAR OX31370 SMF6LNR/NRLINES is wrong if 3800 is forward spaced.    
    APAR OY21704 PSF does not update SMF6CPS/COPIES.                    
  3. MVS Technical notes.                                               
    a. CPU cost of page movement to/from ESTORE.                        
    Bill Mullen has presented some measurements of CPU costs of movement
    of pages in/out of HIPERSPACE. The CPU time per page moved is       
    clearly a function of the number of pages moved in each move        
    operation. At one page per move, 80 microsecs (usec) were measured, 
    but by five pages per move the cost per page is down to 45 usec per 
    page, and above 10 pages per move the cost per page moved decreases 
    from about 38 usec (at 10 ppm) to about 35 usec (at 200 ppm).  This 
    is significantly less than the typical value of 75 usec per page    
    move which has been used in some analysis of expanded storage       
    movement costs!                                                     
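     The measured points above can be used for rough cost estimates; a
     hypothetical Python sketch (move_cost_usec is our name, and using the
     nearest measured batch size at or below the requested one is a
     simplifying assumption, not part of the measurements):

```python
# Measured CPU cost per page moved to/from expanded storage, keyed by
# pages moved per move operation, as reported above.
MEASURED_USEC_PER_PAGE = {1: 80, 5: 45, 10: 38, 200: 35}

def move_cost_usec(pages_per_move, total_pages):
    """Estimate CPU microseconds to move total_pages, using the cost of
    the largest measured batch size not exceeding pages_per_move."""
    best = max(s for s in sorted(MEASURED_USEC_PER_PAGE) if s <= pages_per_move)
    return MEASURED_USEC_PER_PAGE[best] * total_pages
```

     Moving 1000 pages one at a time costs an estimated 80,000 usec, but
     only 38,000 usec at 10 pages per move, both well under the flat
     75 usec/page assumption criticized above.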
    b. Additional notes on TYPE72 ACTFRMTM.                             
    Newsletter THIRTEEN discussed the Active Frame Time variable        
    ACTFRMTM, measured at the performance group period level in TYPE72, 
    which is the Residency Duration times the sum of the memory frames  
    in Central Store plus Expanded Store that were allocated to ASIDs in
     each performance group.  (Two counters are maintained by the SRM,   
     but only their sum is multiplied by residency and written in the    
     TYPE72 record.)  Not included in these frame counts are frames in   
     the Nucleus and Common Area (since they are not in the address      
     space), and, of especial importance, logically swapped frames, which
     are NOT counted in ACTFRMTM (and that can be a large amount of      
     memory, especially for a TSO performance group with many logically  
     swapped ASIDs!).                                                    
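     The ACTFRMTM computation described above reduces to one
     multiplication; a hypothetical sketch (function and argument names
     are ours):

```python
def actfrmtm(duration_sec, central_frames, expanded_frames):
    """TYPE72 Active Frame Time as described: residency duration times
    the sum of central plus expanded frames allocated to the ASIDs in
    the performance group period.  Nucleus, common-area, and logically
    swapped frames are already excluded from the counters."""
    return duration_sec * (central_frames + expanded_frames)
```

     For example, 800 frames held for a 900-second residency yields an
     ACTFRMTM of 720,000 frame-seconds.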
     c. Impact on BUILDPDB of the PROC SYNCSORT product.                
    Dan Kaberon compared CPU timings of BUILDPDB executed with both the 
    Syncsort standard PROC SORT and with their PROC SYNCSORT product.   
    BUILDPDB invokes a sort 24 times, and in this case built a PDB of   
    over 70 MB. Individual sort executions with PROC SYNCSORT do show   
    reduced CPU TCB timings (TYPE74=24 MB, from 4.68 to 1.46 sec., and  
    TYPE30_4=18MB, from 3.48 to 1.71 sec). But sorting itself is a very 
    small part of BUILDPDB.  The total CPU (TCB+SRB) to build this large
    PDB (with 41,436 TYPE30_4 steps, 77,477 TYPE74 device records) was  
    reduced by only 16.44 seconds, from 492.78 to 476.34 seconds, using 
    PROC SYNCSORT, a savings of only 3.3% of the total CPU TCB+SRB      
    time.  In the original PROC SORT run of BUILDPDB, log messages      
    reported CPU TCB time for all sorts totaled only 19.29 seconds, or  
    4% of all TCB!  Even a big savings in sort costs will have only a   
    small savings if sorting is only a small part of the total.  (The   
     elapsed time with PROC SYNCSORT dropped from 27:51 to 25:11 mm:ss,  
     but the variability of elapsed time due to paging, swapping, tape   
     mounts, and contention for devices and dispatch makes a single      
     comparison inconclusive.)                                           
     Is BUILDPDB representative of your other workloads' use of sorting? 
    One prime time (11 hours) snapshot at a site with both a 3090-200   
    and a 3090-300 showed 3,550 sorts were issued that used 19 CPU hours
    while their step records totaled 47 CPU hours, showing that over 40%
    of their processor time was consumed in sorting!  Other claims of   
    25% to 33% of total CPU hours have been seen in the literature.     
    I'd like to think it is the elegant design of the BUILDPDB algorithm
    which exploits the power of SAS to perform complex data merges, that
    causes only 4% of BUILDPDB processing to be in sorting!             
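     The arithmetic above is the classic argument that overall savings
     are bounded by the fraction of work improved; a hypothetical sketch
     using the reported numbers (the 3x sort speedup is illustrative):

```python
def overall_savings(total_cpu, sort_cpu, sort_speedup):
    """CPU seconds saved when only the sort portion is sped up; the
    non-sort portion of total_cpu is unchanged."""
    return sort_cpu * (1 - 1 / sort_speedup)

# Reported run: 492.78 total CPU sec, only 19.29 sec (about 4%) in sorts.
# Even a 3x faster sort can save under 13 seconds, the same order as the
# measured 16.44-second (3.3%) reduction.
saved = overall_savings(492.78, 19.29, 3.0)
```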
     d. SRM control in LPAR PR/SM environment.                          
    Richard Armstrong's excellent article, printed by permission of     
    IBM at the end of this newsletter, discusses a workflow reduction   
    because default SRM values for MVS/XA 2.2 and later may be (in an   
    LPAR PR/SM environment) inappropriate. The article is also a fine   
    tutorial on how LPAR PR/SM data is captured. It recommends that     
    RCCCPUP and RCCCPUT both have values of  (100+E),120  with E being  
    the number of engines available to this MVS.                        
     Previous IBM-supplied defaults were 95,98 and 98,100.9, which tell  
     the SRM to reduce the MPL at a CPU busy of 100.9%.                  
    An SRM control value of (100+E) means that a 3090-600 is NOT        
     overcommitted for the CPU resource until the SRM measures a CPU busy
    of over 106%.                                                       
    SRM records a CPU busy of 106% when the CPU was not only actually   
    100% hardware busy, but also that six IN-READY-QUEUE users were not 
    dispatched during the previous SRM interval.                        
     IBM's recommendations simply affirm that your Central Electronic    
    Complex is NOT overcommitted until MORE THAN one IN-READY user PER  
    CPU is being delayed for CPU dispatch.  One Hundred Percent Hardware
    busy, PLUS one waiting user per engine, IS THE DESIGN POINT of MVS! 
    That does not mean you can run your critical workload at 100%       
    hardware busy, but it most certainly does mean that your hardware   
    platform can and should operate at 100% busy during your peak       
    period. Your site MUST segregate critical workloads into domains,   
    and then tell the SRM what's important and what's not, so that in   
    these otherwise lull moments during your peak period, the SRM can   
    find one of these lesser important (to you) workloads to dispatch   
    and thereby soak up what would have been wasted wait time. If your  
    peak hourly processor utilization never exceeds 90% of a $6,000,000 
    machine, you are throwing away $600,000!  To quote Armstrong,       
    "Fundamentally, the MPL is too high if and only if the page fault   
    rate is too high".                                                  
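     The thresholds discussed above can be stated as two one-line
     functions; a hypothetical sketch (function names are ours, and the
     one-percentage-point-per-delayed-user model is our reading of the
     3090-600 example above):

```python
def srm_cpu_busy_pct(hardware_busy_pct, delayed_in_ready_users):
    """SRM 'CPU busy' signal as in the example: hardware busy percent
    plus one point per IN-READY user left undispatched during the
    previous SRM interval."""
    return hardware_busy_pct + delayed_in_ready_users

def mpl_reduction_threshold(engines):
    """Recommended RCCCPUP/RCCCPUT high value of (100+E) for E engines."""
    return 100 + engines
```

     On a six-engine 3090-600, six delayed in-ready users at 100%
     hardware busy yield the 106% SRM busy that first reaches the
     recommended (100+6) threshold.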
      e. Conjecture on PR/SM Dedicated Partition timings.                
    For PR/SM partitions that are Dedicated, a 30 minute elapsed        
    interval with four engines showed RMF DURATM*4 was 7200.016 seconds,
    but PR/SM PDT Partition Dispatch Time was 7191.958 leaving 8.058    
    seconds unaccounted time. Is this the "overhead" for PR/SM          
    management of a dedicated partition? If so, it is only 0.45 percent 
    (less than one half of one percent) of the 30 minute interval.      
    (Shared partitions do not offer any similar instrumentation). This  
    site is conducting more research on this.                           
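The arithmetic above can be verified directly (a quick check, using only the numbers reported by this site):

```python
duratm_x4 = 7200.016            # RMF DURATM * 4 engines, seconds
pdt       = 7191.958            # PR/SM Partition Dispatch Time, seconds
unaccounted = duratm_x4 - pdt   # the unaccounted-for time
pct = 100 * unaccounted / 1800  # percent of the 30-minute interval
print(round(unaccounted, 3), round(pct, 2))   # 8.058 0.45
```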
     f. Memory measurement reference.                                   
    Measurement of real memory requirements by task, workload, etc., is 
    not easy in MVS.  The best discussion of how complicated it can get,
    and how to put the various pieces together from the many RMF/SMF    
    records is the presentation by Gary King that was published in SHARE
    Proceedings (Volume I) from the SHARE 72 meeting (February, 1989),  
    Session O262, pages 448-482.                                        
     g. No EXCP data for type 30 subtypes 4 and 5 from STCs.            
    SMF Type 30 subtypes 4 and 5 for STCs (Started Tasks) do not contain
    an EXCP segment when INTERVAL and NODETAIL are specified in the     
    SMFPRMnn                                                            
    member of SYS1.PARMLIB, even though the EXCP segment does exist in  
    subtypes 2 and 3 (the interval records). This is documented under   
    the DETAIL parameter in Initialization and Tuning (but not mentioned
    in the SMF manual).  What is not documented is that the default     
    options                                                             
    NOINTERVAL and NODETAIL DO create EXCP segments for STCs in subtypes
    4 and 5, the step and job termination records! Thus a site which had
    not specified INTERVAL and had NODETAIL by default, did have EXCP   
    counts in STC step and job termination records, but when the site   
    enabled INTERVAL data, the EXCP counts for all STCs became zero in  
    PDB.JOBS and PDB.STEPS.  The moral: ALWAYS specify DETAIL for STCs  
    IF YOU NEED EXCP COUNTS from subtype 4 and 5 records in PDB.JOBS and
    PDB.STEPS, no matter what INTERVAL/NOINTERVAL parameters specify.   
    But, there ARE also good reasons to specify NODETAIL and use only   
    the PDB.SMFINTRV data, instead of PDB.JOBS/PDB.STEPS for EXCP data: 
    26Apr02 revision to the preceding "ALWAYS specify DETAIL":          
    - IBM Information APAR II07124 discusses why you might need to      
       specify NODETAIL for your STCs:  When DETAIL is specified, the DD
      and EXCP information will be stored in 32K memory blocks in SP230,
      and those blocks (in virtual storage) are kept for the entire life
      of the address space.  For an STC like DBM1, which can have 10,000
      DDs, its SP230 can grow and run out of private area storage, both 
      high and low, requiring a restart of the DB2 system to clear the  
      Sub Pool.  Instead, with NODETAIL, the DD information is only kept
      for each interval record, i.e., in PDB.SMFINTRV, so the data is   
      available in SMF, and DBM1's SubPool 230 does not grow over time, 
      so you don't have to stop and restart your DB2 subsystem.         
    - That APAR also recommends DDCONS=NO, a parameter that was created 
      by IBM as a result of an MXG user's discovery of the problem it   
      corrects, so MXG has always recommended DDCONS=NO be specified.   
    - Specifying NODETAIL for STCs has no direct impact on MXG logic.   
      Your STC observations in PDB.JOBS/PDB.STEPS will have zero for the
      EXCP/IOTM variables, which might affect your chargeback for STCs, 
      but STCs are usually an internal charge; furthermore, only those  
      STCs that are terminated normally (created subtype 4/5) could have
      been billed for STC EXCP counts.  But the EXCP/IOTM variables for 
       STCs will exist in the PDB.SMFINTRV dataset for each interval,   
       even                                                             
      with NODETAIL, as long as INTERVAL is enabled to write subtype 2/3
      records, (and MXG member IMACINTV was enabled to populate TYPE30_V
      and PDB.SMFINTRV datasets).  And with a large value for SPINCNT in
      your MXG member IMACSPIN, dataset PDB.SMFINTRV will contain the   
      ACCOUNTn and SACCTn accounting fields for the STC, so those EXCP  
      counts for your STCs can still be charged back with PDB.SMFINTRV. 
    12May03 addition:                                                   
    - To keep DB2 "up forever", you do NOT have to turn off SMF interval
      recording, which is a mis-reading of that APAR.  As long as both  
      NODETAIL and DDCONS=NO are specified, the NODETAIL eliminates the 
      SP230 growth issue, and DDCONS=NO eliminates the CPU spike, so you
      can continue to write SMF type 30 interval records for your DB2   
      that is designed to "never end".                                  
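For reference, a sketch of the relevant SMFPRMnn controls (keyword syntax should be verified against the SMF manual for your MVS level; this fragment is illustrative only):

```
  SUBSYS(STC,INTERVAL(SMF,SYNC),DETAIL)  /* EXCP segments in 4/5 too  */
  DDCONS(NO)                             /* avoid DD consolidation    */
```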
     h. PR/SM and LPAR considerations.                                  
    PR/SM and LPAR considerations are extremely well described in the   
    WSC FLASH 8923 in this newsletter.  In addition, these other entries
    in IBM's INFO/SYS will be of interest to sites executing in these   
    "Multi-Image" environments (either MVS with PR/SM, or VM/XA).       
    Examine these entries if you are in these environments:             
          Q399071         Q423087          Q399108                      
          Q458004         Q424130          Q395069                      
          Q410851         Q433673                                       
     i. ESTORE and MAXMPL notes.                                        
    I have a scratch pad note of collected facts from somewhere, but I  
    can't remember whose presentation or where!  I hesitated to print   
    them, but they sound too authoritative to be wrong and too useful   
    to not be shared:                                                   
    For a 3090 ESTORE machine when UIC is below 5 to 10:                
     - MAXMPL control is not dynamic enough,                            
     - CPU control is not particularly important since the real concern 
       is memory, and thus CPU is a second order effect,                
     - UIC=(2,4) was suggested because it                               
       -- Prevents Thrashing                                            
       -- Is insensitive to page movement in/out of ESTORE              
          (Movement does not hurt response, but movement costs          
           CPU time, and logic to decide to lower MPL itself            
           "costs" CPU time to execute.)                                
     - When heavy ESTORE page movement occurs, RMF can show very high  
       Master address space                                            
       SRB CPU Time (MXG variable CPUSRBTM in TYPE30_V for the jobname  
       of the Master ASID's interval record). You can probably estimate 
       high master address space SRB time by looking at RMF Performance 
       Group 0 SRB time, since only Master and a few other tasks execute
       in PERFGRP=0.                                                    
     - MASTER SRB plus UNCAPTURED CPU are the main places for ESTORE    
       page movement CPU time to actually be recorded.                  
     - Storage Isolation PWSS controls the sum of Real plus ESTORE.     
     - There is no support for period switching by the SRM for tasks    
       which are non-swappable.                                         
       If the author recognizes his work, I'll acknowledge the source.  
     j. Sequencing of task startup for RESOLVE and RMF.                 
     Boole and Babbage's RESOLVE product caused RMF data to be         
     "clobbered" when RESOLVE was started before RMF. Apparently at    
    RESOLVE startup time, the MONITOR CSA function grabs the IOCDS      
    (I/O control data set, which contains all the static I/O mapping    
    information needed by RMF). Then when RMF started second, it        
    could not get at the IOCDS, and type 78 data in RMF records was     
    invalid.  As long as RMF is started first, there is no problem.     
    However, there is no way during startup to actually guarantee that  
    a task has completed startup before another task starts! (I had     
    assumed there was some standard "WAIT UNTIL DONE" command to        
    sequence task startup).  This site ultimately solved their          
    problem by inserting the startup of their large Network between     
    RMF startup and RESOLVE startup.                                    
     k. Upper limit on QSAM/BSAM buffers.                               
   Brian Bowman of SAS Institute has pointed out that the upper limit   
    of 30 buffers for BUFNO in SAM access methods is a defined          
    constant (limit IGGSAMB), but the actual number of buffers that can 
   be used is a function of the block size of the access. The real      
   storage channel program must fit in a 4K SAMB to pass to EXCPVR.     
   For example, at a blocksize of 18K, only 11 blocks can fit in the    
   single SAMB (which contains the IOBs, DEVADDR, and IDAWs).  Even     
   if you requested BUFNO=30,BLKSIZE=18000, you only get 11 x 18000     
   bytes of data per I/O operation, or 198,000 bytes per SSCH!          
   This limit only applies to SAM access methods; if the programmer     
   writes his own EXCP access, there is no such limit on the length     
   of data transferred in a single I/O operation.                       
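The effect of the SAMB limit on throughput can be sketched as follows (the helper and its samb_limit argument are hypothetical; the 11-buffer value is the observation above):

```python
def bytes_per_ssch(blksize, bufno, samb_limit):
    # samb_limit: how many buffers' channel-program entries (IOBs,
    # IDAWs) fit in the single 4K SAMB at this blocksize -- 11 at
    # BLKSIZE=18000 in the example above.  30 is the IGGSAMB hard cap.
    buffers = min(bufno, samb_limit, 30)
    return buffers * blksize

print(bytes_per_ssch(18000, 30, 11))   # 198000
```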
     l. Parallel mount impact on MXGTMNT measured duration.             
   Parallel mounting occurs when two or more units are allocated to     
   a DD. (This is rarely used now, but was very important when IMS      
   logs were written to tape; two tape drives had to be allocated to    
   that single DDNAME in the IMS job so that when one tape filled       
   IMS could immediately continue writing its log. Before we allocated  
   the second tape drive for the IMS log, IMS would WAIT all terminals  
   while the log tape rewound, and the operator got the new scratch     
    tape on as quickly as he/she could. In fact, we located both tape   
   drives well away from the tape area, immediately beside the IMS      
   Master Terminal Operator's console!)                                 
   The mount verification of the second tape mount is delayed until     
   EOV (end of volume) on the first tape volume is complete, even       
   though the second tape had been physically mounted a long time       
   ago (in the case encountered, it took 6 minutes to read each of      
   the 128 3480 cartridges in the multi-volume tape data set!)          
    The MXGTMNT Tape Mount monitor sees the second mount complete after 
    the first volume has been read and after the second device was      
    actually opened, according to Guy Caron, whose similar VERSTAND     
    mount monitor apparently allows for parallel mounting.              
     m. 3480 tape cartridge statistics.                                 
   A 3480 tape cartridge must contain at least 505 actual feet of       
   magnetic tape, although there is usually 541 actual feet of tape     
   on the cartridge.  The inter-record-gap IRG is 0.08 inches           
   (a hardware-required space between each physical block on the tape)  
   plus the equivalent of 0.009 inches of tape with overhead bytes      
    for an effective IRG of 0.089 inches.  A density of 37871 bytes     
    per inch was reported by Bill Dines and Brad Cahill at CMG.         
   Some specific numbers from actual 3480 (non-IDRC) volumes show       
   that at 32760 byte blocks about 6,576 blocks fit per volume (for     
   215MB per volume) and that at 4096 byte blocks 30,427 blocks fit     
   (for only 124MB per volume, another reason why large blocksize       
   for sequential files makes real good sense).                         
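These figures can be approximated from the density and IRG above (a rough sketch; it ignores leader, labels, and tape marks, so it runs a few percent above the observed block counts):

```python
def blocks_per_3480(blksize, tape_feet=541, density_bpi=37871, irg_in=0.089):
    # Each block occupies blksize/density inches of tape plus the
    # 0.089-inch effective inter-record gap.
    tape_inches = tape_feet * 12
    block_inches = blksize / density_bpi + irg_in
    return int(tape_inches / block_inches)

print(blocks_per_3480(32760))   # about 6800; the observed 6,576
                                # reflects leader/label overhead
```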
     n. Cost of initializing MXG Trend data base                        
     One site initialized their MXG Trend data base with the past two   
     years' SMF data. Their pair of 3090-200 machines produced a weekly 
    SMF tape on 11 cartridges (that's 2200MB of SMF data weekly).       
    Each week's BUILDPDB plus trending required between 63 and 66       
    minutes of CPU time to process the eleven tape cartridges, and      
    needed only 525 cylinders (400 MB) of work space for each run.      
  4. VM/XA Performance Note on DSPSLICE value.                          
    The default VM/XA Dispatcher Time Slice, DSPSLICE, has been 3       
    milliseconds, but IBM reportedly will issue a PTF to change the     
    value to 25 ms, because a heavy guest machine can monopolize the    
    CPU with that small a value. Furthermore, SET SHARE does not        
    work properly with the 3 ms value, because elsewhere in the         
    Dispatcher it was assumed that DSPSLICE was 25 milliseconds!        
  5. SAS System Technical Notes                                         
     a. SAS technique to locate data value location in INPUT.           
    This SAS technique is useful for finding the location in a record   
    from which a variable is INPUT, and/or to force a hex dump of the   
    data record to examine the actual hex value at that location in a   
    record. This technique is especially handy if you are deep inside   
    relocated sections (like in a SMF 102) record, and are not sure of  
    the actual physical location of the data.  Simply change the format 
    of the variable in its INPUT statement to a packed decimal format   
     (e.g., change OFFSQLS PIB4. to OFFSQLS PD4.). Unless the data value
    just accidentally happens to be a valid packed decimal value, SAS   
    will detect an invalid data value, print a note to that effect, and 
     produce the SAS vertical hex dump of the record.  Bob Olah of Dun  
     and Bradstreet reported this slick technique.                      
    The NOTE and hex dump of the record look like this on the SAS log:  
      NOTE: INVALID DATA FOR OFFSQLS  IN LINE 53  33-36.  245:56        
      RULE:     ----+----1----+----2----+----3----+----4----+----5----+ 
      53  CHAR  ....8.....SYSBPRD2.................<.@................. 
          ZONE  0600F80828EEECDDCF000000000C05000004070000010900000A0200
          NUMR  E5088B097F2822794200000000100401000C0C010010000100100001
    The location of the data value is immediately reported in the NOTE. 
    The text of the NOTE says that in the 53rd input record, OFFSQLS was
    read from bytes 33-36 of that record.  Further, the NOTE indicates  
    that the INPUT statement that was executing is located at program   
     line number 245, column 56. (You will see that line                
    printed in your SAS log if you had enabled SAS options SOURCE       
    SOURCE2 MACROGEN and MPRINT to cause source to print on your log.)  
    The hex dump (provided automatically when invalid data is detected) 
    dumps the logical record, the 53rd, providing the RULE to demark    
    columns (bytes) of the record.  If the byte contains an EBCDIC      
    printable character, it will be printed on the line marked CHAR,    
    directly above the vertical hex ZONE and NUMERIC nybbles of each    
    byte of the record.                                                 
    Eg:  - the  2nd byte is the record id, hex 65, decimal 101.         
         - the 11th byte is the EBCDIC letter S, hex value 'E2'X.       
         - bytes 33-36 are hex value '0000004C' (which is the decimal   
             value of 76 when input into OFFSQLS as a PIB4.). Because   
             '4C'x is also the EBCDIC value for the printed "less-than" 
             symbol, that symbol is above the 36th byte on CHAR line.   
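The same decoding can be verified outside SAS (an illustration only; cp037 is one common EBCDIC codepage):

```python
# Verify the example above: '0000004C'x read as a 4-byte unsigned
# integer (like the SAS PIB4. informat), and 0x4C decoded as EBCDIC.
val = int.from_bytes(bytes.fromhex("0000004C"), "big")
ch = bytes([0x4C]).decode("cp037")   # EBCDIC "less-than" symbol
print(val, ch)   # 76 <
```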
    This hex dump of an input record can be created at any time by the  
    LIST; statement.  The LIST; statement actually prints hex only if a 
    print line of a record contains any non-printable characters.  If   
    the entire print line (usually 100 bytes on hardcopy, 60 for        
    terminals) is all printable characters, the ZONE and NUMR lines of  
    the hex dump format are not printed.                                
     b. MXG Compatibility with SAS Version 6.06+.                       
    I have been working closely with SAS Institute in validating that   
    MXG and SAS 6.06 will be mutually compatible when it ships this     
    Spring.  While many MXG members executed without error under their  
    November 6.06 Beta test, I  still found nine critical errors        
    which preclude the execution of MXG under that Beta test version.   
    All of these reported errors have been corrected by SAS Institute in
    the advanced beta version at the Institute.  Unfortunately, only the
     base SAS system has been tested; none of the GRAF.... members which
     invoke SAS/GRAPH could be tested.                                  
    Very little language incompatibility has been found, but for these: 
    -  VMXGVTOC END=EOF in MERGE statement had to be moved to the end of
       the statement. END= is not documented as being positional, but   
        SAS 6.06 requires MERGE statement options to follow all dataset 
        names.                                                          
    -  CCHHR as an option on an INFILE statement to extract the pointers
       needed to read DASD VTOCs has been used in MXG since before there
       was an MXG. When specified as an OPTION (in my definition,       
       OPTIONS are switches and PARAMETERS have arguments), the CCHHR   
       returns a CCHHR value in the first 5 bytes of the record, ahead  
       of the first byte of the actual VTOC record. In the SAS Version 5
        documentation, the CCHHR option is not described (but it works  
       exactly as described above in SAS 5.16 and 5.18). Instead, SAS   
       Version 5 documents a CCHHR=XYZ "Parameter" which returns the    
       value of the CCHHR in the variable XYZ, and makes no mention of  
       inserting the CCHHR into the first 5 bytes. However, Version     
       5.08+ CCHHR=XYZ has never worked as described.  CCHHR=XYZ under  
       Version 5 works exactly as the CCHHR option described above, and 
       does put the CCHHR value in the first 5 bytes of the record!  Why
       is all this important? Because the designer of this component of 
       SAS Version 6.06 based the design on the documentation and not on
       how the component actually worked. Beta Version 6.06 testing     
       uncovered this incompatibility, and now BOTH the CCHHR option and
       the CCHHR=XYZ option will be supported in SAS Version 6.06, and  
        both will now work as documented - CCHHR inserts five          
       bytes, and CCHHR=XYZ doesn't!                                    
    -  FMXGUCBL did not normalize the value returned by the function.   
       SAS 5.18 accidentally handled and corrected the returned value,  
       but the function was not written in compliance with SAS function 
       specifications. This correction changed line 017500 from LD to   
       AD, and inserted new line before that changed line:              
                         SD  FR0,FR0                                    
       It will be necessary to re-assemble FMXGUCBL before use under SAS
       6.06, but the new module will execute correctly under either SAS 
       5.18 or SAS 6.06+                                                
     -  Member FORMATS was revised to write formats to the SASLIB       
        DDNAME,                                                         
       expecting an OS Partitioned Data Set (Load) Library under SAS    
       5.18, or to LIBRARY DDNAME, a SAS data library, if executing     
       under SAS 6.06+.  The size of the SAS 6.06+ Format Library       
       appears to require SPACE=(CYL,(10,2)), with no third operand     
       since it is NOT a PDS, whereas 5.18 was only                     
       Sep 1, 1990 addendum: Excess space required by format library was
         a problem only in 6.06 beta and was fixed in April 6.06.01. The
         SAS 6.06 format library for MXG need only be CYL,(1,1).        
    -  Under SAS 6.06+, the LIBRARY DDNAME is required to point to the  
       SAS Version 6 Format library, where the SAS 5.18 execution       
       required the SASLIB DDNAME.                                      
    SAS Version 6.06 is known to be the planned initial offering of SAS 
     Version 6 for mainframes, and there are parts of that major SAS    
     enhancement that will not be delivered by SAS Institute until the  
    follow-on SAS Version 6.08 and later enhancements, but there are so 
    very many new, functional, useful features of SAS Version 6, that I 
    plan to begin to exploit those capabilities in future MXG versions. 
    (SAS Version 6.06 can decode both $ASCII and VAXRB reversed binary  
    fields. MXG support for VAX/VMS data will be in a prerelease of MXG 
    Version 8 later this year and will require SAS 6.06.)               
    With the separation of SAS Statistics routines into the SAS/STAT    
    product, the question has been asked, does MXG require SAS/STAT?    
    The answer is NO, in that MXG does not require SAS/STAT procedures  
    in the creation of MXG-built SAS data sets. Some of these STAT      
     routines (notably PROC GLM and PROC FASTCLUS) are used in sample   
    MXG reports, and I personally would want to have this powerful suite
    of tools for data analysis at my site, but it is not a requirement  
    for MXG that the site have SAS/STAT. Similarly, SAS/GRAPH, while    
    recommended and used in examples, is not required by MXG Software.  
      c. Timestamp truncation with incorrect LENGTH value.              
    MXG variables which are SAS timestamps are all specifically made    
    LENGTH 8 in length statements (all default numeric variables are    
     stored in 4 bytes to save on the DASD storage requirement of MXG   
    data libraries).  If you should inadvertently store an eight-byte   
    timestamp value into a four-byte numeric variable, the actual       
    value of the timestamp will be truncated by as much as 4.25 minutes 
     (255 seconds).  The truncation actually stores the previous        
     multiple of 256 seconds since the SAS epoch, so the stored value   
     can be as much as 255 seconds early.                               
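Why a 256-second granularity? A SAS numeric on MVS is an IBM hexadecimal float, and a LENGTH 4 variable keeps only the sign/exponent byte plus six hex mantissa digits; for datetime values between 16**7 and 16**8 seconds after 1960 (roughly 1968 through 2096), the last kept hex digit is worth 16**2 = 256 seconds. A sketch of that assumed arithmetic:

```python
import math

def truncate_to_4_bytes(x):
    # IBM hex float: the mantissa is hex digits scaled by a power of
    # 16; a 4-byte SAS numeric keeps the 6 most significant hex digits.
    hexdigits = math.floor(math.log(x, 16)) + 1   # digits before the point
    step = 16 ** (hexdigits - 6)                  # value of last kept digit
    return x - x % step

dt = 946684800                        # a datetime value, seconds since 1960
print(dt - truncate_to_4_bytes(dt))   # loss is under 256 seconds
```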
 IV. SRM CPU parameters for MPL Control In LPAR                         
Sincere thanks to IBM Corporation for permission to reprint the         
 following article published originally as the Washington Systems       
 Center Flash 8923 in June 1989.                                        
                     Richard M. Armstrong                               
                     Senior Marketing Support Representative            
                     Washington Systems Center                          
The  SRM is designed to maintain the flow of work through an MVS system.
SRM parameters and default values are described in the Initialization   
and  Tuning Guide.  Recently  some customers have experienced a         
reduction in workflow for an MVS production partition in an event driven
LPAR  environment.    The  SRM function  of interest is that of holding 
back additional work (more MPL) when there is enough work to keep the   
CPU near 100% busy.    Additional  work  may cause other problems (such 
as holding a lock) and we are not going to process more than 100% CPU   
anyway.  The object is to process the right 100%.  Specifically,  the   
SRM reacts to zero wait time rather than 100% utilization, so we need to
think in terms of wait time.  There are two kinds of waiting: (1) the   
old fashioned kind due to issuing a WAIT (this is  now  called  a       
voluntary WAIT)  and  (2)  an involuntary wait caused by the LPAR       
dispatcher.  The LPAR dispatcher must allocate dispatch time to logical 
processors to  achieve  the desired  distribution  of activity.   The   
only wait that the SRM sees is the voluntary WAIT. In an event driven   
environment  there can be  less  and  less voluntary  WAIT.  For several
customers the (voluntary) WAIT time went to zero.  This means the SRM   
WAIT time is zero.  The SRM CPU utilization parameters were set to      
reduce MPL under this condition.   The lower  MPL  caused  a workflow   
reduction. The resulting workflow was less than the amount specified    
with  the  LPAR partition WEIGHTS. We need to review the SRM parameter  
values for this new environment.                                        
This Flash discusses some relations of workload quantities with         
parameters in the PARMLIB member IEAOPT. Customers may set these        
parameters to values of their choice. No customer-written programs or   
programming interfaces are involved.                                    
Conclusion:  For MVS/SP2.2 and later releases an  appropriate  default  
for RCCCPUP and RCCCPUT is (100+E),120, where E is the number of        
engines.  These CPU MPL parameters are not a substitute for setting MIN 
TMPLs (Target MPLs) and other Domain management controls properly.  They
do let you get the most out of the system, particularly in an LPAR      
environment. For an MVS/SP2.1.7 or 2.1.3 environment use RCCCPUP and    
RCCCPUT values of 101,101, and depend on the paging rate parameters for 
restricting the MPL. This case is different because the maximum values  
of these parameters are different for the earlier releases.  The        
following description highlights the reasoning behind the conclusion.   
Discussion:   The discussion primarily applies to MVS/SP2.2 and later   
releases. The MVS/SP2.1.3 and 2.1.7 solution is a best can do subset.   
There are several variables that have a vote in changing the system MPL.
Any vote to lower will cause a lowering. A vote to raise must be        
unanimous. The SRM CPU utilization parameters in IEAOPT for controlling 
system MPL are RCCCPUP and RCCCPUT. The SRM defaults for RCCCPUP and    
RCCCPUT are (95,98) and (98,100.9), respectively. When a utilization of 
100% (zero WAIT) is found, the SRM adds the in-ready queue value to the 
utilization value, up to a maximum of 128.  (The maximum in MVS/SP2.1.3 
and 2.1.7 is 101.) There are other variables for system MPL control that
relate to other resources. The combined effect of all these MPL control 
variables (together with their MIN/MAX constants in IEAOPT) determines  
if an adjustment will be made.                                          
Unless there are other bottlenecks, getting the most work through the   
system means running the CPU at 100%.  RCCCPUT is used independently    
while RCCCPUP is used in conjunction with other variables. Therefore    
RCCCPUT is potentially the more sensitive of the two. Each one          
represents the view of the CPU and they can be treated equally.         
Fundamentally, the MPL is too high if and only if the page fault rate is
too high. (Note that the page fault rate and demand page rate are       
essentially the same statistically.)  While storage capacity is the main
criterion for MPL adjustment, there is no benefit to adding MPL after the
CPU is at 100% utilization.  Of course, make sure the Domain controls   
cause the right ASIDs to be in storage. There can be disadvantages with 
having more work in storage than can be dispatched by the CPU -- such as
holding resources.  If your system has expanded storage (ES), the       
balance between central storage (CS) and ES is important as well as the 
amount of total storage. The general guideline here is less than 500    
pages per second per engine total traffic between CS and ES. Also if    
your system has ES, a "reasonable" amount of paging to DASD is much much
smaller than what was "reasonable" before ES. With ES, the life history 
of a page fault is CS to ES to CS to DASD.                              
While CPU utilization considerations apply to the MPL management of any 
MVS system, the situation is particularly important in an event driven  
(WAIT Complete = NO) LPAR environment.  In this case a production       
partition with a lot of work may constantly exceed its LPAR dispatch    
time and be pre-empted by the LPAR dispatcher -- rather than continue   
and issue a WAIT. If the logical processors for the partition never     
issue a WAIT, WAIT time goes to zero.  (The involuntary wait does not   
count.) Even though the WAIT time is zero, we do not want to reduce MPL.
Reducing MPL causes the production partition to do less work. In some   
cases the LPAR WEIGHTS will be designed to run several partitions with  
each partition at zero WAIT (i.e., 100% busy during the dispatch time). 
In the LPAR environment we need to pay special attention to what happens
as WAIT goes to zero.                                                   
As a general point of view, allow the system to reach 100% utilization  
with interactive and batch workloads plus allow for one background      
grinder per engine. This means we want to increase MPL if the SRM CPU   
utilization is equal to or less than 101 or (100+E) where E is the      
number of engines.  Then TMPL (Target MPL) would not be increased beyond
an SRM CPU utilization of 101 for a uniprocessor or (100 + E) where E is
the number of engines. Queueing theory shows that 101 is not enough. For
example, in queueing theory for exponential distributions, a single     
server at 90% utilization has a queue of 9.  After reaching 101 (or     
100+E), the utilization will periodically dip below this threshold      
allowing MPL increases until there is enough queue to support 100%      
utilization. The TMPL should not be decreased because of SRM CPU        
utilization values in the immediate range above 101 or this process of  
supporting 100% actual utilization will be defeated.                    
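The queueing claim above ("a single server at 90% utilization has a queue of 9") is the standard M/M/1 number-in-system result, sketched here for checking:

```python
def mm1_in_system(rho):
    # M/M/1 (exponential arrivals and service): expected number in
    # system, queued plus in service, is L = rho / (1 - rho).
    return rho / (1 - rho)

print(round(mm1_in_system(0.90), 6))   # 9.0
```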
With the objective of running at 100% CPU (again, assuming no other     
bottlenecks), the upper limit can be an SRM CPU utilization of 128.     
However, there should be some CPU mechanism to lower TMPLs. Ideally, and
assuming there is enough work to saturate the CPU, we do not want the   
IN-READY queue to be any larger than it needs to be to support 100% CPU 
utilization. And we want to accommodate the interactive work in the     
process. This means a high upper limit but less than 128. This means a  
number like 120 for a heavy TSO system. The upper limit can be smaller  
in a system dominated by a few ASIDs such as some CICS systems and where
there is little TSO.                                                    
Again for MVS/SP2.1.3 and 2.1.7 we do not have these choices and fall   
back to the values 101,101.                                             
  Figure:  CPU WAIT in an LPAR Environment.                             
  DEDICATED:                                                            
  this partition, each processor:                                       
            ------Busy----------- ---WAIT----                           
              TCB+SRB     Uncapt                                        
            ---- Dispatch time --------------                           
            ------ Interval -----------------                           
  SHARED,  WAIT Complete = YES:                                         
  this partition, 'average processor':                                  
            ------Busy----------- ---WAIT-- ------non-dispatch------    
              TCB+SRB     Uncapt                                        
            ------- Dispatch time ---------                             
            --------- Interval -------------------------------------    
  SHARED,  WAIT Complete = NO:                                          
  this partition, 'average processor':                                  
            ------Busy----------- ----non-dispatch------                
              TCB+SRB     Uncapt                                        
            --- Dispatch time ---   <== dynamic                         
            ------- Interval ---------------------------                
        * From initial WAIT, includes any contiguous non-dispatch time. 
          Additional non-dispatch time occurs when this partition loses 
          control during Busy.                                          
------------End of reprint of IBM WSC FLASH 8923------------------------
   It had been my intention to reproduce the above figure, overlaying   
   Dick's tabulation with MXG variable names from TYPE70 and TYPE70PR,  
   for each bucket, but I don't have the time to verify it and still    
   meet the printer's deadline for this Newsletter. Stay tuned.