****************NEWSLETTER FORTY****************************************
    MXG NEWSLETTER NUMBER FORTY dated Feb 14, 2002.                     
Technical Newsletter for Users of MXG :  Merrill's Expanded Guide to CPE
                         TABLE OF CONTENTS                          Page
I.   MXG Software Version 19.19.                                        
II.   MXG Technical Notes                                               
III.  MVS Technical Notes                                               
IV.   DB2 Technical Notes.                                              
V.    IMS Technical Notes.                                              
VI.   SAS Technical Notes.                                              
VII.  CICS Technical Notes.                                             
VIII. Windows NT Technical Notes.                                       
IX.   Incompatibilities and Installation of MXG 19.19.                  
         See member CHANGES and member INSTALL.                         
X.    Online Documentation of MXG Software.                             
         See member DOCUMENT.                                           
XI. Changes Log                                                         
     Alphabetical list of important changes                             
     Highlights of Changes  - See Member CHANGES.                       
I.   MXG Software Version 19.19 is now available.                       
 1. Major enhancements added in MXG 19.19:                              
    See CHANGES.                                                        
II.  MXG Technical Notes                                                
 1. Benchmarks of Data Transfer and MXG Processing on PCs.              
    Early results were presented to the MXG User Group Meeting at CMG   
    2001 in Anaheim, December 5, 2001.                                  
 a. Measurement of sustainable data transfer rates between platforms    
    and between disk drives, with the theoretical capacity of two LANS  
    is given in this table:                                             
               Table of Sustainable Data Transfer Rates                 
         KB=KiloBytes  MB=MegaBytes  GB=GigaBytes=1,073,741,824 bytes   
       Connection/path      KB/sec   MB/sec   MB/min   MB/hr   GB/day   
      Dial In 56kbit            6     .006      .4        20*      1    
      T1 1.44mbit             144     .144       8       500*     12    
      MVS to 10 mbit LAN      470*    .50       30      1800      42    
      10 mbit LAN @100%      1024     1.0       60      3600      85    
      750MHz 100mbit LAN              2.4*     144      8640     202    
      500MHz C: to D: IDE             6.5*     390     23400     550    
      100 mbit LAN @100%    10240    10.0      600     36000     850    
      750MHz C: to D: IDE            12.0*     720     43200    1000    
      850MHz C: to D: SCSI           24.0*    1440     86400    2000    
      Those measures are for raw data transfer bytes.                   
      Note that a T1 connection transfers 500 MegaBytes per hour, while 
      a 100 mbit LAN transfers over 8500 MegaBytes per hour (using about
      25% of the theoretical capacity).  The much higher disk-to-disk   
      transfer rates on the PCs show that data transfer time, and not   
      the MXG run time, will be the constraint on daily processing time.
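    The unit arithmetic behind the table can be sketched with a small
    hypothetical helper (not part of MXG), using GB = 1,073,741,824 bytes
    as the table header defines:

```python
# Hypothetical helper (not part of MXG): expand a sustained rate in
# MB/sec into the MB/min, MB/hr, and GB/day columns of the table above.

def expand_rate(mb_per_sec):
    mb_per_min = mb_per_sec * 60
    mb_per_hr = mb_per_min * 60
    gb_per_day = mb_per_hr * 24 / 1024        # 1024 MB per GB
    return mb_per_min, mb_per_hr, gb_per_day

# The 750MHz box on a 100mbit LAN, measured at 2.4 MB/sec:
mb_min, mb_hr, gb_day = expand_rate(2.4)
print(mb_min, mb_hr, int(gb_day))   # 144.0 8640.0 202
```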
 b. Download SMF data, Execute BUILDPDB on three different PCs:         
    The SMF file contained only SMF 6, 26, and 30 records:              
      842 MegaBytes, Compressed to 72 MegaBytes,  11 to 1.              
    On a 2064/1C5 CPU (SU_SEC=10540) BUILDPDB took 18.5 minutes         
    elapsed (and 6 minutes CPU).                                        
    To download SMF data, you must first convert it from RECFM=VBS to   
    RECFM=U (copy with IEBGENER), and then (optionally) zip (compress)  
    the file on MVS (to reduce the download time), and then unzip on the
    PC and process the uncompressed SMF file.  The transfer was over a  
    T1 line and the timings in minutes for the processing are these:    
    Phase          500 MHz 2 SCSI     850 MHz 2 SCSI     1.3GHz 1 IDE   
                   No-ZIP  Zipped     No-Zip  Zipped     No-Zip Zipped  
    RECFM=U           1       1          1       1          1      1    
    ZIP on MVS                9                  9                 9    
    ftp download     89       7         89       7         89      7    
    unzip on PC               1                  1                 2    
    BUILDPDB         19      19         10      10         38     38    
    total minutes:  109      37        100      28        128     57    
    With 11:1 compression to reduce the transfer time, the 850 MHz box  
    with two fast SCSIs would take 28 minutes to process (including the 
    download), as compared with 18.5 minutes for BUILDPDB on MVS.       
    Note that the speed of the PC and its disk drives has significant   
    impact on the total run time.                                       
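    Summing the phase times in the table (minutes, 850 MHz two-SCSI
    column) shows where the zip step pays for itself; a trivial sketch:

```python
# Sketch: total the pipeline phases from the table above (minutes,
# 850 MHz two-SCSI column); phase names and times are from the table.

no_zip = {"RECFM=U copy": 1, "ftp download": 89, "BUILDPDB": 10}
zipped = {"RECFM=U copy": 1, "ZIP on MVS": 9, "ftp download": 7,
          "unzip on PC": 1, "BUILDPDB": 10}

print(sum(no_zip.values()), "vs", sum(zipped.values()), "minutes")
# 100 vs 28 minutes: 11:1 compression cuts the T1 download from 89 to 7
```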
    Zipping on MVS:  "Info Zip MVS", SHAREWARE:                         
    Get program/documentation at 
      Input limit: One full Volume uncompressed; input must be a member 
                   of a PDS (RECFM=U conversion outputs to PDS member). 
    Unzipping on PC: "Zip Magic for Windows", $40 USD purchase:         
    Get program/documentation at (the address has changed several        
    times, as the product has changed hands) as of Feb, 2003:            
    $39.99, downloadable.                                                
    When installed on PC, can read the zip file directly with MXG, so   
      there is no need to unzip (and hence you only need 72MB of disk   
      space for the downloaded zipped SMF file, instead of an additional
      842 MB for the expanded SMF file!).  This is really slick!        
      Mar 30, 2004 Note:  ZipMagic is NOT supported for Windows/XP,     
        although it has worked without error on many files.  If it does 
        fail with XP, you have no recourse with the vendor, who has not 
        responded as to their plans (if any?) for XP support.  Their    
        alternative StuffIt product doesn't support direct read of zips. 
    MXG Member DOCZIP (in MXG 19.08+) contains the JCL and specifics for
    the installation of both "InfoZip MVS" and "Zip Magic for Windows". 
 c. Detail examination of impact of Compression options with PC SAS.    
    The TYPE30 program was used to compare compression costs, since it  
    reads the SMF file and writes out the SAS datasets in a single SAS  
    data step.  The input dataset could be zipped or unzipped, and the  
    output SAS datasets could be written compressed or uncompressed.    
    MACHINE: 1.3 GHZ ONE IDE DISK:                                      
    Input  Output   Elapsed mm:ss  User CPU  System CPU  Total CPU      
     RAW    NO        13:22          2:22       :33        2:55         
     RAW    YES       11:27          4:30      1:45        6:15         
     ZIP    NO        11:00          2:19       :54        3:13         
     ZIP    YES       10:23          4:30      2:04        6:34         
     Comment:  Fastest ET was with both zipped input and compressed out.
               Compressing Output saved 2 minutes.                      
               Compressing Input saved 2.5 minutes.                     
               Compressing both saved 3.0 minutes, or 22% savings.      
               Here, the elapsed cost to do I/O is greater than the     
               elapsed cost of compression, and CPU was available.      
               Best Time:  10:23.                                       
   MACHINE: 500 MHZ TWO SCSI DISKS:                                     
    Input  Output   Elapsed mm:ss  User CPU  System CPU  Total CPU      
     RAW    NO         5:12          4:57       :11        5:08         
     RAW    YES        5:35          5:15       :13        5:28         
     ZIP    NO         5:52          4:49       :55        5:44         
     ZIP    YES        6:09          5:12       :52        6:04         
     Comment:  Fastest ET was with unzipped input and uncompressed out! 
               Run time is more than halved due to faster I/O.          
               Here, the processor is CPU bound, and adding compression 
               to a CPU-bound process increases the run time.           
               Savings from slowest to fastest was 57 seconds, 15%.      
               Best Time:   5:12.                                       
   MACHINE: 850 MHZ SAME TWO SCSI DISKS:                                
    Input  Output   Elapsed mm:ss  User CPU  System CPU  Total CPU      
     RAW    NO         3:00          2:47       :08        2:55         
     RAW    YES        3:12          3:00       :09        3:09         
     ZIP    NO         3:27          2:48       :34        3:22         
     ZIP    YES        3:36          2:59       :34        3:33         
     Comment: Faster CPU with same disks; run time dropped from 5 min   
              to 3 minutes, but processor is still CPU bound, so fastest
              run time is still unzipped and uncompressed.              
              Savings from slowest to fastest was 36 seconds, 16%.      
              Best Time:   3:00.                                        
   MACHINE: 850 MHZ ONE DISK:                                            
    Input  Output   Elapsed mm:ss  User CPU  System CPU  Total CPU      
     RAW    NO         3:48          2:49       :09        2:58         
     RAW    YES        3:38          3:00       :09        3:09         
     ZIP    NO         3:59          2:49       :34        3:23         
     ZIP    YES        3:45          3:00       :34        3:34         
     Comment: One disk is slower than two disks; contention increased   
              run time from 3 minutes to 3:38, but CPU was the same     
              (and these numbers show the consistency of CPU measures). 
              Savings from slowest to fastest was 21 seconds, 9%.        
              Best Time:   3:38, with output compressed (probably due   
              to Write cost being much higher than read cost).          
   The overall conclusion is that zipping and compression are always good
   when you have CPU available; even when compression does increase the 
   run time, the increase is small, and more than justified by the large
   saving in disk space and transfer time.                              
   But faster CPUs and faster Disk drives have a bigger impact on run   
   time than does compression!                                          
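   The savings quoted above are simple elapsed-time arithmetic; for
   example, for the 1.3 GHz one-IDE box:

```python
# Sketch: the elapsed-time savings quoted for the 1.3 GHz one-IDE box,
# slowest (raw input, uncompressed output) versus fastest (zipped
# input, compressed output), times taken from the table above.

def seconds(mm, ss):
    return mm * 60 + ss

slowest = seconds(13, 22)    # RAW input, uncompressed output
fastest = seconds(10, 23)    # ZIP input, compressed output
saved = slowest - fastest
print(saved, "seconds,", round(100 * saved / slowest), "percent")
# 179 seconds, 22 percent
```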
 2. Benchmarks comparing SMF and Web Log processing on Linux & Windows: 
    SMF  (842MB unzipped type 6, 26, and 30s.                           
          119MB compressed output, 54MB Sorted)                         
                              Elapsed   Elapsed  Data  Copy  Sort       
                              mm:ss       sec    sec   sec   sec        
  Dell Inspiron 1.3GHz Win2K  12:44       764    738    18    08        
  IBM ThinkPad  1.3GHz Linux   6:26       386    347    15    24        
  IBM ThinkPad  1.3GHz Win2K   3:06       186    152    18    16        
  Desktop Clone 866MHz WinXP   3:28       208    183     7    16        
      Both IBM and Dell are 1.3 GHz CPUs; 2:1 difference might be due to 
       disk drives, but I think is actually due to better floating point
       execution on IBM; see below.                                     
     Linux to Win2K, 2:1 difference shows Win2K NTFS outperforms the    
       nfs that comes with Red Hat Linux 7.2.                           
     In all cases, the data step is much longer than the sort time; this
     has always been the case with MXG processing.                      
    WebLog (93MB unzipped input, 672MB uncompressed output dataset,     
            95MB compressed, 484343 observations 1456 obs length,       
            SORTSIZE=256M, only one variable in BY List runs:)          
                              Elapsed   Elapsed  Data  Copy  Sort       
                              mm:ss       sec    sec   sec   sec        
  Dell Inspiron 1.3GHz Win2K   4:08       244     55    13   180        
  IBM ThinkPad  1.3GHz Linux   8:25       505     93    28   384        
  IBM ThinkPad  1.3GHz Win2K   3:43       223     55    15   163        
  Desktop Clone 866MHz WinXP   4:07       247     73    12   162        
     In all cases with this character data, sort times are much longer  
     than the data step time to read the input and create the dataset!  
     I had never seen this behavior, because SMF data is highly numeric.
     This caused me to examine what caused the longer sort times, and it
     was the length of the BY list that was first discovered to have a  
     major impact on the sort times.                                    
                                   850   866       850   866            
                                   MHz   MHz       MHz   MHz            
                                   DATA  STEP      SORT  STEP           
                                   secs  secs      secs  secs           
       All Variables in BY list:    75    73        612   914           
       Only one var in By List:     75    73        176   162           
     And those were for the best runtimes with compression enabled; the 
     uncompressed 850MHz sort time with all variables was 831 seconds,  
     (and copying compressed took 11 seconds, copying uncompressed 204).
     But fortunately, since we really need to be able to sort on more   
     than one variable, my webmaster found that SYNCSORT for WINDOWS had
     been released, and it appears to be a solution if you are going to 
     sort large character datasets under SAS on Windows:                
       All Variables, SYNCSORT:     75    73         90    --           
     and clearly, SYNCSORT under Windows outperforms the internal SAS   
     sort provided in SAS Version 8 under ASCII, with this kind of      
     highly compressible character data.                                
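      The BY-list effect can be shown in miniature with hypothetical data
      (not the actual WebLog): sorting the same records by a
      concatenation of all the long character variables versus by one
      short leading variable.

```python
# Sketch with hypothetical data (not the actual WebLog): sort cost
# grows with the size of the BY-list key that must be built and
# compared for every observation.
import random

random.seed(1)
# 50,000 "observations": one short key plus nine long char variables
rows = [[f"{random.randrange(10**8):08d}"] + ["filler" * 25] * 9
        for _ in range(50_000)]

by_all = sorted(rows, key=lambda r: "".join(r))  # all variables in BY
by_one = sorted(rows, key=lambda r: r[0])        # one var in BY list

# The trailing variables are identical here, so both orders agree,
# but the all-variables sort builds ~1,400-byte keys per observation.
print(by_all == by_one)   # True
```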
     On unix, SYNCSORT is available, and SAS does interface with some   
     unix systems, but Linux is not yet one of them; SAS Technical Note 
     TS-6830 documents the systems that support SYNCSORT as host sort.  
     For SAS Version 9, SAS expects to support SYNCSORT under Linux,    
     HP/UX, Solaris, and Tru64.  And when SYNCSORT supports the 64 bits 
     with AIX 5.1,  SAS intends to support that as well in Version 9.   
     SAS Version 9's internal sort will be a portable multi-threaded    
     parallel sort that is much faster on SMP hardware than the current 
     internal SAS sort, so improvement is planned there.                
     On MVS, SAS sites have either SYNCSORT or DFSORT installed, so I've
     never had reason to measure its internal SAS SORT performance.     
     1. Note that Dell and IBM are much closer with all character data  
        whereas IBM was much faster with SMF numeric data.              
III. MVS Technical Notes.                                               
20.   Al Sherkow's posting to MXG-L answers many questions about how to 
      and how not to set up WLM policy:                                 
     On Friday, January 11 Joan Kontra asked about manipulating CPU     
     resource among LPARs. One option Joan was considering was to       
     manage their workload via changes in WLM. Part of my response was:  
     "I do not recommend changing WLM policy. Develop a policy and       
     stick with it. It may need to evolve as your workloads change over  
     time, but switching policies is (in my opinion) not a strategic     
     approach."                                                          
     Backchannel I received some questions about my email and setting    
     up WLM policies:                                                   
     1.  How can you have different levels of service as a function of   
          time of day?                                                   
     2.  Can they use the WLM to change service between her daytime and 
          nighttime LPARs?                                              
     3.  What's wrong with having the operators change the policy at    
          those times of day?  Isn't that also just a console command?  
     While WLM was always supposed to manage your workload within your   
     sysplex, in my opinion WLM is only now, with z900 & z/OS, beginning 
     to do that. Most of WLM's control was within an image/LPAR. The    
     exception being the transaction "spraying" capabilities that could 
     choose which of a group of images to send a unit of work to.       
     Part of the problem WLM faces is that once a transaction/batch     
     job/query begins in an image it stays on that image until it is    
     complete. It doesn't matter if higher priority work arrives, and   
     it doesn't matter if high priority work is already running on the  
     system. In general, work does not move. Now consider another LPAR  
     in the sysplex that has available resources. It would be nice if   
     work could move from busy LPARs to less busy LPARs.                
     With z900 and IRD if the LPARs are on the same CEC then IRD will   
     move the CPU resources and/or I/O resources from an LPAR not using 
     them to an LPAR that can use them. Remember z900, z/OS and IRD     
     introduced the new term "LPAR Cluster". All the LPARs on the same  
     CEC in the same sysplex are an LPAR Cluster.                       
     Today's service policies take advantage of how WLM has been        
     working.  Consider CICS and Batch. In many policies CICS has a     
     higher importance than batch. Generally production is in one LPAR  
     and development/test is in another. Still everything is fine,      
     because CICS is higher than batch. Now z900, z/OS and IRD come     
     along and these partitions run in a single CEC. The production     
     CICS and production batch are running fine, meeting their          
     objectives and goals. Development CICS starts missing its          
     objectives. The WLM policy has always been operating at a sysplex  
     level, but IRD provides new capabilities that truly extend the     
     policy across the images within the LPAR Clusters. With IRD, the   
     policy is now implemented at the Sysplex level .... so that        
     development CICS has the same importance as Production CICS and is 
     higher than all batch (including production). IRD takes resources  
     away from the production LPAR to help the development LPAR's CICS. 
     Voila! Finally WLM is working throughout the sysplex.               
     I recommend that sites need to determine the requirements and      
     service levels of all their workloads at all times. Also remember  
     that service is service, regardless of timeframe. I think you need 
     to perhaps consider your "timeframes" when defining the policy and 
     use some other means to get work into the correct timeframe        
     buckets. In this case "The workload on 'System A' consisting of    
     one LPAR is always to be favored during the day". I wonder if she  
     means the whole LPAR or something it is running? Their other       
     requirement "The workload on 'System B' consisting of 2 LPARs is   
     to be favored at night sometimes."                                 
     Perhaps the 'System A' work during the day, and the 'System B'     
     sometimes favored could be at the same importance. One critical    
     question that has to be answered: "What would they do if these     
     workloads were running at the same time?"                          
     On Changing Policies:                                              
     Even before WLM I was generally opposed to changing IPS at         
     different times of the day. In much of performance and managing a  
     data center there are alternative techniques to solve a problem    
     that will lead to the same conclusion. I think that analyzing data 
     from a site that changes SRM parameters (IPS, ICS, OPT or WLM) is  
     more difficult. There can be the added problem of an offshift IPS, 
     and prime shift IPS and dealing with last night's outage. While    
     catching up do you run in 'offshift' or the 'prime' configuration? 
     For this reason I prefer a comprehensive 'all time frame' view of  
     an installation's service.                                         
     This is also greatly affected by the available capacity. Rarely is 
     an upgrade in capacity the size that the site actually needs. The  
     engines of today's machines are just too big. I'm not sure anyone  
     runs out of MIPS in 250MIPS chunks. Today if I'm upgrading a z900  
     the steps are about 250 MIPS each. What works well after an        
     upgrade may not work well just before the next upgrade. This is    
     the time that parameters are "tweaked" to solve specific problems, 
     often without regard to overall service. In IPS terms this is when 
     people tried to use time slicing or played with the time slicing   
     pattern. (Should we favor 40% of the pattern or 50% of the          
     pattern?) After the upgrade few installations go back and          
     "untweak". So over time the "tweaks" build up. I've often found    
     this when called in to work on performance problems. Peter Enrico  
     has done papers at Share, z/OS Expo and CMG about periodically     
     reviewing your WLM Service Policy to see if it is still doing what 
     you want. This addresses the same problem.                         
19.   IBM APAR PQ55355 for Websphere Application Server V4.0.1 for z/OS 
      and OS/390 lists many problems in type 120 SMF records that are   
      in error and corrected.   PQ55355 is associated with SERVICE LEVEL
      W401009 of WebSphere Application Server V4.0.1.                   
18.   BMC MAINVIEW CMF originally from Boole and Babbage type 73 values 
      for PCHANBY and PNCHANBY are incorrect in their record and are now
      corrected by BMC PTF BPM7996 and discussed in their APAR BAM7806. 
17.   APAR PO56039 for MQSeries V5.2 for OS/390 corrects values in many  
      fields; WQPUTMAX and WQPUTMIN are documented as incorrect.         
16.   SMF Writer design cannot handle normal bursts of SMF data, for    
      example when a step with many dynamic allocations ends.  These    
      bursts overrun the SMF buffers, causing loss of SMF data.         
      A specific, daily example:                                        
      ENDEAVOR, a programming development system used by hundreds of     
      programmers allocates many DDs dynamically for each use.  In one  
      day, from 9am to 3pm, there were 2,548,964 DD segments that were  
      written in 1,735 type 30 subtype 4 SMF records:                   
      15:06:48.78  First of 1735 Step Termination 30-4 Records          
      15:06:50.54  Last of 1735 Step Termination 30-4 Records           
                   54 MegaBytes in 1.76 Elapsed Seconds ==> 30 MB/sec   
      Then 0.70 Elapsed Seconds from end of step SMF to start of job end
      15:06:51.24  First Job Record SMFTIME                             
      15:06:52.52  Last of 1735 Job Records SMFTIME                     
                   54 MegaBytes in 1.28 Elapsed Seconds ==> 42 MB/sec   
         Total:  108 MegaBytes in 3.74 Elapsed Seconds ==> 30 MB/sec.   
      This is far above the sustainable write-to-VSAM-file rate of the  
      SMF writer with current DASD:                                     
        One 3GB volume is filled every 20 minutes   ==> 2.5 MB/sec max  
      And IEE986E messages show five SMF buffer expansions in 15:06:52.  
      The SMF Buffer is not designed to absorb 100 MB bursts of data;   
      not only due to its small maximum size (which itself requires a   
      PTF to allow the 128MB maximum), but the intrinsic serial design  
      of the SMF writer causes potential data loss with every buffer    
      expansion (eight 8MB expansions from 8MB to 128MB), because the   
      writer cannot accept new records during buffer expansions.        
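      The burst-rate arithmetic above is easy to check (using 3,000 MB
      for the 3GB volume):

```python
# Sketch: verify the SMF burst rates quoted above.
step_rate = 54 / 1.76            # 1735 step-end records: ~30 MB/sec
job_rate = 54 / 1.28             # 1735 job-end records:  ~42 MB/sec
volume_rate = 3000 / (20 * 60)   # one 3GB (3,000 MB) volume per 20 min

# The bursts arrive at roughly 12 times the sustainable write rate:
print(int(step_rate), int(job_rate), volume_rate)   # 30 42 2.5
```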
      The case of the "multi-DD" step SMF burst occurs randomly, but    
      more frequent (and often larger) SMF bursts occur at interval     
      end when there are many CICS regions recording statistics data,   
      and synchronized interval recording is critical for analysis      
      across multiple address spaces, increasing the need for redesign. 
      Loss of SMF data records is both an accounting exposure, and a    
      security exposure, since any SMF records can be lost during the   
      buffer expansions to handle these bursts.  I believe that the     
      integrity of the SMF file is a prerequisite for OS/390 being      
      designated as a B2/C2 DOD Secure operating system.                
      The SMF Writer clearly must be redesigned to handle these bursts! 
      Many of the companies in IBM's Gold Consultant program not only   
      depend on SMF data being written, but recommend z/OS systems      
      because of the SMF integrity in recording security accesses, and  
      for performance and capacity measurement, all of which are        
      threatened by SMF data loss.  I would like to propose a telephone 
      conference call to discuss how members of that group can provide  
      resources and assistance to IBM to redesign the SMF writer to     
      handle the current and expected SMF data without data loss.       
15.  CMF type 70 does not contain the ICF flag, until one or more of    
     BPM7293, BPM7307, BPM7308, and/or BPM7309 are installed.           
     Dec 16, 2001.                                                      
15.  APAR OW52226 for JES3 Only, type 30 fields SMF30PTM and SMF30TPR   
     (MXG variables TAPNMNTS and TAPSMNTS) do NOT count JES3 dynamically
     allocated tape mounts.  The APAR is "CLOSED FIN", Fixed-in-Next    
     (i.e., the next time IBM has to update that code for some other     
     reason).                                                            
14.  APAR OW51353 corrects defective VVDS after OW50528.  The impact was
     defective type 60 SMF records, and IBM still refuses to provide the
     VVDS DSECTS, claiming they are "object code only".                 
13.  APAR OW45447 corrects an IEC027I 737-24 ABEND on CONFIG-002 DD; the
     IBM error occurs when non-SMS and SMS managed datasets were in the 
     //CONFIG DD concatenation.  Reversing the order so that the non-SMS
     data set was first avoided the ABEND, but MXG expects its CONFIGV8 
     member to be last, because it is the last CONFIG option that is    
     used by SAS.  Previously, CONFIGV6 for V6 had to be last so that it
     would override the SAS default for MEMSIZE, but since SAS V8 cannot
     have a MEMSIZE parameter in //CONFIG members, the need for MXG's   
     CONFIGV8 member to be last may not be absolute.  I'm checking.     
12.  APARs OW50579 and OW50837 are marked "VARIOUS RMF PROBLEMS" and    
     both fix a number of problems with RMF I and RMF III reports and   
     ABENDs in both RMF Monitor I and Monitor III sessions.             
11.  SMF 30 records with ALOCTIME earlier than INITTIME (SMF30AST/30SIT)
     can occur even though it is physically impossible; APAR OW50134    
     acknowledges the problem, but that apar is "FIN", Fixed in Next    
     (i.e., next 18 months in a new release).                           
10.  SMF 42 subtypes 15 thru 19 were not written after APAR OW47863 was 
     installed; new APAR OW51179 corrects the error.                    
 9.  SMF VBS records with LRECL greater than 32760 have been found      
     beginning with OS/390 R2.7, and IBM has acknowledged their error;  
     SMF records can have a maximum LRECL of 32760 (max data LENGTH of  
     32756).  APAR OW51139 for SMF Type 8 records, OW51146 for SMF Type 
     90 Subtype 32 records.  These records cannot be read with SAS      
     V6-V8, so you must use IFASMFDP to copy/delete if they are found.  
 8.  With OS/390 R2.10 and DFSMS/rmm, problems with SAS data sets on    
     tape required IBM APAR OW49577 for RMM.                            
 7.  A user noted that IBM's IXGRPT1 utility report from TYPE88 had the 
     incorrect values for SMF88ETT and SMF88EO, but MXG was correct.    
 6.  APAR OW49920 reports overlays caused when program objects in PDSEs 
     are LLA-managed and the object gets staged to VLF.                 
 5.  APAR OW49773 reports missing RMF Cache data from Shark D/T 2105     
     after PTF UW77972 is installed.                                    
 4.  Information APAR II12079 discusses lots of FTP errors and issues,  
     and reasons why SMF 118 TCP records might not be written for FTP   
     Client and/or Server:  there are two parameters in PROFILE.TCPIP   
     dataset, SMFCONFIG and SMFPARMS, documented in IP Configuration    
     Guide, Chapter 3.  SMFCONFIG FTPCLIENT is shown as an example to   
     create the client record.                                          
 3.  APAR OW50084 corrects non-writing of SMF 79 subtype 1,2, or 5 under
     Goal Mode if DOMAIN parameter is specified for ASD/ARD/ASRM, errors
     if CPs are varied ON-line and was OFF-line at the end of interval, 
     and many errors in RMF reports.                                    
 2.  APAR OW41696 corrects blank SMF30EXN/OMVSEXNP program name field in
     USS/OMVS type 30 SMF records.                                      
 1.  APAR OW38842 corrects Accounting Fields in SMF Type 30 from USS    
      (unix system services, a/k/a OMVS) tasks.  Accounting information  
     was taken from the calling address space instead of the address    
     space associated with the passed ASSB.                             
IV.  DB2 Technical Notes.                                                
V.   IMS Technical Notes.                                                
VI.  SAS Technical Notes.                                               
11.  MOVE TO 41.  82BA62 Fixed Windows error:                           
     FATAL ERROR: WRCODE=80000805, MODULE='wtdelet': Unexpected return  
     from vtswtch().                                                    
10. A note on case sensitivity of SAS under unix.                       
    Under unix operating systems, a file named x.y is different from a   
    file named X.y, but when SAS executes under unix, its sensitivity   
    to case depends on the syntax.  This note may not be perfect:       
     - DDNAMEs and DATASET names are case insensitive in SAS statements 
       but the created unix file names will be lower cased.  SAS will   
       read x.sas7bdat with SET X; or with set x; in your unix program. 
     - Stored variable names are in the case of their first instance in 
       the source program, so if you type the variable name in lower    
       case it will be stored as a lower case variable name.  Thereafter
        all references to that variable name are case insensitive.      
     - External files, like MXG Source Directory members, must be lower 
       cased and end with .sas, so that the %INCLUDE SOURCLIB(MEMBER);  
       syntax can be transparently used across all platforms.           
     -  MXG intends to always create only upper case names, so as to    
        avoid the mess created by those unix idiots that thought case   
        sensitivity was a good idea.  Life is complicated enough just to
        spell things right; getting the right case is asking too much.  
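     A hypothetical resolver (not SAS internals) shows the lower-casing
     behavior described above, assuming a case-sensitive unix file
     system:

```python
# Sketch (hypothetical resolver, not SAS internals): SAS statements
# are case insensitive, but the unix file SAS creates is lower cased,
# so SET X; and set x; map to the same on-disk name.
import os
import tempfile

def resolve_dataset(libdir, name):
    # Lower-case the dataset name before building the unix file name
    return os.path.join(libdir, name.lower() + ".sas7bdat")

with tempfile.TemporaryDirectory() as lib:
    open(os.path.join(lib, "x.sas7bdat"), "w").close()
    assert os.path.exists(resolve_dataset(lib, "X"))   # SET X; works
    assert os.path.exists(resolve_dataset(lib, "x"))   # set x; works
    print("resolved:", os.path.basename(resolve_dataset(lib, "X")))
```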
 9. An interesting, but probably not generally important note, mostly   
    about square brackets.  The beautiful hex dump created by the LIST  
    statement (or by input errors like invalid data), interprets some   
    hex characters differently in the CHAR line of the dump than in the 
    PUT= of the input of that hex character.  The LIST option checks    
    each character with the C isprint function to test its printability,
    but isprint's definition of 'printable' is platform dependent; on   
    MVS, isprint('[') or (']') is considered unprintable and returns a  
    zero, so the CHAR line prints a dot when it sees 'AD'x/'BD'x hex.   
    But when you PUT a character variable, there is no printability test
    and so it will be displayed in batch job's logs based on a table in 
    ????????, while under TSO, the character you see in a PUT statement 
    is controlled by the translate table in the emulator that owns your 
    session.                                                            
    If square brackets do not display on your ISPF terminal, you need to
    have your VTAM SysProg set PROGRAM SYMBOLS ON in the VTAM Terminal  
    Definition for your terminal.                                       
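    A tiny DATA step (my illustration, not from the newsletter) shows   
    the difference: PUT sends the byte through the translate table,     
    while the $HEX2. format is unambiguous on any terminal:             

```sas
/* The 'AD'x byte that LIST prints as a dot on MVS (isprint calls   */
/* it unprintable) is sent as-is by PUT; what you then see depends  */
/* on the emulator's translate table.                               */
data _null_;
  c = 'AD'x ;            /* left square bracket in many EBCDIC pages */
  put c= ;               /* display depends on emulator/table        */
  put c= $hex2. ;        /* shows c=AD regardless of the terminal    */
run;
```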
 8. MXG Software and SAS Versions that you can use.                     
  a. Independent of which MXG Version you are using:                    
    We STRONGLY RECOMMEND that you ONLY execute MXG Software under V8:  
      SAS V8.2 TS2M0, plus, for MVS-OS/390-z/OS, with the 82BX03 HOTFIX 
                            Bundle (that includes critical 82BA57 fix). 
                            Original 82BX01, changed to 82BX03 Oct 2002.
    but, if you still must stay in the dark ages, there are no reported 
    MXG errors using SAS V6.09e at TS-475 (with SHARK, check SAS notes).
    There were errors in releases of V8 prior to SAS V8.2 that have been
    fixed in the V8.2+82BX01 HOTFIX; you'll be wasting your time and my 
    time trying to use any earlier releases of V8 SAS.                  
  b. MXG will exploit new V8 enhancements when MXG is executed under V8,
     but you do not need a new version of MXG when you install a new SAS
     release for your daily runs.  However, for some new data sources,  
     execution under V8 is required: Websphere EE SMF type 120 records  
     contain DBCS Unicode characters that were not supported until V8,  
     and MXG TYPEWWW WebLog support needs 32000-byte-length-character   
      variables (which it fills mostly with blanks, which disappear with
      compression).                                                     
      Note it is only V8-greater-than-200-byte-character-variables that I
     use in MXG; there are no long-variable-names, and since I only see 
     complication and confusion with long-variable-names, I do not now  
     expect to have MXG variable names longer than eight characters.    
        However, if alias names for variables ever exist in SAS, I might
        want to change my mind. I could use the variable's LABEL as an  
        alias name, so you could access it either way, and I would have 
        a third alias with the original IBM field name as variable name!
 7. An increase in CPU time between V6 and V8 in a unique MXG job was   
    found by SAS to be in its keeping track of the count and location of
    missing value calculations, and in this special case, the NOSTMTID  
    SAS option significantly reduced the CPU time.  The site had used   
    IMACEXCL to keep only 7 variables in CICSTRAN, keeping no datetime  
    variables, which caused latent Y2K protection code to be executed   
    because those datetime variables were now missing!  The added CPU   
    time was not due to the missing value calculations themselves, but  
    rather the calls to keep count; with OPTIONS NOSTMTID that counting 
    is bypassed, no counts are kept, and significant CPU reduction in   
    this unique case.  However, tests with NOSTMTID enabled showed a    
    small 0.1%-0.5% increase in CPU time, so it does not seem worth     
    enabling by default.   And SAS will revise the count-calls in SAS   
    Version 9, now that they have diagnosed the problem.  SN-004513.    
     10Apr02 update:  New UTILEXCL logic to create IMACEXCL completely  
     eliminates any need for NOSTMTID, as previously expected; but here 
     are the CPU times of what happened:                                
        V18.18   98.77   Original V609 Run, before problem.             
        V18.18  229.59   Run with V8.2 that uncovered the problem.      
        V18.18  105.31   Run with V8.2 with DSOPTIONS=NOSTMTID          
        V19.19   82.29   Now, using IMACEXCL built by UTILEXCL          
 6. SAS usage note SN-004513 is an Outstanding Problem that discusses   
    possible increased CPU time with Version 8 when missing values are  
    passed to functions; the counting and keeping track of which line   
    number had missing value calculations, and how many, is apparently  
    expensive in V8; the note suggests  OPTIONS DSOPTIONS=NOSTMTID;     
    which we are testing.  The specific problem was a heavily tailored  
    CICSTRAN dataset in which IMACEXCL was used to only input 6 fields, 
    so all of the other variables were missing, which is not normal.    
     In this case, the option was very significant; the 217 CPU seconds 
     without the option dropped to 97 seconds with it enabled, in a run 
     that created 16 million observations in CICSTRAN.                  
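     The circumvention discussed in items 7 and 6 can be sketched as    
     (the %INCLUDEd member here is just an illustration of where the    
     option goes, ahead of the DATA step that builds CICSTRAN):         

```sas
/* DSOPTIONS=NOSTMTID (MVS SAS V8) bypasses the per-statement       */
/* counting of missing-value operations described above.            */
OPTIONS DSOPTIONS=NOSTMTID;
%INCLUDE SOURCLIB(TYPE110);   /* the CICSTRAN-building code         */
```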
 5. Should you use //SORTWKnn DDs or Dynamic Allocation for SAS Sorts?  
    MXG has always defaulted to having actual //SORTWK01 -> //SORTWK03  
    DD statements in its JCL examples.  Here's why:                     
    a. There is a limit of 6 Sort Work Areas when they are dynamically  
       allocated, so if you need more than 6 work units, pre-allocating 
       //SORTWKnn DDs is the only way to get more space and/or units.   
       Chuck Hopf uses 64 sort work DDs in his monster ASUMUOW job.     
    b. If a VIEW or a Sequential-format SAS dataset is involved in the  
       SORT, SAS can't determine the size to pass to the sort, so use of
       pre-allocation can ensure enough work space is allocated.        
    c. Never use RLSE on //SORTWKnn DD statements, unless only one SORT 
       is to occur.  In BUILDPDB, there are multiple sorts, some large  
       some small, and some large again, and with RLSE that third sort  
       can fail with a B37 when you RLSEd the space that somebody else  
       has got now!                                                     
     d. The Sort Work DDs must start at //SORTWK01 and be named in      
        consecutive order, i.e., SORTWK01, SORTWK02, ... SORTWKnn.      
    e. The CONTIG parameter is no longer required by sort packages, and 
       it can delay or even prevent allocation if that large chunk of   
       work space is not currently available on DASD.  Sort benchmarks  
       have not shown a significant difference with/without CONTIG now. 
    f. But if you want to know the actual allocation algorithm for SAS  
       sort work space allocation:                                      
       -The Host Sort interface is chosen (either by SORTPGM=HOST, or   
        because the size of the file to be sorted is greater than the   
        value of SORTCUTP when SORTPGM=BEST), then                      
      -SAS checks TIOT to see if SORTWK01 is allocated, uses if found.  
      -Otherwise, check to see if SORTWKDD value (see below) has been   
       previously allocated, and if the space is sufficient for the     
        current sort to be performed.                                   
      -If that space is not sufficient then DEALLOC if present.         
      -Check if DYNALLOC option is set.  If it is, SAS will not do the  
       allocate but instead lets the sort package dynamically allocate  
       its sort work area.                                              
       -If NODYNALLOC is set, then SAS will itself allocate the sort    
        work:                                                           
        -SAS DD names are based on the SORTWKDD= option's value, which  
         defaults to SASS, so they are of the form SASSWK01 to SASSWKnn 
          where nn is the value of the SORTWKNO= option (and it is hard 
          limited to six).                                              
        -The size of sort work space allocated is computed as (number   
         of OBS)*(RECLEN)*(2).                                          
         -If the size cannot be determined (VIEW or SEQ Format) then the
          SORTSIZE= option is used to determine the sort work space to  
          allocate.                                                     
         -Allocation is based on SORTUNIT (TRKS/CYLS/BLKS), SORTDEV     
         (3390/3380/SYSDA...).  SAS still adds the CONTIG parameter.    
        -If there is a DAIRFAILure message it is written to the SASLOG  
         indicating that a dynamic allocation failed allocating that    
         specific DD, but SAS will continue to attempt the SORT.        
      SAS Technical Support provided assistance in writing item f.      
      Dec 12, 2001.                                                     
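       The recommendations in items a through e can be sketched in JCL  
       (the space amounts are illustrative only):                       

```jcl
//* Pre-allocated sort work: DDs named consecutively from SORTWK01,
//* with no RLSE and no CONTIG, per the notes above.
//SORTWK01 DD UNIT=SYSDA,SPACE=(CYL,(200,50))
//SORTWK02 DD UNIT=SYSDA,SPACE=(CYL,(200,50))
//SORTWK03 DD UNIT=SYSDA,SPACE=(CYL,(200,50))
```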
 4. Errors when reading a V6.09-built SAS dataset with SAS V8           
    are probably fixed in Hot Fix 82BA57, below, but are circumvented   
    with                                                                
       LIBNAME SPIN    V609;                                            
    that tells SAS V8 to use the V609 engine to read that DDNAME.       
     This error first occurred after maintenance for IBM SHARK DASD; one
     reporting site was backlevel at SAS V8.0, and they also had an     
       ERROR FORMAT $MGTRAN NOT FOUND                                   
    when V8.0 tried to use a //LIBRARY format library that had been     
    created by V6.09.  Creating a new format library with V8.0 was the  
    best solution, but a LIBNAME LIBRARY V609; statement circumvented   
    the error.  But, once again, you really need to be at SAS V8.2 with 
    all Hot Fixes!  First: Nov 30, 2001.  Last: Jan 10, 2002.           
     Update July 12, 2002:  Both of the above error messages            
     are now documented in SAS NOTE SN6447; these messages can occur if 
    DISP=MOD is coded for an existing SAS Data Library, if you have     
    installed any of these hot fixes:   81BA50, 82BA57, 82BX01          
    Their temporary circumvention is to use DISP=NEW, but I will update 
    this note after I discuss further - SAS needs to support DISP=MOD.  
 3. OS/390 Hot FIX 82BA57 for V8.2 fixes many I/O & multi-volume errors 
     (and Hot FIX 81BA50 is for V8.1). SAS note SN-05642 discusses it   
     and lists these SAS notes that are associated with this defect:    
          895 2674 2881 4229 4916 5004.                                 
     Problems included unreadable VBS records after an error, multiple  
     mounts,                                                            
    BY variable position errors, S0C1 in SORT step, B37-04 on SYSOUT DD 
    with a PROC PRINTTO, U315/U317/S0C4, etc.                           
    Nov 20, 2001.                                                       
 2. A minor difference in values between ASCII and EBCDIC SAS numeric   
    variables that are stored in less than eight bytes is documented.   
    There is no error, just differences in the way numeric values are   
    stored on different hardware platforms and/or operating systems!    
    MXG 19.11 now uses LENGTH DEFAULT=&MXGLEN and &MXGBYLN (See Change  
    19.272) to set the stored length of most numeric variables, where   
    MXGLEN is 5 on EBCDIC and 6 on ASCII, which both saves disk space,  
    and keeps full resolution of fields input from 4 bytes.  Only if the
    exact value of a large-value variable is needed is LENGTH 8 used;   
    datetimestamp and byte variables are length 8.                      
    Because SAS uses floating point internal storage, ASCII systems     
    require a minimum of 3 bytes to store a one-byte numeric, while     
    EBCDIC systems only require 2 bytes.                                
    Previous EBCDIC truncation measurements when 'FFFFFFFF'x was INPUT  
    as PIB4 and stored in 4 bytes had shown a maximum loss of 255, but  
     these new ASCII measurements under WINDOWS show a maximum loss due 
     to truncation of 2047.                                             
      Hex Value   INPUT PIB4. ASCII     INPUT PIB4. EBCDIC    LENGTH    
       FFFFFFFFx     4294967295            4294967295           8       
       FFFFFFFFx     4294967295            4294967295           7       
       FFFFFFFFx     4294967295            4294967295           6       
       FFFFFFFFx     4294967288            4294967295           5       
       Truncate loss:         7                     0           5       
       FFFFFFFFx     4294965248            4294967040           4       
       Truncate loss:      2047                   255           4       
    This shows that LENGTH=5 for MVS and LENGTH=6 for ASCII platforms   
    will store all fields input as PIB4 without any truncation.         
    Originally, Nov 14, 2001, I wrote:                                  
        There's really nothing to fix here, but if you ever need to     
        change the length of an MXG numeric variable, you can override  
       the MXG code by adding a statement:   LENGTH variable 8;  in your
       EXdddddd "exit member" in your tailoring library.  SAS uses the  
       last instance of a LENGTH statement, and the Dataset Exit        
       member's code is always seen after the MXG LENGTH statements!    
     Nov 29, I realized I can easily externalize the MXG default value  
     using DEFAULT=&MXGLEN in place of DEFAULT=4 in all MXG source      
    and preserve the existing value by using %LET MXGLEN=4; in the MXG  
    initialization, and then, if you need full resolution, you can have 
    it by using  %LET MXGLEN=n; as your first //SYSIN statement in the  
    MXG jobs that create the datasets.  Implemented in Change 19.272.   
    If you use a  PUT variable= ; statement in a SAS program that is    
    creating and then storing that variable in 4-bytes, the output of   
    the PUT will be the full 8-byte value, but a PROC PRINT of the      
    dataset will have the (truncated) 4-byte value; SAS uses 8-byte     
    virtual storage internally for all numerics; it is only when the    
    variable's value is OUTPUT to a dataset that its stored length is   
    set by the LENGTH= statement.                                       
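     The two override styles described above can be sketched as (MYVAR  
     and the length chosen are hypothetical examples):                  

```sas
/* 1) Globally, as the first //SYSIN statement of the MXG job:      */
%LET MXGLEN=8;         /* store all defaulted numerics in 8 bytes   */
/* 2) For one variable, in its EXdddddd dataset exit member:        */
LENGTH MYVAR 8;        /* the last LENGTH statement seen wins, and  */
                       /* the exit code follows MXG's own LENGTHs   */
```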
 1. SN-002861 is a SAS V8.0 and V8.1 only correction; the error is fixed
    in SAS V8.2.  Tape Format libraries on disk which encountered B37   
    out of space conditions require zap z8002861 or Z8012861, and these 
    zaps also correct errors in SN-004787 with S837-08 and multiple tape
    volume datasets.  0C4's may also occur, but again, only if you are  
    still running 8.0 or 8.1.                                           
VII. CICS Technical Notes.                                              
 1.  APAR AQ61844, about two years old, for CICS 4.1, reduced CPU time  
     in LE and COBOL environment's initialization and termination with  
     modules ceevgtsi, igzeini, and igezetrm all active.                
VIII. Windows NT Technical Notes.                                       
IX.   Incompatibilities and Installation of MXG 19.19.                  
 1. Incompatibilities introduced in MXG 19.19 (since MXG 18.18):        
  a- Landmark data for DB2, IMS, and MVS now have datetime variables    
     converted to local time zone by MXG; previously the datetimes      
     were incorrectly left in GMT.  This could affect reports if you    
     use TYPETMDB, TYPETIMS, or TYPETMV2.  The Landmark CICS data was   
     not corrected.  See documentation in Change 19.288.                
 2. Installation and re-installation procedures are described in detail 
    in member INSTALL (which also lists common Error/Warning messages a 
    new user might encounter), and sample JCL is in member JCLINSTL.    
X.    Online Documentation of MXG Software.                             
    MXG Documentation is now described in member DOCUMENT.              
XI.   Changes Log                                                       
--------------------------Changes Log---------------------------------  
 You MUST read each Change description to determine if a Change will    
 impact your site. All changes have been made in this MXG Library.      
 Member CHANGES always identifies the actual version and release of     
 MXG Software that is contained in that library.                        
 The CHANGES selection on our homepage at            
 is always the most current information on MXG Software status,         
 and is frequently updated.                                             
 Important changes are also posted to the MXG-L ListServer, which is    
 also described by a selection on the homepage.  Please subscribe.      
 The actual code implementation of some changes in MXG SOURCLIB may be  
 different than described in the change text (which might have printed  
 only the critical part of the correction that need be made by users).  
 Scan each source member named in any impacting change for any comments 
 at the beginning of the member for additional documentation, since the 
 documentation of new datasets, variables, validation status, and notes,
 are often found in comments in the source members.                     
Alphabetical list of important changes after MXG 18.18 now in MXG 19.07:
  Member   Change    Description                                        
  See Member CHANGES or CHANGESS in your MXG Source Library, or         
  on the homepage                                          
Inverse chronological list of all Changes:                              
Changes 19.288 thru 19.001 are contained in member CHANGES.