COPYRIGHT (C) 1984-2021 MERRILL CONSULTANTS DALLAS TEXAS USA
MXG NEWSLETTER FIFTY
*********************NEWSLETTER FIFTY***********************************
MXG NEWSLETTER NUMBER FIFTY, September 5, 2007.
Technical Newsletter for Users of MXG : Merrill's Expanded Guide to CPE
TABLE OF CONTENTS
I. MXG Software Version.
II. MXG Technical Notes
III. MVS, aka z/OS, Technical Notes
IV. DB2 Technical Notes.
V. IMS Technical Notes.
VI. SAS Technical Notes.
VI.A. WPS Technical Notes.
VII. CICS Technical Notes.
VIII. Windows NT Technical Notes.
IX. z/VM Technical Notes.
X. Incompatibilities and Installation of MXG.
See member CHANGES and member INSTALL.
XI. Online Documentation of MXG Software.
See member DOCUMENT.
XII. Changes Log
Alphabetical list of important changes
Highlights of Changes - See Member CHANGES.
COPYRIGHT (C) 1984,2007 MERRILL CONSULTANTS DALLAS TEXAS USA
I. The 2007 Annual Version MXG 24.24 was dated February 5, 2007.
All sites were mailed a letter with the ftp download instructions.
The availability announcement was posted to both MXG-L and ITSV-L.
You can always request the current version using the form at
http://www.mxg.com/ship_current_version.
1. The current version is MXG 25.08, dated Sep 5, 2007.
See CHANGES member of MXG Source, or http://www.mxg.com/changes.
II. MXG Technical Notes
3. Changes to Daylight Saving Time in 2007 have no MXG impact.
MXG Software code is impervious to Daylight Saving Time; we give
you the data that is in your RMF, SMF, etc. data, as you choose to
create it.
If you choose to set your clocks back on an active system, you
corrupt many datetime values (e.g., jobs end before they start,
negative elapsed times, interval end times before their begin
times, etc.), and you will create multiple records with the same
STARTIME in all RMF datasets, as well as CICINTRV, SMFINTRV, and
any other interval datasets, but that's the result of your choice
to set back clocks on an active system, and not of MXG's doing.
If the records also contain an actual GMT Offset field, those
pairs of same-STARTIME observations can be identified, but your
hourly reports for the hour of time setback will have
pseudo-duplicate data.
MXG Change 24.224 (in MXG 24.09) for the ASUM70PR summarization
example now protects the TYPE70PR RMF data so that those
pseudo-duplicates are summarized as separate observations, with
different GMTOFFTM values, but with the same STARTIME values,
i.e., you still have duplicate STARTIME values in your data.
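As an illustration, a minimal sketch (assuming the PDB.TYPE70 dataset
and its SYSTEM, STARTIME, and GMTOFFTM variables) that lists the
pseudo-duplicate STARTIME pairs created by a setback:
   PROC SORT DATA=PDB.TYPE70 OUT=T70;
     BY SYSTEM STARTIME GMTOFFTM;
   DATA DUPSTART;
     SET T70;
     BY SYSTEM STARTIME GMTOFFTM;
     IF FIRST.STARTIME AND LAST.STARTIME THEN DELETE; /*keep dups only*/
   PROC PRINT DATA=DUPSTART;
     VAR SYSTEM STARTIME GMTOFFTM;
   RUN;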
Note that if your installation uses the TIMEBILD architecture (to
tell MXG to change all datetime variables from SYSTEMs that are
in different time zones to a common time zone), as long as you
correctly update the mapping table in advance of the time change,
MXG will correctly handle each system's time change, but, again,
if you choose to set clocks back on an active system, all of the
preceding caveats apply.
It is precisely when you have local time as a result of the GMT
Offset that you are exposed to errors if you change the time
(i.e., change the GMT Offset) backwards on an active system. If
you have a GMT Offset of zero, and you keep all your clocks on
GMT, there would be no change in your timestamps.
This information was posted to our MXG-L ListServer in Jan, 2007.
2. Very Long Stored-Length MXG Variables + COMPRESS=YES on z/OS.
a. This new text was added in November, 2009:
Be careful if you change from COMPRESS=YES to COMPRESS=NO:
While SAS on z/OS does measure an appreciable increase in both
CPU time and Elapsed Time with COMPRESS=YES, changing the MXG default
to COMPRESS=NO to save that CPU time could massively increase disk
space requirements.
And only SAS on z/OS shows the CPU time increase.  Benchmarks of
SAS on Intel showed COMPRESS=YES used LESS CPU and Run Time, because
the CPU cost of the I/O eliminated by writing less data was greater
than the CPU cost of compressing that data.
Many MXG datasets now contain character variables with long stored
lengths, $128 or $256 or even $32000 bytes, the maximum text length
of new thingies like Open Systems Path Names, SQL Text, Websphere
names, DB2 Unicode text, CICS user identities, etc., but since SAS
stores character data in fixed length fields, and since these new
data fields typically have only 8 characters and a lot of blanks,
the COMPRESS=YES option is really REQUIRED to store these highly
compressible text data fields.
And, a CPU increase may be secondary to running out of disk space.
b. This was the original 2007 Newsletter Article, with the list of
long variables updated in November, 2009:
MXG's COMPRESS=YES default can be changed to COMPRESS=NO (in your
CONFIGV9 for z/OS or AUTOEXEC for ASCII); one z/OS site reported
they exchanged a 25% reduction in the CPU time, for a 10% increase
in EXCP counts, and a whopping 70% increase in DASD space, but as
their job ran when its LPAR was capped, the run time was reduced.
COMPRESS offers resource and time tradeoffs, but "IT ALL DEPENDS".
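For example, a minimal sketch of the global setting: in the z/OS
CONFIGV9 member (config-file option syntax) the line is
   COMPRESS=NO
while in the ASCII AUTOEXEC it is an OPTIONS statement:
   OPTIONS COMPRESS=NO;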
However, there are a few MXG datasets that really SHOULD be compressed,
because they define "Very Long Stored-Length" variables with
LENGTHs of 1024 or 32000 bytes, required because those text fields
(SQL text, open system path names, etc.) can be that long!
So, if you change BUILDPDB to COMPRESS=NO, and you have obs in the
below datasets created from SMF, you could see an even larger
increase in DASD space required because of those long variables!
Three groups of datasets with "Very Long Stored-Length" variables,
1024 or more bytes, created by MXG 25.07+, could require increased
disk space if COMPRESS=YES is disabled, with no other impact:
1. Datasets automatically built by standard BUILDPDB:
   Datasets   Variables   Description                       Length
   TYPESYMT   SYSLTEXT    Syslog Mount-Related Event text     1024
   TYPESYSL   SYSLTEXT    Syslog Message Text                 1024
   ASUMTAPE   SYSLTEXT    Syslog Message Text                 1024
2. Datasets built from SMF records, could impact BUILDPDB if
user-added. Many MXG sites have added TYPE80A processing of
ID=80 RACF records, which creates a lot of TYPE80nn datasets,
and TOKDCE is kept in them all. Some MXG sites have added the
creation of specific T102Snnn DB2 TRACE (ID=102, IFCID=nnn),
especially those that contain the long-length SQL text fields.
   Datasets  Variables  Description             IFCIDs            Length
   TYPE80nn  TOKDCE     DCE                                         1024
   TYPE83LD  LDAPLDAP   LDAP*RELOCATE*007                           1024
   TYPE83LD  LDAPNAME   LDAP*RELOCATE*008                           1024
   TYPE9201  SMF92PPN   PATHNAME*OF DIRECTORY                       1024
   TYPE9215  SMF92APN   FILE*PATH*NAME                              1024
   T102Snnn  QW0nnnTX   SQL Text                140,141,142,145    32000
             QWnnnnnT   SQL Text                124                32000
             QW0nnnCN   Cursor Name             59,61,64,65,66     32000
             QW0nnnCT   Command Value           90                 32000
             QW0nnnDS   MSG or FMH-5 DATA       180,194            32000
             QW0nnnHR   RECOVERY LOG NAME       207                32000
             QW0nnnON   Object Name             62                 32000
             QW0nnnMR   LAST MESSAGE RCVD       206,208,236        32000
             QW0nnnMS   LAST MESSAGE SENT       206,208,236        32000
             QW0nnnPA   LOCATION NAMES          203                32000
             QW0nnnP1   AMS COMMAND TEXT        92,97              32000
             QW0nnnMS   Message Text            4,5                32000
             QW0nnnSP   SQL STMNT               350                32000
             QW0nnnST   SQL STMNT               63,168             32000
             QW0nnnTH   INFORMATION FOR THREAD  204                32000
   T1028005  QBMCTEXT   BMC DB2 SQL Text                            4096
   SHDWnn    SM06SQSR   SQL Source String                          32000
3. Datasets NOT created from SMF records, with long variables:
   TYPETPFC  TPFCUSER   User Data Field                 1024
   RACFnnnn  HFSNAME    Path Name of File/Directory     1024
   TYPEWWW   COOKVALU   Cookie Value                   32000
             URIQUERY   Request Argument               32000
             SITENAME   S-Sitename                     32000
You can specify COMPRESS=NO globally in your CONFIGV9/AUTOEXEC, but
then compress individual datasets, by redefining the "Keep" macro
to contain MACRO _Kdddddd COMPRESS=YES % for each "dddddd" dataset
to be compressed. All of your _Kdddddd definitions can be stored
in your IMACKEEP tailoring member, or "instream" with:
//SYSIN DD *
%LET MACKEEP=
%BQUOTE(
MACRO _KTY8001 COMPRESS=YES %
MACRO _KTY8002 COMPRESS=YES %
...
MACRO _KTY80nn COMPRESS=YES %
);
%INCLUDE SOURCLIB(....);
If you DO compress, there is no space taken for long LENGTHs, but
if you DON'T compress, and you don't need the full default length,
you can reduce the variable's length and save disk space. You can
change the length of any character variable created from SMF by
adding a LENGTH statement in your IMACFILE member, or by passing
that LENGTH statement with the &MACFILE macro variable:
//SYSIN DD *
%LET MACFILE= %BQUOTE( LENGTH TOKDCE $8; ) ;
%INCLUDE SOURCLIB(TYPS80A);
The IMACFILE exit places that LENGTH statement so it is seen
before the normal MXG LENGTH statement, so TOKDCE would be $8 vs
$1024 in all of the TYPE80nn RACF datasets.
You can NOT change the LENGTH of a NUMERIC variable in the
IMACFILE exit nor with &MACFILE, because SAS uses the LAST
instance of a LENGTH statement for numeric variables (and the
FIRST for character variables!). But you can add a LENGTH
statement in any EXdddddd exit member for that product to change
a numeric variable's length, since that code is seen AFTER the
MXG LENGTH statement.
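For example, a minimal sketch (SOMENUM and the exit member are
hypothetical placeholders; use the EXdddddd member and the numeric
variable for your own dataset):
   /* inside your EXdddddd exit member; this LENGTH statement is   */
   /* seen AFTER the MXG LENGTH statement, and SAS honors the LAST */
   /* length seen for a numeric variable:                          */
   LENGTH SOMENUM 4;   /* hypothetical numeric variable, reduced   */
                       /* from the default 8 bytes to 4 bytes      */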
1. Comparison of BUILDPDB on z/OS with Version 23 and 24, SAS 9.1.3:
   Input SMF file: 9070 MegaBytes
                           CPU    Elapsed
   Job or Step            mm:ss    mm:ss
   V23 SMF read Step      11:22    15:16
   V23 BUILDPDB Job       13:19    19:36
   V24 SMF read Step      11:13    15:05
   V24 BUILDPDB Job       13:08    19:19
This is a heavily enhanced BUILDPDB that processes many additional
SMF records and creates many more datasets than the MXG default.
The big data step required a REGION=130M for the buffers for those
added datasets. The CPU times are on a SU_SEC=10,000 machine.
III. MVS, a/k/a z/OS, Technical Notes.
20. APAR PK44207 corrects errors introduced in RMF maintenance APARs
that caused RMF messages "WSAM IS UNABLE TO CAPTURE CPU AND PAGING".
19. APAR OA21590 corrects SMF ID=85 OAM errors.
18. APAR OA20955 documents new facilities when SMF data is recorded to
System Logger log streams rather than VSAM files, and an exit that
allows programs to allocate a log stream specifying SUBSYS=LOGR and
read it via normal QSAM I/O instead of via Logger IXGBRWSE service.
17. APAR OA20761 for z/OS 1.9 has corrected the sample IEFACTRT exit
that was initially shipped with z/OS 1.9; the erroneous member
did not handle time stamps correctly as noted in the APAR text.
16. APAR OA18118 added SMF14KET to ID=15 with Tape Encryption elapsed
duration, but also corrected the encryption encoding. Change 25.047
in MXG 25.03 added SMF14KET support.
15. APAR OA21982 corrects invalid ID=0 SMF records that are supposed to
be ID=40 records. Dynamic Allocation is the culprit and these ID=0
records contain all zeros except for the length in the RDW. Since
MXG has always recommended that ID=40 records be turned off except
for ad hoc enablement for specific problems, and since ID=40 SMF
records have NEVER been used in BUILDPDB, the exposure here is that
your reporting might think you missed an IPL because the record is
output in the TYPE0 (a/k/a PDB.IPLS) dataset. Pending a PTF, IBM
also recommends turning off recording of ID=40s in your SMFPRMxx.
14. Specifying the RMF Monitor I CACHE Option in only one SYSTEM's RMF
parameters eliminates redundant records on other systems and has
always been recommended. There are other RMF Monitor I options,
ESS(RANK) and ESS(LINK) and FCD along with CACHE that should all be
in one, and the same, SYSTEM, per these IBM suggestions:
   ESS(LINK) - Link performance statistics are gathered.
   ESS(RANK) - Extent pool statistics and rank statistics are gathered.
               As ESS data gathering involves cache activity
               measurement, it is recommended to specify both
               options together.
               If you specify ESS together with NOCACHE, cache
               data is gathered implicitly without writing SMF
               74.5 records!
               In a SYSPLEX, options CACHE and ESS can be specified
               on any system sharing the measured devices.
               Therefore specify the ESS and CACHE options together
               on one selected system only to avoid duplicate data
               gathering.
   FCD       - FICON director activities are measured.
               FICON director activity is gathered by port address.
               There is no indication which system in the sysplex
               requested the I/O.  Therefore, the data can be
               gathered on any system sharing the FICON directors.
               To avoid having duplicate data, you should set the
               FCD option on one system only.
   CACHE     - Create SMF 74.5 TYPE74CA Cache Statistics.
               IMPORTANT: CACHE is the DEFAULT option IBM sets in
               RMF Monitor I, so you MUST then ADD a statement with
               NOCACHE in RMF parms for all but that one SYSTEM.
13. APAR OA21982 reports SMF records with ID=0 and all other fields zero
are actually SMF ID=40 records written from IEFDB4F9. Disabling the
ID=40 records in SMFPRMxx member of SYS1.PARMLIB will eliminate the
bad records, and MXG does not use ID=40 records, as their EXCPs have
long ago been added to the ID=30 records.
12. APAR OA19058 corrected z/OS 1.7 problems with JES Initiators not
executing (stuck in STARTING status for 40 minutes).
11. ORACLE Version 10 SMF records contain invalid counts in the
PHYREADS LOGWRITS and LOGREADS variables, but it appears that
Oracle is not interested in correcting their errors. While
they have had BUG 5708760 open for seven months with no fix,
that BUG references an earlier BUG 5702425 that appears to be
marked "NOT FEASIBLE TO FIX". If Oracle finally corrects
these counts in their Accounting data, this technical note
will be revised to cite their correction.
March 2008 Update: Oracle 10.2.0.3 appears to correct I/O counts.
Oracle fixes 6725982, 6138068, and 6646891 may all be required for
both the I/O count fix and for Oracle on z/OS 1.9.
10. APAR PK42977 for Websphere documents how to enable SMF 120 record
types in the administrative console.
9. APAR OA21256 reports incorrect WLM calculation of CPU service
units and zIIP service times; the problem is caused by a long
running enclave, and impacts R723CCPU, R723CSUP, R723CRCP,
MXG variables CPUUNITS, R723CSUP, and TRANS in TYPE72GO.
8. APAR OA19440 corrects a bad value in the LCCAWTIM field, which is
used to calculate the wait time then used to calculate LCPUPDTM
(SMF30PDT). The occasionally-bad LCCAWTIM value caused extremely
large values (10**13 or 10**14 seconds!) in LCPUPDTM and very small
(10 millisec) values in SMF70ONT. The error was seen in SMF 70 records
from BMC's CMF product, but the cause was the IBM error fixed by the
PTF for the APAR, currently only available for z/OS 1.8.
7. APAR OA18452 changes RMPTTOM to 300 and reduces uncaptured CPU time.
This APAR was actually announced on MXG-L on March 23 because MXG'ers
were instrumental in identifying the problem for IBM to correct.
The posting noted that the uncaptured CPU time now is primarily a
function of the SRM Interval (influenced by RMPTTOM), the number of
Address Spaces in the LPAR, and the number of LPARs in the CEC.
6. APAR OA20028 corrects ABENDs in many I/O routines that were
introduced by OA10379 (MIDAW Support).
5. APAR PK32855 (planned in Fix Pack 6.0.2.18 for Websphere Application
Server V6.0.1) will remove CPU cost of SMF120CRE field when SMF 120
records are not enabled.
4. APAR OA20477 corrects a CSA Leak caused by Websphere SMF 120
records with z/OS 1.8 toleration PTFs installed.
3. APAR OA17704 reports incorrect values for "Above 2GB" in RMF VSAM
LRU Overview Buffer Counts by Pool, because SMF data was wrong.
See also InfoAPAR II1419.
2. Under z/OS 1.7 and 1.8 a sequential library on DASD that is backed
up via HBACKDS may potentially lose updates or be truncated, as
documented in SAS Usage Note SN-019315. Under z/OS V1R7 and V1R8,
with every supported version of SAS (6.09 through 9.1.3), updates to
an existing sequential access bound library on DASD can potentially
be lost if the library data set is backed up via HBACKDS, migrated,
and then subsequently recalled. This situation occurs because SAS
processes the library via the open mode INOUT, and the DS1IND08 bit
is not turned on after the library has been updated.
This problem involves an HSM feature known as fast subsequent
migration. In short, if you migrate a data set to tape (ML2),
recall it, and then migrate it back, HSM normally creates a new tape
copy. Fast subsequent migration, if enabled, allows HSM to work a
little smarter: If the data set has not been modified between the
recall and the subsequent migration request, the original tape copy
can be considered valid again (it is marked as stale once the recall
has been done). However, this can result in the data set being
back-leveled if the data set has really been changed.
This problem does not exist for versions of z/OS prior to V1R7, nor
does it exist for direct access bound libraries.
To correct this problem, you need IBM PTFs UA32296 and UA32297.
1. APAR OA19943 is REQUIRED for all users of MXGTMNT/ASMTAPEE, the MXG
Tape Mount Monitor, to be safe.  That APAR impacts any task that
acts as an EMCS Console, which is how MXG traps SYSLOG messages.
There is NOTHING that the task did wrong, but without the APAR,
if the MXGTMNT task is not dispatched, the dispatcher will chain
SRBs to the task's TCB until the chain becomes unmanageable, and a
spin loop (i.e., 100% CPU Busy) occurs.
This problem was repeatedly experienced on one day, and it took over
seven hours and IBM's involvement to locate the culprit APAR.
IV. DB2 Technical Notes.
5. APAR PK46171 corrects DB2 zIIP Accounting Field QLACCLS1_ZIIP in
DB2ACCT, "TIME*EXECUTING*ON ZIP" values of 1250999:53 (hh:mm!),
because DB2 was not initializing the field causing residual data.
The bad values occurred only once, when Capacity did a WLM Policy
Change. This APAR cites QWACBJST and QWACEJST as being corrected.
4. The PAR.TASKS CPU TIME in DB2ACCT is NOT captured in CICSTRAN.
The IBM DB2PM (now a/k/a DB2 Performance Expert) Accounting Long
Report section "CP CPU TIME" is the total CP Engine CPU time for
two subgroups, AGENT and PAR.TASKS, and AGENT has four subtotals
for CPU time labeled NONNESTED, STORED PROC, UDF, and TRIGGER.
This note maps the MXG variables/observations in the PDB.DB2ACCT
dataset to those report CPU times, and, for DB2 calls from CICS,
documents which DB2 CPU times are NOT captured in the TASCPUTM
(IBM USRCPUT) in MXG's CICSTRAN dataset from ID=110 SMF records.
That "Long" Report summarizes many DB2ACCT observations, perhaps
for a PLAN, or AUTH, or ACE, and does not map to a single obs.
The "AGENT" subgroup are all DB2ACCT records with DB2PARTY='S',
the non-parallel workload.
The "PAR.TASKS" subgroup are all DB2ACCT records with DB2PARTY of
'P' (Parallel), 'R' (Rollup), or 'O' (Originator).
DB2PM Report Example with annotation:
   CP CPU TIME    .024   Total CPU time on CP Engines  AGENT+PAR.TASKS
                         Part is in TASCPUTM.
    AGENT         .008   Allied Agent CPU Time         NONNESTED+STORED+
                         Part is in TASCPUTM.          UDF+TRIGGERS
                         sum of next five:
     NONNESTED    .008   CPU TIME ON ORIG THREAD       QWACEJST-QWACBJST
                         L8 or L8 TCB.
                         All NONNESTED is in TASCPUTM.
     STORED PROC  .000   CPU time in Stored Procedure  QWACSPCP
                         They execute in a WLM ASID.
                         No QWACSPCP is in TASCPUTM.
     UDF          .000   CPU time for UDF              QWACUDCP
                         They execute in a WLM ASID.
                         No QWACUDCP is in TASCPUTM.
     TRIGGER      .000   Report prints the sum of:
      TRIGGER-TT  .000   Main Task Triggers            QWACTRTT
                         Is Included in EJST-BJST, so
                         QWACTRTT already in TASCPUTM.
      TRIGGER-TE  .000   Nested Trigger Activity       QWACTRTE
                         Not Included in EJST-BJST.
                         No QWACTRTE is in TASCPUTM.
    PAR.TASKS     .016   CPU in Parallel, Rollup       DB2TCBTM
                         or Originator Records
                         DB2PARTY  DEFINITION              ACE
                            R      QWHCPARR='Y'            QWACPACE
                                   Child Task Rollup
                            P      QWACPACE GT 0           QWACPACE
                                   Child parallel/subtask
                            O      QXMAXDEG,QWACPCNT GT 0  QWHSACE
                            S      ELSE                    QWHSACE
                         (no PAR.TASKS is in CICSTRAN).
Conclusions:
-For both AGENT and PAR.TASKS, that is, for all DB2ACCT obs:
DB2TCBTM=(QWACEJST-QWACBJST)+QWACSPCP+QWACUDCP+QWACTRTE;
is total TCB CPU time recorded in that SMF 101 record.
MXG Change 25.291 in MXG 25.25 added QWACUDCP to DB2TCBTM.
-For AGENT records, that is DB2ACCT obs with (DB2PARTY='S') that
are from CICS Attach (QWACATYP=4), the part of that DB2TCBTM that
is NOT recorded in SMF 110 CICSTRAN variable TASCPUTM is:
NOTINCICS= SUM(QWACSPCP,QWACUDCP,QWACTRTE);
-But for PAR.TASKS records, DB2ACCT obs with DB2PARTY of O/P/R from
CICS Attach, NONE of the total DB2TCBTM in that DB2ACCT obs is
recorded in the SMF 110 CICSTRAN variable TASCPUTM. Not in CICS:
NOTINCICS=DB2TCBTM;
NOTINCICS=SUM(QWACEJST-QWACBJST)+QWACSPCP+QWACUDCP+QWACTRTE;
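A minimal sketch of those conclusions (assuming PDB.DB2ACCT and the
variables named above; QWACATYP=4 identifies the CICS Attach):
   DATA NOTCICS;
     SET PDB.DB2ACCT;
     IF QWACATYP EQ 4;                   /* CICS Attach obs only    */
     IF DB2PARTY EQ 'S' THEN             /* AGENT records           */
       NOTINCICS=SUM(QWACSPCP,QWACUDCP,QWACTRTE);
     ELSE NOTINCICS=DB2TCBTM;            /* PAR.TASKS records       */
   PROC MEANS DATA=NOTCICS SUM;
     VAR NOTINCICS;
   RUN;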
On IBM's report, if a group has non-zero QWACTRTT and QWACTRTE, the
AGENT value should be smaller than the sum of its four components,
because IBM includes QWACTRTT in their single TRIGGER CPU time,
but (presumably) does NOT include it in the AGENT sum, as it is
already included in NONNESTED (EJST-BJST) CPU time.
See Change 25.168 for the _RDB2ACC Diagnostic Report macro that may
be useful for general examination of DB2 Parallel activity.
Jan 2008: IBM'S APAR PK48816 documents that PAR.TASKS CPU time is
not included in SMF 110 TASCPUTM (USRCPUT).
3. APAR PK23432 fixes DB2 GETPAGE counts in the range of 4 billion
due to an incorrect subtraction, in DB2ACCT and DB2ACCTB data.
2. APAR PK37569 reports that COMMIT_COUNT, PARAL_SUBTASK_NUM and
ROLLUP_PARAL_TASK (Tivoli names) are wrong when ACCUMAC option
(to reduce the number of SMF 101 records written) is enabled.
1. APAR PK28561 alters what IBM writes in SMF 101 Subtype 1 IFCID 239
records; previously all five segments were populated by default,
but this APAR creates a new Accounting Class 10, and only if that
class is enabled will all five segments be populated. Without class
10 enabled, only the QPAC and QPKG segments will exist, and data
from the QXPK, QBAC, and QTXA segments in the DB2ACCTP dataset will
be missing.
V. IMS Technical Notes.
VI. SAS Technical Notes.
10. Errors "UNRECOGNIZED SAS OPTION NAME, GUIDE CONFIGURATION ..." were
caused by the CONFIG DD for the SAS V9 execution pointing to an old
(BATCH) member rather than the correct (BATWO) member.
9. Two examples that will create a Comma Separated Values (CSV)
flat file from a SAS dataset:
Example 1:
ODS CSVALL BODY='some file';
PROC PRINT DATA=dataset;   /* or TABULATE or whatever */
RUN;
ODS CSVALL CLOSE;
Example 2:
DATA _NULL_;
SET dataset;
FILE 'some file' DLM=',';
PUT var1 var2 var3 var4 var5 ... ;
RUN;
8. RECORD FORMATS ARE DIFFERENT error occurred when part of a MONTHly
PDB was written to TAPETEMP with SEQENGINE=V9SEQ, but the program
was then changed to specify SEQENGINE=V6SEQ, which caused the error.
7. If you get RC=4 or other non-zero Return Codes on z/OS in SAS V9,
you can insert this statement before and after a section of code
%PUT SYSCC IS &SYSCC;
to print the current value of the return code, and can see when
the value is changed in your code. In SAS Version 9, WARNINGs set
RC=4, but under SAS Version 8, that was not always true.
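For example, a minimal sketch bracketing an %INCLUDE (BUILDPDB is
just an example member here):
   %PUT SYSCC BEFORE BUILDPDB IS &SYSCC;
   %INCLUDE SOURCLIB(BUILDPDB);
   %PUT SYSCC AFTER BUILDPDB IS &SYSCC;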
6. The UTILEXCL utility fails if you have the PROC SYNCSORT add-on
product, with a WER723A or other WER7xxa error message. This is
NOT the SYNCSORT Sorting product, but occurs with the PROC SYNCSORT
add-on that you purchased that is supposed to speed up SAS sorts.
The problem is that the UTILEXCL utility has a very long BY list,
and this causes the PROC SYNCSORT failure. You MUST remove the
PROC SYNCSORT Load Library from your //STEPLIB DD to run UTILEXCL.
Even when you remove the Load Library for PROC SYNCSORT, the normal
SYNCSORT sort won't be used, because SAS will recognize the long
by list and won't call the HOST sort program (telling you so on the
SAS log). Setting OPTIONS SORTPGM=SAS; won't work until you remove
the PROC SYNCSORT Load Library from //STEPLIB DD for this job.
The BY list in UTILEXCL is longer than the 4096-byte SyncSort limit;
the existence of the PROC SYNCSORT loadlib prevents SAS from getting
control to switch to its own sort, which has no such limitation.
The example in ANALDUPE also fails if PROC SYNCSORT is installed.
5. SAS Note SN-V9-017038 reports that SAS V9.1.3 with Service Pack 4
and with that Hot Fix can use DSNTYPE=LARGE datasets under z/OS 1.7
and later for bound SAS data libraries on disk.
DSNTYPE=LARGE z/OS datasets can occupy more than 64K tracks on a
single volume, so those datasets can fill all available tracks on
the largest volumes (54GB) that are currently available, reducing
the number of volumes for large SAS data libraries.
To create a "direct access bound library that resides in a DSNTYPE=
LARGE data set", you must externally allocate the library data set
with the DSNTYPE=LARGE parameter:
// EXEC MXGSASV9
//LARGEDB DD DSN=YOUR.DSNTYPE.LARGE,DISP=(,CATLG),UNIT=SYSDA,
// SPACE=(CYL,(5000,5000)),DSNTYPE=LARGE
Nov 7, 2007 update: Prior to z/OS 1.7, there was an IBM limit of
64K tracks when the I/O access method was EXCP, but IBM removed that
limit in 1.7, and the SAS Hot Fix above revised SAS EXCP code so the
DSNTYPE=LARGE can be fully exploited.
4. Some z/OS memory errors (ONLY reported in SYSLOG, not in SAS log!)
were caused by incorrect default OPTION values; setting these values
MEMLEAVE=10M
in the CONFIGV9 member and executing with
REGION=256M
eliminated those errors, which surfaced only in UTILEXCL with a
massive PDB.CICSDICT dataset as input.
3. IBM APAR PQ38655 is required if you are using IBM's OS/390 ftp
server with the SAS ftp Access method to ftp data from tape.
2. MXG 25.02's BUILDPDB has been successfully executed with SAS 9.1.3
on Windows Vista Home Premium Edition, on Windows Vista Enterprise,
and on Windows Vista Enterprise running on Microsoft Virtual PC,
with no errors nor any unexpected warnings.
However, this is NOT a guarantee of safety, since SAS Institute's
official statement in October 2007, in SN-020430, for SAS 9.1.3 is:
( http://support.sas.com/techsup/unotes/SN/020/020430.html )
I. Which Microsoft Windows Vista(TM) operating systems does SAS
9.1.3 support?
* SAS supports the following Windows Vista(TM) 32-bit editions:
- Enterprise
- Business
- Ultimate
* SAS does NOT support Windows Vista(TM) 32-bit Home Editions:
- Premium
- Basic
* SAS does NOT support Windows Vista(TM) 64-bit editions.
SAS has officially announced that VISTA wouldn't be fully supported
until SAS 9.2, which is not expected until late 2007 or early 2008.
Comparing runs on a 2.4GHz Vista machine with a 2.8GHz XP machine,
with the same memory but unknown disk differences, is not a valid
point in benchmark-space, but is comforting: 2:05 vs 2:19 run time,
1:42 vs 1:24 User CPU time.
1. ABEND EC6 when running an MXG job processing a lot of DB2 SMF data
is actually a SYSTEM 322 CPU TIME EXCEEDED condition that just
happened to occur in z/OS Unix System Services; there was also
BPXP013I THREAD ... WAS TERMINATED DUE TO CPU TIME OUT.
VI.A. WPS Technical Notes.
1. World Programming System, WPS, a product of World Programming, in
their words, "an alternative to SAS", has been installed on both
z/OS and Windows platforms, and Merrill Consultants has run tests
under two Beta versions of WPS Software. WPS has already replicated
most of the SAS language functions that are required for execution
of most of MXG Software, and there are several MXG sites that run
their production MXG jobs under WPS Betas on z/OS quite happily.
While much of MXG does execute under WPS Beta Version 2.0.8, there
are still significant errors to be resolved by WPC before I can
complete the execution of my suite of "SAS Clone" tests, and thus
I don't think it is worth your while to examine the WPS product for
MXG execution until fixes for those critical issues have been
delivered so that WPS can be fully evaluated.  The current status is:
- MXG Version 25.07 contains the initial minor changes required for
transparent MXG execution under SAS or WPS, and the new AUTOEXEW
for ASCII WPS autoexec.sas, and the new CONFIGW2 and MXGWPSV2 JCL
procedure example for MXG execution under WPS Version 2.0.8.
- "Compiler" tests compile the MXG code, and then execute that code,
to output all datasets with all variables and their attributes
defined, so the output structures can be compared and log messages
examined.  DUMMY input files are used, so all output datasets
have zero observations.
- "Execution" tests compile and execute with actual input data files
and output dataset's variables are populated with values and obs.
- On both Windows and z/OS platforms, almost all of the simple,
straightforward MXG programs that contain a single DATA step and
read an input file do execute without error under WPS, and appear
to produce compatible output datasets.
- A number of other, complicated, MXG programs that do execute under
Windows are currently ABENDing under z/OS, notably BUILDPDB and the
DB2 processing TYPSDB2 programs.
However, BUILDPDB did execute successfully in prior tests on
z/OS, so I expect this z/OS-only problem to be resolved soon.
- Almost all of the MXG QA compiler test steps on Windows are now
successful.  All variables' attributes (LENGTH, FORMAT, LABEL) in
all MXG datasets can be compared between WPS and SAS compilers.
Many datasets and variables do match perfectly, but we found some
WPS code that did not replicate SAS's handling of attributes.
When corrected by WPC, this phase of the validation will be completed.
- Almost all of the MXG QA execution tests on Windows with real data
do execute successfully, but I have not yet done ANY comparison
of the accuracy of the data values of WPS vs SAS data libraries.
- Many of the MXG QA compiler tests on z/OS do execute, but there
are ABENDs in critical SMF-processing steps that are also under
investigation and must be resolved before the first phase of
z/OS validation can be completed.
- Until the critical z/OS compiler tests are successful, we cannot
run the second phase, execution tests, on z/OS for that validation.
- And there are still several both-platform critical problems (all
in progress of repair) that prevent us from continuing testing of
MXG under WPS.  Based on their past responsiveness, I expect
corrections soon, likely within a few weeks to a month or so, by
which time they should have corrected all of the errors that MXG
has exposed in this third iteration of WPS QA tests.
- I have not started the third phase, comparing the accuracy of the
data values in the created output datasets with good and bad input
data on either platform.
- I have not started the fourth phase, comparing the execution time
and resource requirements (CPU, Memory, I/O, Disk space) on either
platform. However, compile run times on Windows are similar.
- I am aware that there are a few MXG sites that have been actually
running their tailored MXG production jobs under WPS, even under
earlier WPS betas.  But my job is to evaluate WPS to validate that
all of MXG runs correctly with their product, or at least to
identify what does, and what doesn't, work!
- Upon successful completion of all four phases of my validation on
both z/OS and Windows platforms, I will revisit my original
"SAS Clone" newsletter article with specific updates with regard
to how well WPS meets my criteria.
- I'd also like to formally state the business relationship between
Merrill Consultants and World Programming Company: WPC licenses
MXG Software for its software development, testing, and support,
and Merrill Consultants licenses WPC for its testing.  Both
companies' technicians cooperate on problem resolution.
August 10, 2007.
VII. CICS Technical Notes.
VIII. Windows NT Technical Notes.
IX. z/VM Technical Notes.
1. Adding PAVs to the z/VM configuration caused this message on z/VM:
FCXPMN446E Incomplete monitor data: DCSS size too small
when the IBM Performance Toolkit processed the MONWRITE data.
The MONWRITE file did NOT have a 1.9 MTRxxx record written at
startup, causing INTERVAL and HFRATE to be missing in VXBYUSR.
After the sample and event storage sizes in the DCSS were increased,
the MONWRITE data was valid.
X. Incompatibilities and Installation of MXG vv.yy.
1. Incompatibilities introduced in MXG 25.yy (since MXG 24.24):
See CHANGES.
2. Installation and re-installation procedures are described in detail
in member INSTALL (which also lists common Error/Warning messages a
new user might encounter), and sample JCL is in member JCLINST9 for
SAS V9.1.3 or JCLINST8 for SAS V8.2.
XI. Online Documentation of MXG Software.
MXG Documentation is now described in member DOCUMENT.
XII. Changes Log
--------------------------Changes Log---------------------------------
You MUST read each Change description to determine if a Change will
impact your site. All changes have been made in this MXG Library.
Member CHANGES always identifies the actual version and release of
MXG Software that is contained in that library.
The CHANGES selection on our homepage at http://www.MXG.com
is always the most current information on MXG Software status,
and is frequently updated.
Important changes are also posted to the MXG-L ListServer, which is
also described by a selection on the homepage. Please subscribe.
The actual code implementation of some changes in MXG SOURCLIB may be
different than described in the change text (which might have printed
only the critical part of the correction that need be made by users).
Scan each source member named in any impacting change for any comments
at the beginning of the member for additional documentation, since the
documentation of new datasets, variables, validation status, and notes,
are often found in comments in the source members.
Alphabetical list of important changes after MXG 24.24 now in MXG 25.01:
Dataset/
Member Change Description
See Member CHANGES or CHANGESS in your MXG Source Library, or
on the homepage www.mxg.com.
Inverse chronological list of all Changes:
Changes 25.yyy thru 25.001 are contained in member CHANGES.