COPYRIGHT (C) 1984-2021 MERRILL CONSULTANTS DALLAS TEXAS USA
MXG NEWSLETTER FORTY-THREE
****************NEWSLETTER FORTY-THREE**********************************
MXG NEWSLETTER NUMBER FORTY-THREE dated Nov 10, 2003
Technical Newsletter for Users of MXG : Merrill's Expanded Guide to CPE
TABLE OF CONTENTS
I. MXG Software Version.
II. MXG Technical Notes
III. MVS Technical Notes
IV. DB2 Technical Notes.
V. IMS Technical Notes.
VI. SAS Technical Notes.
VII. CICS Technical Notes.
VIII. Windows NT Technical Notes.
IX. Incompatibilities and Installation of MXG.
See member CHANGES and member INSTALL.
X. Online Documentation of MXG Software.
See member DOCUMENT.
XI. Changes Log
Alphabetical list of important changes
Highlights of Changes - See Member CHANGES.
COPYRIGHT (C) 2003 MERRILL CONSULTANTS DALLAS TEXAS USA
I. MXG Software Version 21.06 is now available.
1. Major enhancements added in MXG VV.RR:
See CHANGES member of MXG Source, or CHANGES frame at www.mxg.com.
II. MXG Technical Notes
2. If you execute MXG on unix and have EMC's IFS product, you can read
the SMF VBS directly from the MVS system, with no file transfer, by
adding ",raw" to the DSNAME operand, to preserve the BDW and RDWs:
FILENAME SMF '/mainframe/day.smf1,raw' RECFM=S370VBS BLKSIZE=32760;
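A minimal sketch of what might follow that FILENAME statement (assuming
the MXG source library has already been allocated as the SOURCLIB
fileref; the path shown is illustrative, and only type 70 processing is
requested here):
      FILENAME SOURCLIB '/opt/mxg/sourclib'; /* assumed MXG source path  */
      %INCLUDE SOURCLIB(TYPE70);             /* read the SMF fileref and */
                                             /* build the TYPE70 datasets*/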
1. Building MXG-recommended daily, weekly, and (maybe) monthly PDBs is
discussed in ACHAP33 (which needs to be updated!), but Chuck Hopf
has a different philosophy, in part because of his data volume:
At some point as a shop grows, it becomes impossible to run the
WEEKBLD/MONTHBLD programs. There is simply not enough disk space
available (or wasn't until it became simpler to build multi-volume
datasets - the STEPS dataset from last week has 1.1M observations
and given 20K online DASD volumes across 9 LPARS for 672
intervals/week it would be a staggering amount for TYPE74.)
Running WTD/MTD allows for the best of both worlds and has also
allowed for the extension of the Barry logic that there are 3
levels of data: Yesterday/Last week/Last month. In my
experience, I really only need all of the variables if I am
looking at yesterday so when I go to the WTD, I keep fewer (only
those I am likely to use) and when I go to the MTD I keep even
fewer.
So, what we have available to us at any point in time is:
Daily PDBs - 255 generations
SPIN - 255 generations
Weekly PDBs - 255 generations
Monthly PDBs - 255 generations
WTD PDB - 30 generations
MTD PDB - 30 generations
TREND - 255 generations (moving this to a daily update cycle)
So, I can get to any day's data if I need to, or any week's or any
month's, in a single dataset. It all depends on the need for
information. Not all datasets get carried into the WTD/MTD
process and most of the data is migrated to level 2 but it is all
quickly available and online not on tape.
In this environment, it makes a lot more sense than the classic
MXG approach where if I wanted to look at detail data from Monday
before last I would have to go against a weekly PDB. In fact, I
submit that it makes a lot more sense in almost any environment
(and eventually I will convince you.)
I can't disagree, as with reduced variables kept in his week-to and
month-to-date libraries, he has met his reporting needs, but I've
never been a fan of x-to-date files. Originally, I did use a GDG
for my daily PDBs, but I kept running into DSENQ problems with the
base GDG names when I wanted to read a daily PDB while today's
job was still running (or re-running, as BUILDPDB was still in
development!), and when I found I rarely needed to go back more than
seven days, I created the seven named day-of-week datasets, and the
last week's weekly PDB on DASD (this was before HSM), but copied the
last-weekly to a weekly GDG as my permanent past detail history.
However, if you have CA-11 (Job Scheduler), you MUST use GDGs for
all datasets, so that all jobs are recoverable; i.e., when the job
runs out of space, the data control clerks can increase the space
allocation and restart the jobs.
But if you like x-to-date datasets, then you will want to use the
powerful VMXG2DTE program that Chuck developed to meet his needs!
And now you have more choices in how you structure your PDBs.
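As a minimal sketch (not the VMXG2DTE code; the librefs and the KEEP=
list below are only illustrative), the keep-fewer-variables idea can be
as simple as appending a subset of each daily dataset to a week-to-date
library:
      PROC APPEND BASE=WTD.STEPS
                  DATA=PDB.STEPS (KEEP=JOB STEPNAME PROGRAM READTIME
                                       SYSTEM CPUTM EXCPTOTL)
                  FORCE;           /* FORCE tolerates the shorter list  */
      RUN;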
III. MVS Technical Notes.
42. APAR PQ79562 adds an SMF exit for IBM Session Manager, ISZE39SM,
which will enable the writing of SMF type 242 records at session end
time. The Facilities Reference manual under 'Slave Session
Termination E39' gives details included in type 242 SMF records.
STATS ON must be set at the System/Profile or User level if SMF
records are required.
May, 2004: Change 22.079, TYPEIBSM, supports this SMF record.
41. APAR OA05526 for SMF 66 records corrects hex nulls in ENTRNAME in
dataset TYPE6156.
40. APAR PQ79901 reports corrections to WebSphere Application Server
V5.0 for z/OS; EJB AverageResponseTime and MaximumResponseTime were
zero when they should not have been zero; numerous other errors are
also corrected.
39. APAR OW55830 for DFSMS/MVS NFS SMF type 92 records now allows their
recording to start at NFS server startup; previously, SMF writing
was disabled and the operator had to explicitly enable SMF using the
SMF=ON operand of the MODIFY command.
38. APAR OA03169 for z/OS 1.4 reports jobs may hang at step termination
if LINKAGE=BRANCH (instead of LINKAGE=WTO) is used in IEFACTRT.
37. APAR OA04402 reports stopping of writing RMF records and CPU loop in
ERBMFCEQ (RMF Address Space) with this "constellation": ENQ
gathering is restarted while there is an entry from the previous
instance left in the intermediate queue of GRS elements, and during
termination processing an entry is inserted by the RMF ENQ listen
exit. Only OS/390 2.10 thru z/OS 1.2 are impacted. But I liked the
use of "constellation" to describe multiple things!
36. APAR OA04812 is unlikely to impact you, but it reports that data in
SMF 70 records is missing if the interval is less than 5 seconds!
I have used one-minute intervals for benchmarks, but never tried
an interval that short; apparently it is supported!
I recall only one instance of strange data using a 1 minute RMF
recording interval, where the printer channel activity was only
recorded every tenth minute, and that one interval's counts were
for the preceding ten minutes. IBM's RMF reply was that what
you see is what you get from devices that only update their own
counters every 10 minutes. Oct 30, 2003.
35. SMF Type 108 Domino subtype 2 record's SPR (fix) MIAS5MCJCL corrects
zero values in variable DOMUCPU, User CPU time, in dataset TYPE1082,
for DOMUTYP='NRPC' observations; records with DOMUTYP='HTTP' do have
valid non-zero CPU measurements. Domino Release 6.02 contains that
fix, which apparently cannot be retrofitted on Release 5.
34. A new SYNCSORT "feature", ZSPACE, exists in SYNCSORT for z/OS. If
SYNCSORT determines that enough memory is available and there were
no SORTWKnn's coded (NOTA BENE: MXG HAS ALWAYS RECOMMENDED THAT YOU
HAVE REAL //SORTWKnn DDs in your MXG JCL!!!), SYNCSORT will bypass
the IEFUSI exits and will GETMAIN enough virtual storage to hold the
SORTWKnn space in virtual memory, with no external ability to
control, and no limits on how much virtual storage can be used.
The COREUSED field in the SYNCSORT dataset from their SMF record has
the virtual memory used, but there is no indicator that ZSPACE was
used for a particular sort.
33. HIPER APAR OA03577 from IBM for RSM/SRM, and a planned fix from
SYNCSORT (for their z/OS 1.1 release) should be installed if you
have lots of sorts with DSM enabled. Twenty or so parallel sort
jobs fixed 99% of the page frames between the 16MB Line and the 2GB
Bar; RSM failed to detect the page shortage, so SRM did not take any
action to correct the page shortage "below the bar". The IBM APAR
addresses the RSM problem so that SRM takes action when
available frames become too few; the SYNCSORT fix will limit the
amount of storage that it fixes. This problem can only occur with
SYNCSORT's global DSM option enabled; turning off DSM limits the job
to the memory specified in the VSCORET parm. Turning off DSM helps
the sysplex overall, but can elongate run times for large sorts.
Running out of storage "below the bar" was catastrophic, requiring
the failing system to be removed from the sysplex. Sysplex recovery
on this very large complex took about 3 minutes, and essentially all
work was halted on all systems during that recovery.
MXG Dataset TYPE71 variables SMF71AFB/MFB/XFB track the number of
fixed pages between 16M and 2G, with Average/Min/Max values.
PROC PLOT DATA=PDB.TYPE71;
BY SYSTEM;
PLOT (SMF71AFB SMF71MFB SMF71XFB)* STARTIME;
TITLE 'FIXED FRAMES BELOW THE BAR - 2GB IS 524,288 FRAMES';
You can also look at MXG Dataset TYPESYNC from SYNCSORT user SMF
to see JOBNAME COREUSED to identify the causing culprits and decide
individually to increase/decrease their VSCORET if DSM is OFF. In
TYPESYNC dataset, variable SYNDSMVL shows how much memory DSM added
to the initial memory value for each sort.
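As a minimal sketch (assuming the SYNCSORT user SMF records have already
been processed into PDB.TYPESYNC; the output name BIGSORTS is
illustrative), the largest fixed-memory consumers can be listed with:
      PROC SORT DATA=PDB.TYPESYNC OUT=BIGSORTS;
        BY DESCENDING COREUSED;
      RUN;
      PROC PRINT DATA=BIGSORTS (OBS=25); /* 25 biggest memory users      */
        VAR JOBNAME COREUSED SYNDSMVL;
      RUN;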
Chuck Hopf found that all of the jobs using large fixed memory had
SYNCSORT's WER418I messages, written when SYNCSORT dynamically uses
ZSPACE/HIPERSPACE, and then noted that WER418I only was written for
jobs with no //SORTWKxx DDs: If a job does not have SORTWK DDs,
SYNCSORT tries to do an INCORE sort and fixes all the memory it can:
      SORTWK   VSCORE   VSCORET   ZSPACE   Fixed Memory Used
       NONE     NONE     NONE      YES       124 MegaBytes
       NONE     1MB      16MB      YES       124 MegaBytes
         7      1MB      16MB      NO          6 MegaBytes
32. On Friday, August 22nd, IBM introduced their Mainframe Charter on
the internet http://ibm.com/zseries/announce/charter/.
This discussion was contributed to MXG Newsletters by Al Sherkow:
From the IBM website, the charter provides "a framework for planned
future investment and to highlight specific ways in which IBM
intends to deliver ongoing value to zSeries customers." There have
not yet been any new announcements but they are expected soon.
Three areas of special interest to MXG users are the impact on
software pricing, on sub-capacity pricing, and on WLC:
a. MSUs Lowered for z990 Servers, but not the Performance. This
will lower your Variable WLC software charges and perhaps your
MSU-based licenses from other vendors.
b. z/OS Charges Lowered for Variable WLC Below 315 MSUs. Provides
lower monthly charges for smaller installations.
c. When z/OS is used with NALC ("New Application License Charge"),
the z/OS charges have been lowered to z/OS.e levels.
What Size is a z990?
IBM has changed the Announced MSU Sizes of the z990 servers without
changing the actual performance; sizes are now about 10% smaller:
      Model        Previously        August 2003      Pct
                   Announced MSUs    Announced MSUs   Change
      z990-301           77                70          -9.0%
      z990-305          337               302         -10.4%
      z990-310          601               538         -10.4%
      z990-316          844               761          -9.8%
      z990-324         1192              1076          -9.7%
      z990-332         1512              1365          -9.7%
To implement this change IBM will upgrade the microcode on all z990s
to reflect the new Announced MSU Values. Microcode updates should be
available in September; the new values are used by the z/OS Workload
Manager and by other vendors' queries to the hardware for the
size/capabilities of LPARs.
The new MSU size is available for sites using pricing metrics of
full-capacity WLC, sub-capacity WLC or PSLC pricing. This new size
may also apply to the license agreements you have with other
software vendors, lowering your software cost-of-ownership for the
same performance.
What This Means to Your Site
This will certainly impact your capacity planning and reporting.
Now you have two different capacities, the existing Hardware MSU
capacity value calculated from SU_SEC in RMF 72 records, and this
new Software MSU Capacity (for Software Charging), based on
SMF70CPA.
If you were planning an upgrade to a z990-310 with 601 Hardware
MSUs, you were also planning a software budget impact based on 601
MSUs. Now, the maximum software charges will be based on the
Software 538 MSU capacity, even though you'll have 601 MSUs of
hardware capacity. Note that the Hardware Constant
(SU_SEC,LOSU_SEC) for the z990-310 is 17003.18, while the Software
SMF70CPA constant is 15220.8239.
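As a rough sketch (assuming PDB.TYPE70 exists and that NRCPUS holds the
engine count; the MSUCHECK, HARDMSU, and SOFTMSU names are invented
here, and these calculated "technical" MSUs can differ from IBM's
published values, as MVS note 9 below cautions), the two capacities can
be compared from your own data:
      DATA MSUCHECK;
        SET PDB.TYPE70;
        HARDMSU=SU_SEC  *NRCPUS*3600/1000000; /* from the SRM constant   */
        SOFTMSU=CPUSUSEC*NRCPUS*3600/1000000; /* from SMF70CPA(CPUSUSEC) */
      RUN;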
Carefully review your chargeback procedures; if you are using CPU
time or service units, as recorded in SMF/RMF, they should not see
any change, but if you are calculating or using MSU as your metric
for chargeback, you will have to decide which MSU capacity it will
be matched against, and possibly change your MSU coefficient in
your billing tables.
If you have been using a formula for conversion from MIPS to MSUs,
that may also require changes in your reporting programs.
MSUs, the SRM Constant, and the z/OS Workload Manager
The z/OS Workload Manager (WLM) uses the SRM Hardware Constant to
recover the original CPU seconds from the RMF Service Units in the
type 72 RMF data, but because the SRM Constant (SU_SEC) is not being
changed, the CPU times in RMF and SMF data will not change. What is
changing is the measure of capacity, as IBM has created this new
"Software MSU Capacity" for the z990 that is smaller than the nearly
new "Hardware MSU Capacity" we've just been learning to understand,
and what is changing in our data records (after a microcode update)
is that IBM will put the Software MSU Capacity in its MSU-related
RMF fields, and use it in the 4-hour rolling MSU averages.
APAR OW50998 is needed so that RMF reports and SMF data correctly
reflect the announced MSU values. Before OW50998, the image capacity
on RMF Partition Data Reports and the 4-hour rolling averages were
not correct.
WLM will use the changed MSU values when calculating 4-hour rolling
averages and image capacities. These values will also be reported
in RMF Partition Data Reports, RMF LPAR Cluster Reports, RMF III CPC
command output, and various other reports and displays, and should
also be used by your system monitors. When software products query
the LPAR and hardware configuration, the changed MSU values will be
returned to the calling program. The changed MSU sizes will be put
in SMF70CPA (MXG variable CPUSUSEC), which IBM labels the "physical
CPU adjustment factor".
z/OS Charges Lowered
IBM lowered the price of z/OS and some features for sub-capacity
sizes smaller than 315 MSUs on any zSeries server. Of course you
must be using Variable WLC. Earlier this year IBM lowered the entry
point to 3 MSUs; this change results in a further cost reduction.
This price change is effective October 1, 2003. The Lowered MSUs
discussed above also apply. If you have a z990-305 with previously
announced capacity of 337 (Hardware) MSUs, its capacity now is 302
(Software) MSUs, so it also falls into the lower z/OS price range.
There are many installed machines that will benefit from this
change. Five z990 models, twenty-five z900 models, and all of the
z800s are smaller than 315 MSUs. You should note this is a change
for z/OS only and not for the other Variable-WLC products. The
actual prices have not been announced yet, only the fact that they
will be changed soon.
z/OS "New Application License Charge"
Additionally, IBM has also lowered z/OS charges when you have a new
workload that qualifies for IBM's "New Application License Charge"
NALC, dropping the price to the level of z/OS.e, and this change is
also planned for G5, G6, and z900. To qualify for NALC pricing you
must be implementing a new workload such as SAP, Domino, PeopleSoft,
WebSphere, or some others. Generally a new machine must be acquired
for the new workload, but you may be able to implement the NALC
workload in an LPAR.
More information is also available at
http://www.sherkow.com/updates/aug2003.html
31. The new large format 3390 DASD (25GB per Volume, 30,000 3390 cyls),
available on the IBM ESS 2105-F08 (Shark), has the same value for
variable DEVMODEL='33909' for both 3390-9's and the new Super 9's.
30. When reading concatenated input files, MVS stops reading when it
encounters a DD DUMMY statement in the concatenation. This is not
new, but I didn't realize that was the case until I read SAS Note
SN-010483 stating that that is expected behavior according to IBM.
When a program attempts to read a dummy dataset the system does an
end-of-data exit immediately and ignores any data sets concatenated
after the DD DUMMY.
29. APAR OW54622 introduced an SQA overflow into CSA condition that
increased CPU time for many STCs over time; the new GETMAIN larger
than FREEMAIN was corrected by APAR OW55360. It has long been known
that when SQA is too small and expands into the CSA area, path
lengths are dramatically increased; you can detect this condition in
MXG dataset TYPE78VS variables SQAEXPNx.
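As a minimal sketch (assuming PDB.TYPE78VS exists; the SQAEXPN: prefix
list picks up all of the SQAEXPNx variables without naming each suffix,
and the SQAINCSA name is illustrative), intervals with SQA expansion
into CSA can be isolated with:
      DATA SQAINCSA;
        SET PDB.TYPE78VS;
        IF SUM(OF SQAEXPN:) GT 0;      /* keep intervals with expansion  */
      RUN;
      PROC PRINT DATA=SQAINCSA;
        VAR SYSTEM STARTIME SQAEXPN: ;
      RUN;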
28. Missing dataset information in SMF Type 42 subtype 6 (MXG dataset
TYPE42DS) can be caused by BMC's Mainview Batch Optimizer 2.3.0;
BMC Tracking Number F336716 was opened to correct their error.
27. APAR PQ72222 corrects SMF type 119 (TCP/IP V3) records that had an
invalid BLKSIZE and LRECL of 32768. The APAR text reconfirms the
SMF limit of 32756 bytes of data, i.e., LRECL=32760,BLKSIZE=32760.
26. APAR OA02569 corrects error in ICF Catalog Record SMF 60 that had
trashed values for variable ENTRNAME (IBM field SMF60ENM) for a
DEFINE USERCATALOG request, but the APAR also corrects blank values
for ENTRNAME in SMF 66 records for some ALTER requests.
25. WebSphere Service Level W401504 of Version 4.0.1 APAR PQ71127 fixes
performance problems including high overhead and duplicate SMF 120
records being written.
24. APAR OA02742 documents errors in IBM RMF WLMGL and CF reports, if
you happen to be comparing IBM post-processor reports with MXG's
(correct) ANALRMFR reporting. Errors included using wrong interval
for per-second values, or incomplete input data.
23. APAR OA03055 documents the cases in which SMF 88 records will not be
written for certain logger detected structure full (actually
logstream full) conditions; the correction will increment SMF88ESF
if Logger detects that a Logstream has exceeded its allowable limit
in the structure, but the structure may not be completely full.
For DASDONLY logstreams, previously undefined SMF88CS1 and SMF88CS2
variables are now defined by the PTF. Details are in the APAR text.
22. APAR OA03438 corrects type 42 subtype 6 S42DSMXS (Maximum Data Set
Service Time) that was incorrectly larger than S42DSMXR (Maximum
Data Set Response Time); the time for the Control Unit was included
in the S42DSMXS Service Time, but now is no longer added in.
21. APAR PQ71799 for SMF 103 corrects subtype 2 records created when the
HTTP Server is run in scalable mode. Records written by the queue
manager and by the queue servers contain essentially identical
combined information, and some of the numbers were inaccurate.
The APAR also provides new options: "separate", to cause each
scalable mode server to write only its own statistics to SMF,
instead of combined data, and "sync" to synchronize SMF 103 records
to the hour.
20. APAR OA01883 for RMM is needed if you get EDG4025I VOLUME nnnnnn
REJECTED, or IEC145I 413-08 error messages. These errors occurred
when z/OS 1.2, 1.3 or 1.4 is installed without that RMM APAR.
This APAR also applied to OS/390.
19. APAR OA02898 corrects a problem in SMS when a DELETE GDG FORCE was
used, but not all members of the GDG were deleted; the error was
caused by incorrect IBM code in HDZ11G0 changes to SMS.
18. To ftp MVS V/VB/VBS data files to PCs or workstations, or to MXG
support, you do NOT have to create a RECFM=U copy of the data file,
at least not with IBM's ftp program. Bob Charest found that IBM's
ftp program has a "DD:" argument that can be used to point to the
DDname of the file to be sent, and by using RECFM=U,BLKSIZE=32760 on
that DD statement, the full file, including BDWs and RDWs, will be
downloaded. The below example can be used to send SMF files to the
MXG support ftp site, but by changing the PARM= ip address, and the
"mxgtech mxgtech" (userid password), you can ftp your data file in a
single step, and that downloaded file can be read directly by MXG.
//MXGFTPOU JOB ACCT,'ABCD',CLASS=I,MSGCLASS=T,NOTIFY=&SYSUID
//FTP EXEC PGM=FTP,PARM='ftp.mxg.com'
//SYSPRINT DD SYSOUT=*
//SMFFILE DD DSN=YOUR.SMF.DATA,DCB=RECFM=U,BLKSIZE=32760,DISP=SHR
//INPUT DD *
mxgtech mxgtech
quote PASV
bin
put //DD:SMFFILE yourname.smf
close
quit
/*
The SYSPRINT output will tell you if the ftp transfer succeeded or
not; in a production environment, you probably want a Return Code
set if there was any error; the syntax for the IBM ftp program to
set Return Code 12 on any error is
//FTP EXEC PGM=FTP,PARM='ftp.mxg.com (EXIT=12'
17. APAR PQ73030 corrects incorrect HFS filename offsets in SMF 118
(TCP/IP - supported in MXG TYPETCP because they were originally user
SMF records). The offsets should only be non-zero in a RENAME event
record, but contained trash in other event records.
16. APAR OW54347 (supported in MXG Change 21.058) adds CMR Time to RMF
74 records and to RMF Device Reports, and eliminates CUBDL (CU Busy
Delay time) and DPBDL (Director Port Busy Delay Time) from both RMF
reports and from TYPE74 records.
15. APAR PQ71799 reports incorrect values in RMF reports based on SMF
103 subtype 2 records - Threads Used and Max Threads Used are zero
in RMF reports, but the SMF records (and hence MXG!) are non-zero.
14. DFSORT R14 SMF type 16 has incorrect CPU time (variable SORTCPTM,
IBM field ICECPUT) when DFSORT is called from a program which uses
dynamic allocation (because dynamic allocation also uses STIMER).
The incorrect CPU time is always 24 hours (X'0083D600').
APAR PQ72589 documents the error, with "FIN" - Fixed in Next.
13. JES3 only. SMF type 25 Fetch counts are corrected by OW56112, for
both tape (TAPFETCH) and DASD (DSKFETCH) scratch mounts.
12. ***ERROR.VMAC42.42LN4LEN for SMF 42 subtype 20 and 21 records is
corrected by IBM APARs OA02184 and OA08693. Using STOW DELETE was
fixed in OA02184, and using DESERV FUNC=DELETE is fixed in OA08693.
11. Previously, you could not use products that extend volumes (STOPX37,
PRO/SMS, SRM, and VAM) with SAS, but the Hot Fix Bundle 82BX04 has
changed the way SAS extends volumes so those products can now be
used with SAS V8.2 and that Hot Fix. SN-008936 points to SN-005642,
which points to the 82BB34 Hot Fix List, which shows SN-005642 as
having been introduced in 82BA57.
If you have to disable STOPX37 for a specific job step, add
//PROIGN DD DUMMY
to the step's JCL.
10. APAR OW53698 has been issued for incorrect IOUNITS and/or SERVUNIT
in SMF 30s; IOUNITS was very large (2x10**9) and SERVUNIT was not
the sum of CPUUNITS+SRBUNITS+IOUNITS+MSOUNITS. The error has been
fixed by IBM, and only occurred when running in 64 bit mode when a
SYSEVENT REALSWAP was issued.
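As a minimal sketch (assuming PDB.SMFINTRV, the type 30 interval
dataset, is being built; PDB.STEPS or PDB.JOBS could be substituted,
and the BADUNITS name is illustrative), records showing that symptom
can be flagged with:
      DATA BADUNITS;
        SET PDB.SMFINTRV;
        IF SERVUNIT NE SUM(CPUUNITS,SRBUNITS,IOUNITS,MSOUNITS);
      RUN;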
9. Cheryl Watson recommended that I include a warning about trying to
calculate the published MSU values from the SU_SEC values. The HDS
Skylines and Amdahl are not close, and IBM has several instances
where the calculated MSUs (technical) and published MSUs (marketing)
differ.
8. Invalid date/times in READTIME in TYPE26J2 records have been caused
by Computer Associates' CA-7 product, as a result of zaps that were
suggested by CA Technical Support (for which there was no formal
"Fix" Number).
7. APAR PQ69575 for SMF 119 corrects negative values in connection
count variables TCHWMRK and TCNCONNS in subtype 7 record.
6. APAR PQ70810 adds new data to SMF 108. Incomplete.
5. APAR PQ67142 sets IBM bit 7 ('......1'B) of byte 5 of TRANFLAG in
CICSTRAN to true if the Task Abnormally Terminated. Previously,
there was no flag to distinguish transactions that had abnormally
terminated and issued message DFHAC2236 from those that had not
issued that message.
4. APAR PQ70765 reports Remote and Local IP Addresses in SMF 119 are
incorrect in termination records; the local address is zero and
the remote address is the local address.
3. APAR OW52226 reports, without correction, that type 30 variables
TAPNMNTS (SMF30PTM) and TAPSMNTS (SMF30TPR) mount counts are not
updated for SMS dynamically allocated datasets which reside on tape.
While the specific case was JES3 controlled tapes, the APAR applies
to all SMS tapes, JES2 and JES3.
Sep 5, 2003: A PTF now exists, and the APAR text reports that the
error was corrected in DFSMS/MVS 1.5.
2. APAR OW55803 moves the issuance of IEC705I ("TAPE OPENED") message,
previously issued after the data set labels had been created, until
after the OPEN bit (DCBOFOPN) has been set; ABENDs 013 or 613 RC20
occur after the data set labels were created and before the OPEN was
successful. The APAR does not change the fact that SMF 15 records
will not be created when an OPEN abend is detected. SMF 15 records
will be created only after CLOSE, EOV, and FEOV abends (i.e., only
after the dataset has been successfully opened).
1. APAR OW57716 for SMF 89 records corrects the CPU Version Number,
which, after an upgrade of the CPC, still contained the older
processor, not the upgraded one.
IV. DB2 Technical Notes.
2. APAR PQ75731 fixes QTXANPL, which was incorrectly calculated from
the maximum number of locks held by transactions instead of the
locks held on user objects: the maximum locks held included locks
for catalog access that are not considered user locks.
1. DB2ACCT data, especially for DDF transactions, with DB2TCBTM much
greater than ELAPSTM and with accumulated values (the CPU time
increases across transactions), are due to IBM errors in QWACSPCP,
the CPU Time in Stored Procedures, which MXG includes in DB2TCBTM.
IBM APARs PQ65302 and PQ55259 correct the errors (actual fixes are
in PTFs UQ62410, UQ70983, and UQ62410), but neither the APAR nor PTF
text says anything about QWACSPCP.
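As a minimal sketch (assuming PDB.DB2ACCT; the output name SUSPECT is
illustrative), the suspect accounting records can be selected with:
      DATA SUSPECT;
        SET PDB.DB2ACCT;
        IF DB2TCBTM GT ELAPSTM;        /* TCB time exceeds elapsed time  */
      RUN;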
V. IMS Technical Notes.
VI. SAS Technical Notes.
10. SAS/ITRM "MVS" executes MXG code, but ITRM renames the datasets
(DETAIL.XTY70 in place of PDB.TYPE70), and MXG variable names are
truncated to seven positions, with a meta-data suffix added.
So you can't use MXG "sample" programs with the ITRM renamed PDB
datasets? Not true!
The MXG "PDB" datasets do exist in the ITRM DETAIL library, because
ITRM creates a SAS View to map their renames back to the original
MXG dataset and variable names. You just use DATA=PDB.TYPE70 or
SET PDB.TYPE70; syntax to access the original MXG PDB dataset.
ITRM builds its data in the DETAIL library, but libref's "PDB" to
that same library, just for this purpose!
All PDB datasets that are defined in the ITRM dictionary will have
a SAS View created, but it will only know about the variables that
are in that dictionary.
However, for completely new datasets added by MXG to the "PDB" that
are not yet in the ITRM dictionary, no VIEW is built, and any new
variables added to MXG datasets won't be in the VIEW until the ITRM
dictionary is updated.
However, however, all of the MXG datasets are actually available in
the ITRM WORK library, and they will have all of the new variables.
They can be copied or used even before ITRM's dictionary has been
updated, but you will need to insert this statement:
%let cpstgekp = Y ;
before your %CMPROCES/%CPPROCES macro, to prevent the deletion of
all of the WORK datasets. %CMPROCES/%CPPROCES stages all of the
datasets into the WORK libref, and then normally deletes all of
the WORK datasets.
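As a minimal sketch of that sequence (the %CPSTART/%CMPROCES arguments
are site-specific and omitted, and the MYCOPY libref and its DSNAME are
hypothetical):
      %cpstart ..... ;
      %let cpstgekp = Y ;               /* keep the staged WORK datasets */
      %cmproces ..... ;
      LIBNAME MYCOPY 'MY.COPY.LIBRARY'; /* hypothetical output library   */
      PROC COPY IN=WORK OUT=MYCOPY;
        SELECT TYPE70;                  /* copy any newly-added dataset  */
      RUN;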
9. With SAS/ITRM (formerly SAS/ITSV), to enable the "MXG Debugging"
options (SOURCE SOURCE2 MACROGEN MPRINT), you need to insert two
statements between the %cpstart and %cmproces/%cpproces calls:
%cpstart ..... ;
%let cp_nmsg=2;
options source source2 macrogen mprint;
%cmproces ... ;
or
%cpproces ... ;
With those options, you can tell which members were included from
which library (yours or mine!), you get the line numbers of the
compiled code, and you see the values of macro variables that you
have changed. Lots of print lines, but it usually lets me resolve
the problem in a single iteration.
8. SAS Note SN-010893 corrects an IBM ETR that claimed SAS had caused
a full system outage. IBM APAR OA04838 now acknowledges that the
system outage was NOT caused in any way by SAS, but instead was a
bad ESTAE routine in IBM's own DB2 REPLIDATA product!
7. New option DTRESET in SAS Version 9 prints the current time and
date in your output, instead of the date/time of the start of SAS.
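For example (a one-line sketch):
      OPTIONS DTRESET;        /* refresh the date/time on each page      */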
6. SAS Version 8.2, 9, VM1319 ABEND, or LOGICAL NAME ddname ASSIGNED
BUT NOT IN CURRENT SCOPE, when you reassign a FILENAME statement
and use a FILEREF (aka DDNAME) that was externally allocated (i.e.,
in your JCL), and the DSNAME in the new FILENAME statement is not
the same as the external DD. SAS Notes SN-010623 and SN-010629 only
provide a circumvention: "don't do that".
5. SAS Version 9.0 prints spurious NOTE for every variable in a label:
NOTE 49-169: The meaning of an identifier after a quoted string
may change in a future SAS release. Inserting white space
between a quoted string and the succeeding identifier is
recommended.
SAS acknowledges this note is in error, that nothing in MXG has to
be changed in the future, and has eliminated the spurious note in
SAS Version 9.1.
4. Error message WRITE ACCESS TO MEMBER PDB.CICSACCT.DATA IS DENIED
was seen at one site when it tried to use a PDB dataset that had
been copied by the FDR (Fast Dump Restore) product. Using PROC
COPY instead eliminated the error.
3. High-volume write-once read-once-normally datasets (like CICSTRAN
or DB2ACCT) are frequently written out as a tape GDG on MVS, but
only 255 days can be kept in a daily GDG. An alternative is to
create a unique MVS dataset name (MXG.CICSTRAN.APR0403) each day,
and let HSM migrate the dataset to tape after its one use. You
can use the LIBNAME statement and SAS code to create the date for
the DSNAME with this example for MVS:
DATA _NULL_;
TODAYDTE=UPCASE(PUT(TODAY(),DATE7.));
CALL SYMPUT('TODAYIS',TODAYDTE);
RUN;
LIBNAME CICSTRAN V6SEQ "MXG.CICSTRAN.D&TODAYIS"
DISP=(NEW,CATLG) SPACE=(CYL,(200,200)) UNIT=(SYSDA,5);
RUN;
will create DSN=MXG.CICSTRAN.D04APR03, and force the format to be
sequential (tape) format.
For PC/Unix, this syntax will create a directory name and assign
the libname CICSTRAN to that directory:
OPTIONS NOXWAIT;
DATA _NULL_;
TODAYDTE=UPCASE(PUT(TODAY(),DATE7.));
CALL SYMPUT('TODAYIS',TODAYDTE);
RUN;
x "md 'd:\mxg\cicstran\d&todayis' ";
LIBNAME CICSTRAN "d:\MXG\CICSTRAN\D&TODAYIS" ;
RUN;
That NOXWAIT option for Windows/ASCII systems is needed here only
because the X command is used; without NOXWAIT, after the X command
completed creation of the directory, your SAS session would come
back to wait for terminal input.
2. Using PROC COPY under V8 to copy a Monthly PDB failed on several of
the datasets with ERROR: THE RECORD FORMATS ARE DIFFERENT. The PDB
had been created with an old MNTHBLD that still used the V8SEQ tape
engine (i.e., LIBNAME xxx TAPE; had not been changed to &TAPENG as
documented in Change 18.104). Rebuilding the Month PDB with the
correct (still V6SEQ) tape engine resolved the errors.
1. SAS Version 9.1 has changed the WARNING message that Compression
was Disabled to a NOTE, so the Condition Code will be zero instead
of four. SAS Note SN-008632 discusses.
VII. CICS Technical Notes.
2. APAR PQ75068 reports CICS SMF 110 subtype 2 STID=24 A04 data was
wrong after PQ62574.
1. No observations in Landmark-ASG TMON MONITASK dataset because their
VTCECNTL file (Control file for TMON for CICS) loses the group
association for the CICS jobs when you run the convert program
against it - this causes the jobs to be associated with the default
profile, which has the TA (task) records turned off, and it is the
TA records that create observations in MONITASK dataset.
VIII. Windows NT Technical Notes.
IX. Incompatibilities and Installation of MXG 20.20.
1. Incompatibilities introduced in MXG 21.xx (since MXG 20.20):
See CHANGES.
2. Installation and re-installation procedures are described in detail
in member INSTALL (which also lists common Error/Warning messages a
new user might encounter), and sample JCL is in member JCLINSTL.
X. Online Documentation of MXG Software.
MXG Documentation is now described in member DOCUMENT.
XI. Changes Log
--------------------------Changes Log---------------------------------
You MUST read each Change description to determine if a Change will
impact your site. All changes have been made in this MXG Library.
Member CHANGES always identifies the actual version and release of
MXG Software that is contained in that library.
The CHANGES selection on our homepage at http://www.MXG.com
is always the most current information on MXG Software status,
and is frequently updated.
Important changes are also posted to the MXG-L ListServer, which is
also described by a selection on the homepage. Please subscribe.
The actual code implementation of some changes in MXG SOURCLIB may be
different than described in the change text (which might have printed
only the critical part of the correction that need be made by users).
Scan each source member named in any impacting change for any comments
at the beginning of the member for additional documentation, since the
documentation of new datasets, variables, validation status, and notes,
are often found in comments in the source members.
Alphabetical list of important changes after MXG 20.20 now in MXG 21.xx:
Dataset/
Member Change Description
See Member CHANGES or CHANGESS in your MXG Source Library, or
on the homepage www.mxg.com.
Inverse chronological list of all Changes:
Changes 21.yyy thru 21.001 are contained in member CHANGES.