I know that the COBOL RW is for producing printed reports. However, is
it possible to use RW to produce a report for screen display?
On Tuesday, April 11, 2017 at 6:50:57 PM UTC-4, Paul Richards wrote:
I know that the COBOL RW is for producing printed reports. However is
it possible to use RW to produce a report for screen display?
Interesting question!
I have not tried it, but it may be possible with, say, Micro
Focus, to use something like ASSIGN TO DISPLAY ORGANIZATION
LINE SEQUENTIAL.
On Tuesday, April 11, 2017 at 7:56:24 PM UTC-4, Rick Smith wrote:
On Tuesday, April 11, 2017 at 6:50:57 PM UTC-4, Paul Richards wrote:
I know that the COBOL RW is for producing printed reports.
However is it possible to use RW to produce a report for screen
display?
Interesting question!
[snip]
I have not tried it, but it may be possible with, say, Micro
Focus, to use something like ASSIGN TO DISPLAY ORGANIZATION
LINE SEQUENTIAL.
I had to find out!
-----
program-id. rw-test.
environment division.
input-output section.
file-control.
select rpt-file assign to display
organization line sequential
.
data division.
file section.
fd rpt-file
report is rpt.
working-storage section.
1 num comp pic 99.
report section.
rd rpt.
1 rpt-line type is detail.
2 line plus 1.
3 rpt-item column 1 pic z9 source is num.
procedure division.
open output rpt-file
initiate rpt
perform varying num from 1 by 1
until num > 10
generate rpt-line
end-perform
terminate rpt
close rpt-file
stop run
.
end program rw-test.
-----
1
2
3
4
5
6
7
8
9
10
-----
Rick Smith wrote:
[snip]
Rick
Thanks. I am using the Micro Focus compiler so it seems your example will
provide a basis for some experimentation on my side. Thanks again.
On Tuesday, April 11, 2017 at 7:56:24 PM UTC-4, Rick Smith wrote:
[snip]
Rick
I've been experimenting with RW-to-the-screen and have hit a problem I
can't resolve. I have a line sequential file (name.dat) containing 3
records (Last name, Middle name and Initial). I created the file with a normal COBOL program and can read and display the contents with another program.
When I try to use RW and send the output to the screen, the program
fails as it starts to read the third record with a status '9/18, Read
part record error: EOF before EOR of file open in wrong mode'.
I know the name.dat file is correctly formatted since my other 2
programs have created and read it. Any idea what's going wrong? The RW
code is below.
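(Paul's actual code did not survive in this copy of the thread. What follows is a sketch of the kind of program described, reconstructed from the record layout visible in the hex dump later in the thread -- a 20-character last name, 12-character middle name and 1-character initial -- using hypothetical data names, not the original program.)
-----
program-id. rw-name.
environment division.
input-output section.
file-control.
    select name-file assign to 'name.dat'
        organization line sequential.
    select rpt-file assign to display
        organization line sequential.
data division.
file section.
fd name-file.
1 name-record.
 2 last-name pic x(20).
 2 middle-name pic x(12).
 2 name-initial pic x.
fd rpt-file
    report is rpt.
working-storage section.
1 eof-flag pic x value 'n'.
 88 eof value 'y'.
report section.
rd rpt.
1 name-line type is detail.
 2 line plus 1.
  3 column 1 pic x(20) source is last-name.
  3 column 22 pic x(12) source is middle-name.
  3 column 35 pic x source is name-initial.
procedure division.
    open input name-file
    open output rpt-file
    initiate rpt
    read name-file
        at end set eof to true
    end-read
    perform until eof
        generate name-line
        read name-file
            at end set eof to true
        end-read
    end-perform
    terminate rpt
    close name-file rpt-file
    stop run
    .
end program rw-name.
-----
Note that the SOURCE clauses here refer to items under the input FD, which is exactly the arrangement the thread goes on to question.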
On Monday, April 17, 2017 at 11:34:32 PM UTC-4, Paul Richards wrote:
[snip]
Using Micro Focus 3.2.50:
I copied the code (changing the location of 'name.dat' for my system), created a file using the COBOL editor, and the program ran properly.
I then used notepad to alter the data file in various ways including eliminating all cr+lf pairs, and shortening and lengthening the third
record. I checked the file with hexedit to ensure the intended input
for each test. It, simply, would not fail!
As I recall, the only time I have ever seen a '9/18' was with a
corrupted indexed file, which can't apply here. I would use
the hexedit utility to examine 'name.dat' to make sure everything
is as expected.
notepad display of name.dat -----
Jones Alex P
Jones Mary A
Jones Sam W
hexedit of name.dat -----
000000  4A 6F 6E 65 73 20 20 20-20 20 20 20 20 20 20 20  Jones
000010  20 20 20 20 41 6C 65 78-20 20 20 20 20 20 20 20      Alex
000020  50 0D 0A 4A 6F 6E 65 73-20 20 20 20 20 20 20 20  P..Jones
000030  20 20 20 20 20 20 20 4D-61 72 79 20 20 20 20 20         Mary
000040  20 20 20 41 0D 0A 4A 6F-6E 65 73 20 20 20 20 20     A..Jones
000050  20 20 20 20 20 20 20 20-20 20 53 61 6D 20 20 20            Sam
000060  20 20 20 20 20 20 57 0D-0A                              W..
Hexediting NAME.DAT  Len 000069
display from running program -----
Jones Alex P
Jones Mary A
Jones Sam W
-----
What I get from the running program is:
Richards Paul A
Richards Zoe
at which point it errors out.
So I'm at a loss to know what's going on.
On Tuesday, April 18, 2017 at 8:42:13 AM UTC-4, Paul Richards wrote:
[snip]
What I get from the running program is:
Richards Paul A
Richards Zoe
at which point it errors out.
So I'm at a loss to know what's going on.
The missing 'K' bothers me.
I suggest putting 'display name-record' before the generate,
comment the generate, and test. If that works, uncomment the
generate and test again. There may be some interaction between
the generate and the read that may not appear with the display
statement only. I have no idea what that interaction may be.
Just trying to turn a loss into a gain.
On Tuesday, April 18, 2017 at 12:14:27 PM UTC-4, Rick Smith wrote:
[snip]
It also occurs to me that, if there is an interaction, reading
the record into working-storage before generating the output
from working-storage might correct the problem. You can thank
Mr Dwarf if that resolves the problem.
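In code, Rick's suggestion amounts to reading each record INTO a WORKING-STORAGE copy and pointing the SOURCE clauses at that copy rather than at the FD record. A sketch with hypothetical names (not Paul's program):
-----
working-storage section.
1 ws-name-record.
 2 ws-last-name pic x(20).
 2 ws-middle-name pic x(12).
 2 ws-initial pic x.
report section.
rd rpt.
1 name-line type is detail.
 2 line plus 1.
  3 column 1 pic x(20) source is ws-last-name.
  3 column 22 pic x(12) source is ws-middle-name.
  3 column 35 pic x source is ws-initial.
procedure division.
*> read into working-storage, then generate from the copy,
*> so GENERATE never touches the FD record area
    read name-file into ws-name-record
        at end set eof to true
    end-read
    generate name-line
-----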
I'm not sure what I did :-( but the 'display' option produced the
required result and when I reinstated the 'generate', commenting out
'display', the correct output resulted. So I'm a bit stumped. Anyway,
got there in the end. Many thanks for your assistance. I'd never used
Report Writer before so this was a useful lesson.
Incidentally, who is Mr Dwarf? docdwarf?
In article <01cf8821-bd4e-4961-822b-ed1113061b7e@googlegroups.com>,
Rick Smith <rs847925@gmail.com> wrote:
[snip]
It also occurs to me that, if there is an interaction, reading
the record into working-storage before generating the output
from working-storage might correct the problem. You can thank
Mr Dwarf if that resolves the problem.
Always READ INTO, WRITE FROM... that is the Chant of the Elders which I
was taught.
(It was based on the injunction Thou Shalt Not Perform Arithmetic
Operations In A Buffer)
And please... jes' ol' Doc, that's all.
DD
Always READ INTO, WRITE FROM... that is the Chant of the Elders which I
was taught.
(It was based on the injunction Thou Shalt Not Perform Arithmetic
Operations In A Buffer)
On IBM compilers prior to VS COBOL 1.4 this made sense for read
because the addressability of the read buffer dropped when AT END was detected. This changed to being dropped when a CLOSE was issued with
the implementation of 1.4 and it was easier to find data in the dump
when debugging especially in later releases with LE (Language
Environment). WRITE FROM for files with variable length records
continued to be of value because it allowed best use of the write
buffer.
Clark Morris
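The variable-length case Clark describes looks, in outline, like this (hypothetical names; RECORD IS VARYING as in the 1985 Standard):
-----
fd out-file
    record is varying in size from 10 to 100 characters
        depending on out-length.
1 out-rec pic x(100).
working-storage section.
1 out-length pic 9(3) comp.
1 ws-out-rec pic x(100).
procedure division.
*> build the record in working-storage, set its actual length,
*> then let WRITE ... FROM ... move only that many bytes
    move 35 to out-length
    write out-rec from ws-out-rec
-----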
In article <01cf8821-bd4e-4961-822b-ed1113061b7e@googlegroups.com>,
Rick Smith <rs847925@gmail.com> wrote:
[snip]
It also occurs to me that, if there is an interaction, reading
the record into working-storage before generating the output
from working-storage might correct the problem. You can thank
Mr Dwarf if that resolves the problem.
Always READ INTO, WRITE FROM... that is the Chant of the Elders which
I was taught.
(It was based on the injunction Thou Shalt Not Perform Arithmetic
Operations In A Buffer)
There is a GOOD reason for this logic -
Many compilers, at least on mainframes release the memory after a record is written and only issues it on a read so by doing READ into and WRITE from cuts down the possibility of getting silly program faults that get trapped
by the OS which issues a large dump for no good reason :)
On Wednesday, April 19, 2017 at 1:50:59 PM UTC+2, docd...@panix.com wrote:
[...]
Always READ INTO, WRITE FROM... that is the Chant of the Elders which I
was taught.
(It was based on the injunction Thou Shalt Not Perform Arithmetic
Operations In A Buffer)
I've never had a problem *not* using READ ... INTO .../WRITE ... FROM
..., so other than being old, I'm not sure what your Elders had.
Hello docdwarf!
Bad enough reading 'em but the carrying was tougher on the arms and back taking them from the machine room to my office!
On Wednesday, April 19, 2017 at 7:44:54 PM UTC+2, Vince Coen wrote:
[...]
There is a GOOD reason for this logic -
Many compilers, at least on mainframes release the memory after a record is written and only issues it on a read so by doing READ into and WRITE from cuts down the possibility of getting silly program faults that get trapped by the OS which issues a large dump for no good reason :)
Not so.
WRITEs of a sequential data set are to a "buffer". After a WRITE, the FD points to the next available position in the buffer (or in the next buffer, if the previously current one was deemed "full"). Entirely Standard behaviour.
To modify that behaviour, in an entirely Standard way, is simple.
Coughing up an old pointless Mantra does a disservice to newcomers to
COBOL (and they are there).
If your program logic is correct, there is no problem with leaving data
where it was read, or building a record in the definitions beneath an
FD.
If your program logic is not correct, then Mantra, even when it "saves"
by coincidence, is just Mantra. The program is still incorrect.
In article <530a5ecd-1222-4609-a1e4-7d38196ac6b6@googlegroups.com>,
Bill Woodger wrote:
On Wednesday, April 19, 2017 at 1:50:59 PM UTC+2, docd...@panix.com wrote:
[...]
Always READ INTO, WRITE FROM... that is the Chant of the Elders which I was taught.
(It was based on the injunction Thou Shalt Not Perform Arithmetic
Operations In A Buffer)
I've never had a problem *not* using READ ... INTO .../WRITE ... FROM
..., so other than being old, I'm not sure what your Elders had.
The Elders had great abilities, Mr Woodger, and could code such as *ten* programmers cannot, today. That you have had no problem may speak less to what The Elders knew and more to your experience.
<https://groups.google.com/forum/#!search/%22RESERVE$20NO$20ALTERNATE$20AREAS%22/comp.lang.cobol/wtoadHUrX2s/aNmeHAFTvmMJ>
DD
Bill Woodger wrote:
On Wednesday, April 19, 2017 at 7:44:54 PM UTC+2, Vince Coen wrote:
[...]
There is a GOOD reason for this logic -
Many compilers, at least on mainframes release the memory after a record is written and only issues it on a read so by doing READ into and WRITE from cuts down the possibility of getting silly program faults that get trapped by the OS which issues a large dump for no good reason :)
Not so.
WRITEs of a sequential data set are to a "buffer". After a WRITE, the FD points to the next available position in the buffer (or in the next buffer, if the previously current one was deemed "full"). Entirely Standard behaviour.
To modify that behaviour, in an entirely Standard way, is simple.
Every line of modified code takes time to discuss in the Production Implementation Meeting, Mr Woodger.
[snip]
Coughing up an old pointless Mantra does a disservice to newcomers to
COBOL (and they are there).
If your program logic is correct, there is no problem with leaving data where it was read, or building a record in the definitions beneath an FD.
If your program logic is not correct, then Mantra, even when it "saves"
by coincidence, is just Mantra. The program is still incorrect.
When black-letter law changes in a way that was not predicted when the program's skeleton was written a half-century ago there is no 'your
program' or 'your program logic'.
When a newcomer to COBOL asks 'what is the reason for this?' an
Elder-to-be has an explanation.
When a newcomer to COBOL has any sort of difficulty with READ INTO, WRITE FROM it is time to suggest a different way of making a living.
DD
On Wednesday, April 19, 2017 at 9:45:42 PM UTC+2, docd...@panix.com wrote:
Bill Woodger wrote:
To modify that behaviour, in an entirely Standard way, is simple.
Every line of modified code takes time to discuss in the Production Implementation Meeting, Mr Woodger.
Who's talking of modifying anything?
[snip]
Coughing up an old pointless Mantra does a disservice to newcomers to
COBOL (and they are there).
If your program logic is correct, there is no problem with leaving data
where it was read, or building a record in the definitions beneath an
FD.
If your program logic is not correct, then Mantra, even when it "saves"
by coincidence, is just Mantra. The program is still incorrect.
When black-letter law changes in a way that was not predicted when the
program's skeleton was written a half-century ago there is no 'your
program' or 'your program logic'.
When a newcomer to COBOL asks 'what is the reason for this?' an
Elder-to-be has an explanation.
When a newcomer to COBOL has any sort of difficulty with READ INTO, WRITE FROM it is time to suggest a different way of making a living.
When black-letter law is Cant and nonsense it can be forgotten. Simple.
You'd be happy with this?
READ INPUT-MASTER INTO MASTER-RECORD
If you can explain the Cant, so it isn't Cant, then do so. You can't.
On Wednesday, April 19, 2017 at 9:35:08 PM UTC+2, docd...@panix.com wrote:
In article <530a5ecd-1222-4609-a1e4-7d38196ac6b6@googlegroups.com>,
Bill Woodger wrote:
On Wednesday, April 19, 2017 at 1:50:59 PM UTC+2, docd...@panix.com wrote:
[...]
Always READ INTO, WRITE FROM... that is the Chant of the Elders which I was taught.
(It was based on the injunction Thou Shalt Not Perform Arithmetic Operations In A Buffer)
I've never had a problem *not* using READ ... INTO .../WRITE ... FROM
..., so other than being old, I'm not sure what your Elders had.
The Elders had great abilities, Mr Woodger, and could code such as *ten* programmers cannot, today. That you have had no problem may speak less to what The Elders knew and more to your experience.
<https://groups.google.com/forum/#!search/%22RESERVE$20NO$20ALTERNATE$20AREAS%22/comp.lang.cobol/wtoadHUrX2s/aNmeHAFTvmMJ>
Let's see you try to code that.
Show us the results, please.
Examples of pre-1964 COBOL, even given that NO existed at the time,
are... how useful?
On Wednesday, April 19, 2017 at 7:44:54 PM UTC+2, Vince Coen wrote:
[...]
There is a GOOD reason for this logic -
Many compilers, at least on mainframes release the memory after a
record is written and only issues it on a read so by doing READ into
and WRITE from cuts down the possibility of getting silly program
faults that get trapped by the OS which issues a large dump for no
good reason :)
Not so.
WRITEs of a sequential data set are to a "buffer". After a WRITE, the
FD points to the next available position in the buffer (or in the next buffer, if the previously current one was deemed "full"). Entirely
Standard behaviour.
To modify that behaviour, in an entirely Standard way, is simple.
READs are the same.
We've discussed this before elsewhere, Vince: https://sourceforge.net/p/open-cobol/discussion/help/thread/b4890407/
Coughing up an old pointless Mantra does a disservice to newcomers to
COBOL (and they are there).
If your program logic is correct, there is no problem with leaving
data where it was read, or building a record in the definitions
beneath an FD.
If your program logic is not correct, then Mantra, even when it
"saves" by coincidence, is just Mantra. The program is still
incorrect.
[...]
Let's see you try to code that. Show us the results, please.
Examples of pre-1964 COBOL, even given that NO existed at the time,
are... how useful? Out with 1985.
On Wednesday, April 19, 2017 at 5:37:00 PM UTC+2, Clark F Morris wrote:
[...]
On IBM compilers prior to VS COBOL 1.4 this made sense for read
because the addressability of the read buffer dropped when AT END was
detected. This changed to being dropped when a CLOSE was issued with
the implementation of 1.4 and it was easier to find data in the dump
when debugging especially in later releases with LE (Language
Environment). WRITE FROM for files with variable length records
continued to be of value because it allowed best use of the write
buffer.
Clark Morris
Always if doing "one behind", have all necessary data in WORKING STORAGE anyway. So even end-of-file should not be a problem.
I'm interested in what you could mean by how WRITE ... FROM ... "allowed best use of the write buffer".
On Wed, 19 Apr 2017 09:38:37 -0700 (PDT), Bill Woodger wrote:
[...]
I'm interested in what you could mean by how WRITE ... FROM ... "allowed best use of the write buffer".
For variable length non-VSAM files if WRITE is used then the buffer must always have to be able to accommodate the maximum size record being built. If WRITE FROM is used then the buffer is filled by moving the actual size of the record for each new record until the record being moved won't fit.
Clark Morris
You can 100% rely on all fields in the current record (subject to the record description being accurate, and if not accurate INTO/FROM isn't going to fix it). When the current record is no longer current, you can 100% rely on the data being different, other than by coincidence (subject to not using a "record area").
If newcomers are told that, rather than "Do as I say, and as I do, even though neither of us know why, or will ever likely find out, we've always done it that way", they are done a service.
Why would data be left lying around after a WRITE? Do people expect it to be?
Wednesday April 19 2017 23:49, Bill Woodger wrote to All:
You can 100% rely on all fields in the current record (subject to the record description being accurate, and if not accurate INTO/FROM isn't going to fix it). When the current record is no longer current, you
can 100% rely on the data being different, other than by coincidence (subject to not using a "record area").
If newcomers are told that, rather than "Do as I say, and as I do,
even though neither of us know why, or will ever likely find out,
we've always done it that way", they are done a service.
I don't tell new programmers how to write code.
I did specify or help to do so depending on site a site standards
document/s for programming, programmer testing, Test docs for the dedicated Test group/s with processes and procedures and where needed CASE tools etc.
I do not tell a programmer how to write Cobol; however, that said, there are recommendations and these do include the reasons why.
Maybe nowadays with modern compilers it is not needed, but new progs to
site DO need to know why coding on existing programs was done in a
specific way and the real risk of problems of changing the processes;
the READ into etc is a good example, another one is usage of SAME
(RECORD) AREA clauses - a great way of auto creating a lot of bugs just for the sake of removing them.
I come from the old school at least in management - If it ain't broke don't fix it! As more often than not it will be very expensive in both time and budget.
I still come across cobol code that goes back to the 60's, heck some of
mine (now made O/S) goes back to 65-7. Now must admit that the code has
been changed since but . .
I will put my hand up that one recommendation going back to the late 60's
early 70's was to avoid using COMPUTE statements in other than simple math;
compilers now handle them well, although maybe not as efficiently as
add/subtract/divide etc, at least regarding rounding etc. The last bit is
the only one that might still apply :)
Vince
On Thursday, April 20, 2017 at 10:50:01 AM UTC+12, Bill Woodger wrote:
Why would data be left lying around after a WRITE? Do people expect it to be?
Yes.
On an ICL 1900 with ISAM files using bucket overflow one can get
strange results.
A new record had to go into a particular bucket in the pre-allocated,
fixed length file to maintain record sequence. If there was no room in
the bucket then the record would be put into level 1 overflow
(pre-allocated on the end of the same cylinder) and a tag put into the bucket pointing to the record. If there was no room for the tag then a record would be moved from the bucket into overflow and a tag put in
for that. If level 1 overflow became full then level 2 overflow was
used (pre-allocated at the end of the file) and extra blocks were
logically added to the bucket.
The problem occurred when the file ran out of level 2 overflow when
trying to WRITE a new record. The FD record buffer was used to do the
move of an existing record into overflow to give room for the tag
(after the record being written had gone into overflow). There was now
no room in the file for the record being moved. An error was given
that the record could not be written, but the FD record area no longer contains the record just created, it has an old record that is now no
longer in the file.
Even if the WRITE was successful the FD record area may be a different record.
Hello Vince!
On Thursday, April 20, 2017 at 9:30:02 PM UTC+2, Vince Coen wrote:
Hello Bill!
Wednesday April 19 2017 23:49, Bill Woodger wrote to All:
You can 100% rely on all fields in the current record (subject to the record description being accurate, and if not accurate INTO/FROM isn't going to fix it). When the current record is no longer current, you can 100% rely on the data being different, other than by coincidence (subject to not using a "record area").
If newcomers are told that, rather than "Do as I say, and as I do, even though neither of us know why, or will ever likely find out, we've always done it that way", they are done a service.
I don't tell new programmers how to write code.
"There is a GOOD reason for this logic" you said. Well, there isn't,
until it is shown.
I did specify or help to do so depending on site a site standards
document/s for programming, programmer testing, Test docs for the
dedicated Test group/s with processes and procedures and where
needed CASE tools etc. I do not tell a programmer how to write
Cobol howver that said there are recommendations and these do
include the reasons why.
Well, the reason why is?
Maybe nowadays with modern compilers it is not needed but new
progs to site DO need to know why coding on existing programs was
done in a specific way and the real risk of problems of changing the
processes; the READ into etc is a good example, another one is
usage of SAME (RECORD) AREA clauses - a great way of auto creating a
lot of bugs just for the sake of removing them.
What's to "explain" about READ ... INTO ... and WRITE ... FROM ...?
The manual for the compiler will readily deal with that. There is
nothing intrinsically "good" or "bad" about them. They are the same as
READ ... followed by an equivalent MOVE, or a WRITE ... preceded by
the equivalent MOVE.
"Explaining" in any terms beyond that is...? It's not a technique,
it's not a method, it's the same thing, and the results are the same.
Nothing "happens" to the data under the FD after a READ, nor before a
WRITE, except what has been coded by the programmer. Nothing to
explain here.
I come from the old school at least in management - If it ain't
broke don't fix it! As more often than not it will be very expensive
in both time and budget. I still come across cobol code that goes
back to the 60's, heck some of mine (now made O/S) goes back to
65-7. Now must admit that the code has been changed since but . .
I will put my hand up that one recommendation going back to the late
60's early 70's was to avoid using COMPUTE statements in other than
simple math; compilers now handle them well although maybe not as
efficiently as add/subtract/divide etc at least regarding rounding
etc. The last bit is the only one that might still apply :) Vince
COMPUTE will always (did I say always?) be faster than multiple other
arithmetic operations in other than trivial uses. In trivial use, it
will be identical. It's been that way ever since I started.
Of course, that does not mean you can code any random sequence of
stuff in a COMPUTE and expect it to work like a calculator (until the
COBOL 2014 Standard). Then you see the Cant, Myth, Magic and Mantra
with recommendations for excessive numbers of decimal places, because
"it is the only way that COMPUTE works".
It must be worth all of nothing to write the line of code.
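The trivial case Bill mentions can be sketched like this (hypothetical fields; both forms do the same arithmetic):
-----
1 a pic 9(4) value 1234.
1 b pic 9(4) value 56.
1 r1 pic 9(9).
1 r2 pic 9(9).
...
*> one COMPUTE...
compute r1 = (a + b) * 2
*> ...versus the equivalent separate statements
add a b giving r2
multiply 2 by r2
*> both r1 and r2 now hold 2580
display r1 ' ' r2
-----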
Hello Vince!
The thing about advice is that it ages. Or its merit ages, even though
the advice does not (and should).
If Cant, Rant and Mantra from Olden Times is flapped around in front of newcomers, it does nothing for them.
"Always sign your binary fields, and make them the maximum number of
digits which fit in the size - for performance!".
Bill Woodger
[snip]
It must be worth all of nothing to write the line of code.
We each evaluate the worth of our production, Mr Woodger.
Hit the machine ; US$25
Knowing where to hit the machine: substantially more
DD
Bill Woodger wrote:
Hello Vince!
The thing about advice is that it ages. Or its merit ages, even though
the advice does not (and should).
If Cant, Rant and Mantra from Olden Times is flapped around in front of newcomers, it does nothing for them.
"Always sign your binary fields, and make them the maximum number of
digits which fit in the size - for performance!".
'Hey, Senior-Level Resource Person, this code has IFs that go on for
pages, what's the reason they didn't use an EVALUATE?'
'Hey, Senior-Level Resource Person, this code's full of 'READ INTOs,
what's the reason for that?'
'Hey, Senior-Level Resource Person, these comments say 'SUBSCRIPT OVERFLOW FOR TABLE SIZE LIMIT'... what does that mean?'
'There's no need to know anything about that... all besides me are fools
and there are no giants upon whose shoulders you might stand.'
... and I am the King of England.
DD
Hello Bill!
Friday April 21 2017 00:19, Bill Woodger wrote to All:
You seem to be writing about reasonably modern compilers that have a good
test history over time, as many new ones are derived from a previous incantation.
The READ / WRITE into/from rules were put into place for good reasons, to
avoid possible coding bugs, and others have given some of the bug examples.
On Thursday, April 20, 2017 at 3:44:16 AM UTC+2, Clark F Morris wrote:
On Wed, 19 Apr 2017 09:38:37 -0700 (PDT), Bill Woodger wrote:
[...]
I'm interested in what you could mean by how WRITE ... FROM ... "allowed best use of the write buffer".
For variable length non-VSAM files if WRITE is used then the buffer
must always have to be able to accommodate the maximum size record
being built. If WRITE FROM is used then the buffer is filled by
moving the actual size of the record for each new record until the
record being moved won't fit.
Clark Morris
Thanks Clark. That would be nice, but it is not so. Here's the generated code for MOVE followed by WRITE and then WRITE ... FROM ...
000041         MOVE   OUT-STUFF
0004F6 5840 9138       L    4,312(0,9)        BLF=1
0004FA D209 4000 8000  MVC  0(10,4),0(8)      OUT-REC
000042         WRITE
000500 5840 9140       L    4,320(0,9)        OUTPUT-FILE+20
000504 9200 40C9       MVI  201(4),X'00'      FCB=2
000508 9200 40B3       MVI  179(4),X'00'      FCB=2
00050C 5850 9138       L    5,312(0,9)        BLF=1
000510 4B50 A01C       SH   5,28(0,10)        PGMLIT AT +8
000514 1F66            SLR  6,6
000516 BF63 8010       ICM  6,3,16(8)         OUT-LENGTH
00051A 5960 A010       C    6,16(0,10)        PGMLIT AT
00051E 4740 B1E2       BC   4,482(0,11)       GN=17(00052A)
000522 5960 A010       C    6,16(0,10)        PGMLIT AT +8
000526 47D0 B1EC       BC   13,492(0,11)      GN=16(000534)
00052A GN=17           EQU  *                 FCB=2
00052A 9680 4073       OI   115(4),X'80'      FCB=2
00052E D203 4040 4054  MVC  64(4,4),84(4)     FCB=2
000534 GN=16           EQU  *
000534 4A60 A01C       AH   6,28(0,10)        PGMLIT AT +20
000538 8960 0010       SLL  6,16(0)           PGMLIT AT +118
00053C 5060 5000       ST   6,0(0,5)          BUFFER
000540 D203 4074 A07E  MVC  116(4,4),126(10)  FCB=2
000546 58F0 4040       L    15,64(0,4)        FCB=2  GN=18(00055C)
00054A 0D5F            BASR 5,15
00054C 9500 40C9       CLI  201(4),X'00'      FCB=2
000550 4770 B214       BC   7,532(0,11)
000554 4110 1004       LA   1,4(0,1)
000558 5010 9138       ST   1,312(0,9)        BLF=1
00055C GN=18           EQU  *
000043         WRITE  OUT-STUFF
00055C 5840 9138       L    4,312(0,9)        BLF=1
000560 D209 4000 8000  MVC  0(10,4),0(8)      OUT-REC
000566 5840 9140       L    4,320(0,9)        OUTPUT-FILE+20
00056A 9200 40C9       MVI  201(4),X'00'      FCB=2
00056E 9200 40B3       MVI  179(4),X'00'      FCB=2
000572 5850 9138       L    5,312(0,9)        BLF=1
000576 4B50 A01C       SH   5,28(0,10)        PGMLIT AT +8
00057A 1F66            SLR  6,6
00057C BF63 8010       ICM  6,3,16(8)         OUT-LENGTH
000580 5960 A010       C    6,16(0,10)        PGMLIT AT
000584 4740 B248       BC   4,584(0,11)       GN=20(000590)
000588 5960 A010       C    6,16(0,10)        PGMLIT AT +8
00058C 47D0 B252       BC   13,594(0,11)      GN=19(00059A)
000590 GN=20           EQU  *                 FCB=2
000590 9680 4073       OI   115(4),X'80'      FCB=2
000594 D203 4040 4054  MVC  64(4,4),84(4)     FCB=2
00059A GN=19           EQU  *
00059A 4A60 A01C       AH   6,28(0,10)        PGMLIT AT +20
00059E 8960 0010       SLL  6,16(0)           PGMLIT AT +118
0005A2 5060 5000       ST   6,0(0,5)          BUFFER
0005A6 D203 4074 A07E  MVC  116(4,4),126(10)  FCB=2
0005AC 58F0 4040       L    15,64(0,4)        FCB=2  GN=21(0005C2)
0005B0 0D5F            BASR 5,15
0005B2 9500 40C9       CLI  201(4),X'00'      FCB=2
0005B6 4770 B27A       BC   7,634(0,11)
0005BA 4110 1004       LA   1,4(0,1)
0005BE 5010 9138       ST   1,312(0,9)        BLF=1
0005C2 GN=21           EQU  *
Put those side-by-side in a fixed-width font and compare. The only things different are the names of the branches, and the displacements that are branched to.
The code is otherwise identical, as it should be, because that is what the Standard says.
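For anyone wanting to reproduce the comparison, the COBOL behind those two sequences is just this (a sketch; OUT-STUFF and OUT-REC are the names from the listing, the surrounding program is assumed):
-----
*> statements 41/42: explicit MOVE, then plain WRITE
    move out-stuff to out-rec
    write out-rec
*> statement 43: the combined form
    write out-rec from out-stuff
-----
The Standard defines WRITE ... FROM ... as the MOVE followed by the WRITE, which is why the generated code matches instruction for instruction.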
With IBM's Enterprise COBOL (and before - I'm just not going to check for now *when* before) it works like this:
For blocked, variable-length, sequential files, the FD maps to an area in an output buffer. Once you do a WRITE (whether plain or with FROM) the IO routines step on to the next record position in the buffer, writing the block out once a maximum-length record can no longer fit (the FD must always map to a full maximum-size area in the buffer).
This behaviour can be modified (remembering, for other readers, that that doesn't mean "change a program to do something else") by using APPLY WRITE-ONLY.
With APPLY WRITE-ONLY, the FD is no longer mapped to the buffer, but has its own record-area. When the WRITE is attempted, it can be determined whether or not the data can fit in the buffer (block). If not, on to the next buffer.
With APPLY WRITE-ONLY there is another, implicit, MOVE of the data - from record-area to whichever buffer. If used where needed, APPLY WRITE-ONLY improves performance when taking into account the subsequent reading of the data. Across the board, with AWO, no guarantees. Most effective when there is a wide spread of record lengths, with most records well short of the maximum.
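For other readers, a minimal sketch of where the clause goes, assuming IBM Enterprise COBOL (the file and data names are invented for illustration):
-----
environment division.
input-output section.
file-control.
    select out-file assign to outdd.
i-o-control.
*> with this clause the FD gets its own record area; the block is
*> written only when the actual record won't fit in the buffer
    apply write-only on out-file.
data division.
file section.
fd  out-file
    recording mode v
    block contains 0 records
    record is varying in size from 20 to 200 characters
        depending on out-len.
1 out-rec pic x(200).
working-storage section.
1 out-len comp pic 9(4).
-----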
In article <530a5ecd-1222-4609-a1e4-7d38196ac6b6@googlegroups.com>,
Bill Woodger <bill.woodger@gmail.com> wrote:
On Wednesday, April 19, 2017 at 1:50:59 PM UTC+2, docd...@panix.com wrote: [...]
Always READ INTO, WRITE FROM... that is the Chant of the Elders which I
was taught.
(It was based on the injunction Thou Shalt Not Perform Arithmetic
Operations In A Buffer)
I've never had a problem *not* using READ ... INTO .../WRITE ... FROM
..., so other than being old, I'm not sure what your Elders had.
The Elders had great abilities, Mr Wodger, and could code code such as
*ten* programmers cannot, today. That you have had no problem may speak
less to what The Elders knew and more to your experience.
<https://groups.google.com/forum/#!search/%22RESERVE$20NO$20ALTERNATE$20AREAS%22/comp.lang.cobol/wtoadHUrX2s/aNmeHAFTvmMJ>
DD
Hello Bill!
Wednesday April 19 2017 23:49, Bill Woodger wrote to All:
> You can 100% rely on all fields in the current record (subject to the
> record description being accurate, and if not accurate INTO/FROM isn't
> going to fix it). When the current record is no longer current, you
> can 100% rely on the data being different, other than by coincidence
> (subject to not using a "record area").
> If newcomers are told that, rather than "Do as I say, and as I do,
> even though neither of us know why, or will ever likely find out,
> we've always done it that way", they are done a service.
I don't tell new programmers how to write code.
I did specify, or helped to do so depending on the site, site standards
document/s for programming, programmer testing, and test docs for the dedicated test group/s, with processes and procedures and, where needed, CASE tools etc.
I do not tell a programmer how to write Cobol; however, that said, there are recommendations, and these do include the reasons why.
Maybe nowadays, with modern compilers, it is not needed, but programmers new to a
site DO need to know why coding in existing programs was done in a
specific way, and the real risk of problems from changing the processes.
The READ INTO etc. is a good example; another one is usage of SAME
(RECORD) AREA clauses - a great way of auto-creating a lot of bugs just for the sake of removing them.
I come from the old school, at least in management: if it ain't broke, don't fix it! As more often than not it will be very expensive in both time and budget.
I still come across Cobol code that goes back to the 60's; heck, some of
mine (now made O/S) goes back to 65-7. Now, I must admit that the code has
been changed since, but . .
I will put my hand up that one recommendation going back to the late 60's/early 70's was that COMPUTE statements should be avoided for other than simple math. Compilers now handle them well, although maybe not
as efficiently as ADD/SUBTRACT/DIVIDE etc., at least regarding
rounding etc. The last bit is the only one that might still apply :)
Hello Vince!
The thing about advice is that it ages. Or its merit ages, even though the advice does not (and should).
If Cant, Rant and Mantra from Olden Times is flapped around in front of newcomers, it does nothing for them.
"Always sign your binary fields, and make them the maximum number of digits which fit in the size - for performance!".
If you do that with Enterprise COBOL on an IBM Mainframe, you'll get no difference in performance, at best, or worse performance, with poorly-described binary fields.
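To make the old rule concrete (field names invented; a sketch, not a recommendation):
-----
*> the Olden Times rule: signed, maximum digits for the storage size
1 old-style-count pic s9(4) comp.
*> declaring what the data actually needs
1 plain-count pic 9(3) comp.
-----
On old compilers the signed, "full-size" picture could avoid extra sign- and truncation-handling instructions; with Enterprise COBOL and appropriate TRUNC settings that edge is largely gone.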
There is no intrinsic merit in READ ... INTO .... It is, by Standard, exactly the same as READ ... followed by MOVE ... TO .... Same with WRITE ... FROM .... What is supposed to happen? The data suddenly "goes bad" after the use of one COBOL verb?
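In other words (names invented; a sketch of the equivalence the Standard defines):
-----
*> this...
    read in-file
        at end set eof-in to true
        not at end move in-rec to ws-rec
    end-read
*> ...behaves exactly as this:
    read in-file into ws-rec
        at end set eof-in to true
    end-read
-----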
OK, so, due to Ancient Compilers (and let's assume 100% correct problem-determination) an irrational fear of referencing data under an FD has grown.
Irrational. Until someone can rationalise it (and "We've always done it that way" or "Joe Bloggs said so" don't count as rationalisation) it remains irrational.
I can't rationalise it, and I'd be mightily impressed if someone can, with relation to compilers to at least the 1985 Standard.
As alluded to, and mentioned in the link, it is entirely possible to have an FD whose data is in a storage area which can be referenced before a file is opened, before a record is read, after end-of-file, and after a file is closed.
Do people do that? Yes. Do people recommend that? Yes. Is it rational? No.
Why would data be left lying around after a WRITE? Do people expect it to be? Yes.
Can mistakes be made with READ ... INTO ... and WRITE ... FROM ...? Of course.
You can 100% rely on all fields in the current record (subject to the record description being accurate, and if not accurate INTO/FROM isn't going to fix it). When the current record is no longer current, you can 100% rely on the data being different, other than by coincidence (subject to not using a "record area").
If newcomers are told that, rather than "Do as I say, and as I do, even though neither of us know why, or will ever likely find out, we've always done it that way", they are done a service.
Hello Richard!
Thursday April 20 2017 22:40, Richard wrote to All:
Thanks for that; now I remembered the problem, one amongst many others :)
For the ICL 1900 compiler, I was at 2-3 sites where I was the compiler liaison between ICT/ICL and the site, and that included passing bug reports back to the dev. team, as well as fixes to the compiler that I had coded and tested against in-house programs and test systems. Yes, I had the source code for the compiler - it saved time and money.
After the test completed, giving correct results, the code changes were then passed to the compiler liaison person for inclusion into the next release or patch, and these were issued to all sites (well, all sites that had a maintenance contract of one description or another).
My site/s had this as a freebie to cover my costs, not that I was doing this full time, as I was an A/P or Lead A/P at the time, working on site
applications using Cobol, Plan and whatever else. The role continued with the introduction of the new range (2900), by which time I was working for ICL
and was also involved in writing VME. I said writing it, not designing the mishmash that was the first release - it was written in Algol 68R in a very modular fashion, with very detailed specs, so all programming was in fact
just coding directly from the module spec. It did NOT help to see bugs in the design, as each programmer had a block of specs to code that did not
link together. It did allow VME to be written very quickly, but the testing of it was another kettle of fish, and yes, tested on a 1900 series.
After the first release some modules were rewritten (after design changes) into S3.
Those were the days :)
> On Thursday, April 20, 2017 at 10:50:01 AM UTC+12, Bill Woodger wrote:
>> Why would data be left lying around after a WRITE? Do people expect
>> it to be? Yes.
> On an ICL 1900 with ISAM files using bucket overflow one can get
> strange results.
> A new record had to go into a particular bucket in the pre-allocated,
> fixed length file to maintain record sequence. If there was no room in
> the bucket then the record would be put into level 1 overflow
> (pre-allocated on the end of the same cylinder) and a tag put into the
> bucket pointing to the record. If there was no room for the tag then a
> record would be moved from the bucket into overflow and a tag put in
> for that. If level 1 overflow became full then level 2 overflow was
> used (pre-allocated at the end of the file) and extra blocks were
> logically added to the bucket.
> The problem occurred when the file ran out of level 2 overflow when
> trying to WRITE a new record. The FD record buffer was used to do the
> move of an existing record into overflow to give room for the tag
> (after the record being written had gone into overflow). There was now
> no room in the file for the record being moved. An error was given
> that the record could not be written, but the FD record area no longer
> contains the record just created, it has an old record that is now no
> longer in the file.
> Even if the WRITE was successful the FD record area may be a different
> record.
Vince
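Richard's bucket-insertion sequence, as hedged pseudocode (all names are invented; this is a sketch of the described behaviour, not ICL code):
-----
*> pseudocode: WRITE of a new ISAM record
    if room in home bucket
        place new record in home bucket
    else
        place new record in level-1 overflow
            (or level-2 overflow if level 1 is full)
        if no room in home bucket for the tag
*>          an existing record is moved to overflow VIA THE FD RECORD
*>          AREA, clobbering the record the program just wrote; if
*>          overflow is now full the WRITE fails, and the FD holds the
*>          displaced old record instead of the one just created
            move an existing record from bucket to overflow
        end-if
        place tag in home bucket
    end-if
-----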
Hello Bill!
Wednesday April 19 2017 19:59, Bill Woodger wrote to All:
Your logic may well be totally valid today, BUT for compilers going back to, say:
Well, I could say 1401, but will start with the 360, such as:
ANSI Cobol
OS/VS 1 and possibly 2.
There were also some of the next batch of compilers with similar issues -
sorry, can't remember the details - too far back. Heck, I have enough problems trying to remember why I went from one room to another at times.
Vince
On 20/04/17 7:35 AM, docdwarf@panix.com wrote:
The Elders had great abilities, Mr Wodger, and could code code such as
*ten* programmers cannot, today. That you have had no problem may speak
less to what The Elders knew and more to your experience.
<https://groups.google.com/forum/#!search/%22RESERVE$20NO$20ALTERNATE$20AREAS%22/comp.lang.cobol/wtoadHUrX2s/aNmeHAFTvmMJ>
Really enjoyed this linked story.
On Friday, April 21, 2017 at 4:52:57 PM UTC+2, docd...@panix.com wrote:
Bill Woodger
[snip]
It must be worth all of nothing to write the line of code.
We each evaluate the worth of our production, Mr Woodger.
Hit the machine: US$25
Knowing where to hit the machine: substantially more
Well, it just can't be done. Avoid that if you will.
On Friday, April 21, 2017 at 5:00:01 PM UTC+2, docd...@panix.com wrote:
Bill Woodger
Hello Vince!
The thing about advice is that it ages. Or its merit ages, even though
the advice does not (and should).
If Cant, Rant and Mantra from Olden Times is flapped around in front of
newcomers, it does nothing for them.
"Always sign your binary fields, and make them the maximum number of
digits which fit in the size - for performance!".
'Hey, Senior-Level Resource Person, this code has IFs that go on for
pages, what's the reason they didn't use an EVALUATE?'
'Hey, Senior-Level Resource Person, this code's full of 'READ INTOs,
what's the reason for that?'
'Hey, Senior-Level Resource Person, these comments say 'SUBSCRIPT OVERFLOW FOR TABLE SIZE LIMIT'... what does that mean?'
'There's no need to know anything about that... all besides me are fools
and there are no giants upon whose shoulders you might stand.'
... and I am the King of England.
DD
"Always READ INTO, WRITE FROM... that is the Chant of the Elders which I
was taught.
(It was based on the injunction Thou Shalt Not Perform Arithmetic
Operations In A Buffer)"
You can't (well, you can, you did) side-slip into a completely different thing (explaining) from promulgating the dictation of Cant and
Mantra. Make your Straw Man if you like, but it is clear what it is.
Hello Vince!
On Friday, April 21, 2017 at 12:59:48 PM UTC+2, Vince Coen wrote:
Hello Bill!
Friday April 21 2017 00:19, Bill Woodger wrote to All:
You seem to be writing about reasonably modern compilers that have a good
test history over time, as many new ones are derived from a previous
incarnation.
Yes, I've never used a compiler prior to the 1968 Standard. Newcomers
will be unlucky if they have to use a compiler to the 1974 Standard
(there are some still extant).