Very OT -- The end of an era


Comments

  • LeatherGryphon Posts: 11,666
    edited April 2014

    The first computer I ever saw was at Syracuse University in 1965. I was there as a junior in high school for a several-day series of lectures (where I saw my first example of chemiluminescence, long before glow sticks came on the market). But the computer we were shown demonstrated a game of Blackjack on an ASR-33 teletype and was programmed with paper tape. 8-o

    The first machine I had the use of was an IBM 1130, in college at Florida Institute of Technology in 1967, but we had no network of any sort. In order to transfer the university's student data records from the 1130 to the new Xerox Sigma 5 we had to punch everything to cards. :-(

    The first machine I was in charge of was in 1974 at the Kennedy Space Center (KSC), in the Launch Control Center (LCC). It was a Raytheon 706. A year or so later we added a Raytheon RDS-500. Wow, two computers in the same room! At some point it became desirable to communicate between them, to synchronize high-speed events and transfer data, but Raytheon had built in no machine-to-machine capability. However, I examined the schematics for both and found a place where I could toggle a few bits on the backplane using software. I wire-wrapped to the appropriate places on the backplanes of each and, ta-da, I had connectivity, but I still had to invent a signaling protocol. So that's how I designed my first serial data transfer protocol. I never calculated the data transfer rate in baud, but it essentially worked just as fast as I could toggle the bits in software. Granted, the 706 used core memory and was no speed demon by modern standards, but the transfer was a hell of a lot faster than 300 baud. :-)
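
    To give the flavor of it, here's a minimal sketch of the bit-banged transmit side in modern C. This is not the original 706 code (that was assembly), and the memory-mapped address and timing loop are invented for illustration:

        #include <stdint.h>

        /* Hypothetical memory-mapped backplane bit; the real address is long lost. */
        #define TX_BIT (*(volatile uint8_t *)(uintptr_t)0xFF00u)

        static void bit_delay(void)
        {
            for (volatile int i = 0; i < 100; i++)  /* crude software timing loop; */
                ;                                   /* the real rate was simply as */
        }                                           /* fast as software could go   */

        /* Send one byte framed like async serial: start bit, 8 data bits
           (least significant first), stop bit. The line idles high. */
        void send_byte(uint8_t b)
        {
            TX_BIT = 0;                       /* start bit */
            bit_delay();
            for (int i = 0; i < 8; i++) {
                TX_BIT = (uint8_t)((b >> i) & 1);
                bit_delay();
            }
            TX_BIT = 1;                       /* stop bit: back to idle */
            bit_delay();
        }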

    PS: When I first started using the 706 it was programmed with paper tape and didn't even have a hard drive, though it did have a 7-track, 200 cpi magnetic tape drive. Believe me, running through all the steps to compile a FORTRAN program (reading the compiler from paper tape into core, inputting your program by paper tape, punching intermediate output to paper tape at 10 cps, loading the assembler by paper tape, punching the binary executable to paper tape at 10 cps) was a tedious process, and you learned quickly that you did NOT want to make a mistake if you were in a hurry. It would have been nice to write the intermediate outputs to the mag tape, but the compiler wasn't designed that way. The mag tape drive was used only to store research data.

    By the time I left the lab 5 years later we'd built quite a little system: graphic display monitors, a high-resolution dot-matrix electrostatic printer, 9-track 800 cpi tape drives, several hard drives, a high-speed card reader/punch, a video disk recorder, high-speed direct computer-to-computer data transfer, and remote data display miles away. We were measuring the electric field around KSC and using it to draw contour maps of the electric field potential, and we were also involved in plotting lightning strike locations. Your tax dollars at work! You see the results of some of this research on your local weather report these days when they show you lightning locations.

    The RDS-500 was more sophisticated than the 706. It came with a hard drive and floating-point hardware; the 706 had neither. When we added a hard drive to the 706 I had to write my own driver for it. Neither computer had a driver for the high-resolution electrostatic graphing printer, so I had to write drivers for both machines. And I had to write the floating-point arithmetic subroutines for the 706, because its hardware didn't do anything but integer arithmetic. And of course there were NO graphics routines from Raytheon for either machine. Everything we did, all the mapping and image display stuff, I wrote from scratch from basic principles.
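
    For the curious, "floating point in software" means juggling sign, exponent, and mantissa with integer instructions. A minimal sketch of a multiply in modern C, not the 706 routines themselves; it borrows the IEEE-754 single layout for familiarity and deliberately ignores rounding, zeros, infinities, overflow, and denormals:

        #include <stdint.h>

        uint32_t softfloat_mul(uint32_t a, uint32_t b)
        {
            uint32_t sign = (a ^ b) & 0x80000000u;
            int32_t  ea = (int32_t)((a >> 23) & 0xFFu);
            int32_t  eb = (int32_t)((b >> 23) & 0xFFu);
            uint64_t ma = (a & 0x7FFFFFu) | 0x800000u;  /* restore hidden 1 */
            uint64_t mb = (b & 0x7FFFFFu) | 0x800000u;
            uint64_t m  = ma * mb;                      /* 48-bit product   */
            int32_t  e  = ea + eb - 127;
            if (m & (1ull << 47)) { m >>= 24; e += 1; } /* product in [2,4) */
            else                  { m >>= 23; }         /* product in [1,2) */
            return sign | ((uint32_t)e << 23) | ((uint32_t)m & 0x7FFFFFu);
        }
        /* e.g. softfloat_mul(0x40000000, 0x40400000) == 0x40C00000,
           which is 2.0 * 3.0 == 6.0 in IEEE-754 single bit patterns. */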

  • fixmypcmike Posts: 19,611

    Ah, the IBM 1130. The computer I learned on in high school in the mid-'70s. Quite an old geezer by then, it did a lot of coughing and wheezing when it started up in the morning.

  • namffuak Posts: 4,185

    LeatherGryphon - kewl!

    I never got to play with real low-level driver code; the nearest I ever came was re-working from scratch two vendors' versions of the IBM 2780 protocol. OTOH, as the sysprog for a mid-size manufacturer I got to play with all kinds of equipment. We started with Burroughs medium systems running MCP-V (pronounced Master Control Program 5), then added Raytheon PTS-1200 programmable cluster controllers at our remote sites for key-to-disk and report transmission (one of the systems I re-worked the 2780 code on).

    Then we moved to a Honeywell mainframe (DPS-8) -- and I had to re-work the NPS 2780 code . . . and then we put in a leased network to replace the dial-up, and the Raytheons became Honeywell DPS-6 systems, running GCOS-6 (almost completely 100% not quite unix, but close). Then we brought in an HP-3000 to run our plant MRP requirements on, so the leased point-to-point network got Tellabs statistical multiplexors and X.25 packet switches so we could hang the HP terminals and printers at the remotes and still drive the DPS-6 synchronous communications.

    Then the marketing department finally found an order-entry/accounting/inventory-management package they could live with and we brought in an IBM 4300 running MVS. And the Honeywell was on the way out - just about the time Honeywell was getting out of computers anyway. So I got to pick up CICS as well as MVS, and do the upgrades to XA and finally ESA. And three hardware swaps on the IBM gear. And then - we went with SAP R/3 on IBM SP frames running IBM AIX (Almost unIX). And 12 years of continuous hardware and software upgrades . . .

    So roughly every 8 years we tipped the company over and started again, and roughly every three years, for a thirty-year span, I got to learn and play with new operating systems, system software, communications gear, and protocols. I only started getting bored at the end, when we brought in my planned replacement. In my free time I was the backup/recovery specialist and, for the final 10 years, the D/R hot-site procedure writer.

    And now that I'm retired I no longer have to worry about those late-night system-down phone calls. :-) Getting rid of the cell phone was the best part of the retirement. :-)

  • Subtropic Pixel Posts: 2,388

    My college used a 4341. There was no way to view your output in the JES spool, so we had to wait for anything and everything we ran to be printed. In my career I've worked on the 370, CDC Omega, 3081, 309x, 9xxx, z10, z14, even the Hitachi mainframes in the 80s. Today's new ones from IBM still have the oomph, though my rendering system has more memory than MVS (z/OS) uses today, and I don't get charged by the manufacturer for using 100% of my Intel CPU. But if you can afford the bills, the latest mainframe processors have more engines, and mainframes still have faster disk subsystems than anything running Windows, Linux, or Unix. And then there are these things called zIIPs, which are "specialty processors." These can take on certain types of work at a reduced cost of software licensing.

    Cryptography is huge in the mainframe world, right now probably second only to security.

    3380 and 3390 disk devices are still around, but only in virtual form: a genned esoteric referencing subsystems that contain dozens or hundreds of standard hard disk drives; nothing you could hide behind or hit with a hammer. You can define "mod-gajillion" devices far larger than the old 2 GB(ish) limits, and a disk "volume" probably spans many, many physical disks but still looks to your COBOL program like an old 3390. But you probably wouldn't want to bother, because system-managed storage handles the mundane very well.

    So gone are the big 3380 cabinets housing a bi-volume, laterally-spinning, washing-machine-sized platter. Gone are the banks and banks of disk drive cabinets. Gone for years now are the old tape cartridges with auto-loaders, or even the old silos that could pass tapes between each other. Most shops have even gone beyond "virtual tape" technology to "remote flash copy." Imagine being able to back up your system remotely with just a pause of a few seconds on the graveyard shift... no shutdowns, no quiescing your databases, no boxing up tapes for delivery to an offsite warehouse. No fuss!

    Today's computer room has lots and lots of ... racks. With doors, without doors. Very few "big box cabinets" remain. Usually only the mainframe processor has a cabinet, but if you open the front door, you'll see... racks!

    The OS and databases have been 64-bit for several years now, allowing for ginormously sized data warehouse DBMSs. Comm protocols such as VTAM/BTAM are fading as IP becomes the way of the world. The old languages such as Assembler, PL/I, COBOL, Fortran, and the mainframe versions of C and C++ are still present, but a lot of shops do most of their new development on the client/server side with some sort of 4GL tools, using the mainframe as just a big file server or database server. Many users only use their development tools, don't even use TSO anymore, and wouldn't know how to log on or code a batch job in JCL.

    You can even run varieties of Linux/Unix on mainframes; that's been available for over a decade now. But not Windows as far as I know, so no blue screens yet! :lol:

    Some things have gotten easier. No more paper manuals; it's all online. But a lot of things are still cumbersome, and this tends to chase young people just getting into IT far and fast from the mainframe. For example, the online manuals are still not built into the software, and the old ISPF user interface is still not graphical. Batch jobs still receive hieroglyphic abend codes such as S80A, SB37, S0C4, and so on. And unless you install special software from IBM or a vendor, you still can't debug or dynamically step through program code like you can on Windows or Linux. You will instead need dump-reading skills, or maybe pay somebody to write your code for you.

    It's a very powerful computing platform; very stable, reliable, and secure, and that's for sure. But IBM has let many things become stagnant.

    Still...Heavy Metal lives and it ain't just a movie from the 80's. It's still a great ride if you like this sort of thing. :coolsmirk:

  • kyoto kid Posts: 41,226

    LeatherGryphon said:
    The first computer I ever saw was at Syracuse University in 1965. ... Everything we did, all the mapping and image display stuff, I wrote from scratch from basic principles.


    ...Win!
  • LeatherGryphon Posts: 11,666
    edited April 2014

    namffuak said:
    LeatherGryphon - kewl!

    ... And now that I'm retired I no longer have to worry about those late-night system-down phone calls. :-) Getting rid of the cell phone was the best part of the retirement. :-)

    Yeah, being on call was a bitch! I avoided it for most of my career, but when they issued me a Windows 95 laptop and a pager I knew I'd made a wrong turn somewhere. I'd made myself too valuable, but not indispensable. :-( It was shortly after that point that I had the epiphany that we spend the first half of our lives becoming known and the second half trying to reclaim our privacy.

    And yeah, I burned out after 20 years, took a year off to ride a motorcycle around the US, Canada, and Australia while blowing my entire retirement savings, then came back to the business for another 5 years till I burned out again big-time, quickly went through what little savings I'd recouped in those 5 years, and was literally penniless. I've been "semi-retired"* since 2001, scratching out a living chasing viruses out of other people's PCs and replacing broken CD drives. But now the government has declared me officially ancient and I quite happily collect my well-earned Social Security check, which is almost three times what I earned fixing PCs. Yea! :-) And I have more compute power in my living room than all the computer labs I ever managed put together. And more storage on one hard drive than I ever had in any corporate setting. But what I miss the most is being around smart people. People who care about the news. People who know how things work. People who can talk about ideas rather than just things or other people or (shudder) sports.

    * Definition: "Semi-retired" --- Over 50 and unemployed.


    A note about manuals: I loved manuals. Back in the day, computers came with yards of paper manuals. In my lab in the LCC I had a 6-foot-tall cabinet full of them, and the frequently used stuff was on a table next to the control console in a 3-foot-long flip display, like at an auto supply store, all meticulously fitted with index tabs so that you could flip to whatever subroutine or control description you needed in half a second. When updates for software or hardware came out, the manufacturer issued real paper page changes that you meticulously inserted into the manual, or you risked getting hopelessly behind. The hundreds of paper tapes were kept in hundreds of little plastic drawers, like for tiny parts, all neatly labeled. A tape was retrieved from its drawer, read through the reader, and immediately wound into a roll and put back in the drawer, or you risked having a chaotic pile of tapes on the table for six months. Nobody fxxx'd with my tapes or I'd break their arm!!

  • namffuak Posts: 4,185

    LeatherGryphon said:
    ... And I have more compute power in my living room than all the computer labs I ever managed put together. And more storage on one hard drive than I ever had in any corporate setting. But what I miss the most is being around smart people. ...

    I can't come close to either the computer power or storage I used to run. Back in 2001/2002 it was easy to break IBM AIX users into three groups on usenet and the IBM support sites: people running flat files or home-grown data file management, who thought 9 to 12 GB was large; people running Oracle or DB/2 databases, who thought 40 to 60 GB was large; and then those of us running SAP R/3 on Oracle or DB/2, who thought 120 to 150 GB was small. When I left we had three copies of our main SAP instance, at 1.2 TB each, and two copies of our data warehouse, at 1.6 TB each, on two IBM ESS (Enterprise Storage Server) boxes, one with a capacity of 21 TB and the other 40 TB. Both are now off lease and replaced with IBM XIV systems, one 40 TB and the other 60 TB.

    But like you, I miss the people. We get together for lunch about 3 times a year, and that helps.

    Interesting trivia: the IBM 3380, marketed as a Count-Key-Data (CKD) device, wasn't. It was designed when VM was big, supported Fixed Block Architecture (FBA), and was engineered accordingly. Then the MVS support team reported that CKD was so entwined in MVS there was no way to strip it out. So the microcode was changed on the 3380 to reduce the block size to 32 bytes. That's the reason ALL 3380 capacity calculations indicated that everything would be written with a true length that was a multiple of 32.
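
    The effect on the capacity math, sketched in C (the real 3380 formula adds fixed per-record overhead, omitted here):

        #include <stdint.h>

        /* A record's "true length" on the 3380 rounds up to a multiple of 32. */
        uint32_t true_length(uint32_t record_bytes)
        {
            return (record_bytes + 31u) & ~31u;  /* round up to next multiple of 32 */
        }
        /* e.g. true_length(80) == 96: an 80-byte card image costs 96 bytes of track. */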

    For that matter, when Intel started down the road to Itanium as a Reduced Instruction Set Computer (RISC), the IBM mainframe was held up as a prime example of a Complex Instruction Set Computer (CISC). Again, wrong. The EXPOSED instruction set was complex, but all the instructions were written in microcode, with (I think) still the 72-bit instruction word from the original 360 series.

    On our last IBM mainframe, running MVS/ESA, the 'microcode' occupied 26 MB of our 128 MB of main memory.


    LeatherGryphon said:
    A note about manuals: I loved manuals. ... Nobody fxxx'd with my tapes or I'd break their arm!!

    Never had the paper tapes, but I agree on the manuals. And I wouldn't let ANYONE else put the updates in the manuals; if it got screwed up, I wanted no one else in the blame loop (and I kept the replaced pages for at least one additional update cycle, just in case . . .)

  • LeatherGryphon Posts: 11,666
    edited April 2014

    namffuak said:
    ... I can't come close to either the computer power or storage I used to run. ... When I left we had three copies of our main SAP instance, at 1.2 TB each, and two copies of our data warehouse, at 1.6 TB each.

    I never was responsible for large systems like that, but I have worked for a few days in some. The labs I managed were mostly research, with a small network of mostly Sun or HP UNIX machines, some with as much as 1 GB of RAM! Wow!! :-s

    namffuak said:
    ... when Intel started down the road to Itanium as a Reduced Instruction Set Computer (RISC), the IBM mainframe was held up as a prime example of a Complex Instruction Set Computer (CISC). Again, wrong. The EXPOSED instruction set was complex, but all the instructions were written in microcode.

    My expertise was assembly programming. I loved assemblers, and the more complex the instruction set the better. The Motorola 680xx series was my favorite for many years. Glacially slow by modern standards, but I did some amazing real-time stuff with them for the time.

    One of my favorite features of the 680xx series came when they introduced the atomic (uninterruptible/indivisible) instructions for manipulating lists. I used linked lists to manage real-time events and processes; it was how I thought about problems. Circular or finite, singly linked, doubly linked, forward scans, backward scans, multiple sliding indices, splitting and joining. Any one of these operations, if interrupted at the wrong moment, would corrupt the list, so the atomic list manipulation instructions guaranteed that you performed several small but related operations with assured uninterruptibility. This gave me the ability to manage several independent processes that used common resources on an as-needed basis, with calculable maximum delays, so necessary for proper real-time processes.
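
    In modern terms those 680xx instructions were a hardware compare-and-swap. A minimal sketch of an atomic list push using C11 atomics (modern C rather than 68000 assembly; the names are invented):

        #include <stdatomic.h>
        #include <stddef.h>

        struct node { int event; struct node *next; };

        static _Atomic(struct node *) head = NULL;

        /* Push a node on the front of the list; safe against interruption by
           other tasks because the final link is a single compare-and-swap. */
        void push(struct node *n)
        {
            struct node *old = atomic_load(&head);
            do {
                n->next = old;   /* point at the head we last saw */
            } while (!atomic_compare_exchange_weak(&head, &old, n));
        }                        /* retry if another task got there first */
        /* A matching pop needs more care: a plain compare-and-swap pop has
           the classic ABA problem, part of why the 68020 also offered the
           two-location CAS2. */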

    The programming practice in vogue at the time was the much simpler idea of structured loops where you grabbed a resource and locked it, and it was unavailable to other processes until you unlocked it. If your program was designed poorly, the unlock operation could be buried deep in a nest of loops or inside a commercial subroutine package you had no control of, and could take excessively long to release the resource, or worse yet cause a deadlock, where two (or more) processes wait forever for each other to finish with a resource. Windows was rife with problems like that. It's somewhat better now, but I still see this problem in various products today. Ever wonder why it sometimes takes so long to abort some operations?
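
    The deadlock is easy to reproduce in a sketch: two threads, two locks, taken in opposite order (modern C with pthreads; the names are invented):

        #include <pthread.h>

        static pthread_mutex_t printer = PTHREAD_MUTEX_INITIALIZER;
        static pthread_mutex_t tape    = PTHREAD_MUTEX_INITIALIZER;

        static void *job_a(void *arg)
        {
            pthread_mutex_lock(&printer);
            pthread_mutex_lock(&tape);     /* blocks if job_b holds the tape */
            pthread_mutex_unlock(&tape);
            pthread_mutex_unlock(&printer);
            return arg;
        }

        static void *job_b(void *arg)
        {
            pthread_mutex_lock(&tape);
            pthread_mutex_lock(&printer);  /* blocks if job_a holds the printer */
            pthread_mutex_unlock(&printer);
            pthread_mutex_unlock(&tape);
            return arg;
        }

        int main(void)
        {
            pthread_t a, b;
            pthread_create(&a, NULL, job_a, NULL);
            pthread_create(&b, NULL, job_b, NULL);
            pthread_join(a, NULL);         /* with unlucky timing, never returns */
            pthread_join(b, NULL);
            return 0;
        }

    The standard cure is a single global lock ordering, and an unlock buried in somebody else's package is exactly what makes that discipline impossible to audit.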

    I was not afraid to design solutions with multiple processes sharing resources on the same or even different computers (multi-tasking, multi-processing), with processes interruptible by other processes or the operator at any time. Very good ideas for real-time robots. And some of my ideas and code made it to the GM auto manufacturing floor back in the early days of manufacturing robots.

    A properly designed set of independent processes CAN maintain data integrity yet be able to terminate operations immediately (or at least give notice that the abort is in progress) and then permit other tasks within the application to continue without danger of corrupting the shared data. It's just very difficult to do that in a pure structured-programming environment. Multiple processes handle it quite well, but you have to be able to think in 4 dimensions to design that way.
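
    A minimal sketch of the pattern in modern C: an atomic abort flag, checked between self-contained units of work, so a stop is prompt but never lands mid-update (names invented):

        #include <stdatomic.h>
        #include <stdbool.h>

        static atomic_bool abort_requested = false;

        /* Called from the operator/console task. */
        void request_abort(void) { atomic_store(&abort_requested, true); }

        /* The worker checks the flag only between complete units of work,
           so shared data is consistent at the top of every iteration. */
        void worker(void)
        {
            while (!atomic_load(&abort_requested)) {
                /* ... perform one complete unit of work on the shared data ... */
            }
            /* shared data intact; report that the abort has completed */
        }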

    Unfortunately I never made the leap to RISC assembly-level programming; I'd adopted "C" by then. And as my career progressed I did less and less actual programming. But as I look back, I was happiest when I was deep in the logic of an impossible problem, finding the way to squeeze 4 pounds of Crisco into a 3-pound can. Which, given the memory and speed limitations of computers in the 60s, 70s, and early 80s, was much of what we were required to do.

    namffuak said:
    ... Never had the paper tapes, but I agree on the manuals. And I wouldn't let ANYONE else put the updates in the manuals; if it got screwed up, I wanted no one else in the blame loop (and I kept the replaced pages for at least one additional update cycle, just in case . . .)

    Yeah, changing pages was an opportunity to screw up royally. One false move and you could find yourself terribly confused or missing information at the most inopportune moment. Keeping the old pages was the only way to back out of the mess if you found yourself there.

    The frequent arrival of the update pages was kind of like a mini-Christmas and an opportunity to learn. The person putting the updates into the manual had the advantage of having to take the time to actually look over the changes that were being made. It kept you up-to-date in your head about what the problems were and what had been fixed. It made you the expert in the room.

  • Subtropic Pixel Posts: 2,388

    Those manual insert pages from IBM were called TNLs (Technical Newsletters). In one of my early VS1 shops, we never inserted the pages; there just wasn't time, and so many of them didn't apply to application programmers. I was the only sysprog in a shop of 10 developers, so instead we put the TNLs at the back of the binder. :smirk:

    Those binders were the plastic spine-hanger type that mounted in a huge rack, keeping the books facing spine-up (though they were all loose-bound back then and didn't have an actual spine). I didn't like the spine-hangers because they would not lie flat on your desk while you typed control cards from the book into your job's JCL.

    In the 90s, IBM went to a thing called BookManager and a companion Library Reader, which I believe were written in Java so that they would run on Linux, Unix, or Windows equally well. The books were distributed on sets of CDs and later DVDs, and were discontinued only recently, within the last 5 years or so. I really did like these, because you could carry all of the books on your hard drive and periodically update your library directly from IBM. No more contentious plastic hangers and no more TNLs to ignore; you always had the most current book.

    Now IBM's mainframe books are all available on IBM.com, with free access. Go ahead: Google your most baneful IEC message, the abend code that kept you up all night one week in 1985 with a S0C7, or even the DDL you would need to drop an entire DB2 database. This new manual resource is much better. It requires web access, which you probably have anyway, because almost nobody works "down the hall" from the computer room anymore. :smirk:

  • zigraphix Posts: 2,787

    I got my start in 6th grade, when my dad worked for DEC (and it was still called that). We'd get to go in with him on weekends and play in the customer demo center. Colossal Cave, Eliza, Hamurabi, and a handful of other games kept us entertained for hours... All completely text-based, of course. Later, we had a VT52 at home and a 300 baud modem. Sweet!

    In my first full-time job, I mounted tapes and disks (RM05 and RP06), managed print queues, and occasionally ran a card reader.

    I sometimes still have dreams in which I go back to the machine room for a visit and get offered a job there again, and I always take it. It's funny, because I didn't work there all that long, and I was frustrated most of the time I was there by the lack of advancement opportunities. Evidently there's something I miss about it, though. Probably third shift work. I like the quiet. ;)

  • namffuak Posts: 4,185


    LeatherGryphon said:
    My expertise was assembly programming. I loved assemblers, and the more complex the instruction set the better. ... The atomic list manipulation instructions guaranteed that you performed several small but related operations with assured uninterruptibility.


    For sheer availability and richness, you'd have liked the Raytheon. Mind you, the CPU was a 16-bit machine with 16 direct memory operations and the same 16 with indirect memory access, for a whopping total of 32 instructions. But the user-facing assembler had in excess of 550 op codes, including matrix and search ops. Little things like 'search backward for not equal' or 'find first empty' or 'find last empty' or 'count empty'. The system was designed as a direct replacement for IBM's leased-line bisync 3270 cluster controllers, and most of the airlines used them.

    Remember, back in the day, seating assignments were done at check-in - so picture a 5 (seats) by 45 (rows) matrix and go for 'find first empty' . . .

    Sucker was slow as molasses, but reliable (and faster than multi-drop 9600 bps lines).
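
    For comparison, 'find first empty' over that seat matrix sketched in modern C is the double loop below; on the Raytheon the whole thing was one assembler op:

        enum { ROWS = 45, SEATS = 5 };

        /* Return 1 and the first empty seat (0 = empty), or 0 if the flight is full. */
        int find_first_empty(const char plane[ROWS][SEATS], int *row, int *seat)
        {
            for (int r = 0; r < ROWS; r++)
                for (int s = 0; s < SEATS; s++)
                    if (plane[r][s] == 0) { *row = r; *seat = s; return 1; }
            return 0;
        }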


    LeatherGryphon said:
    The programming practice in vogue at the time was the much simpler idea of structured loops where you grabbed a resource and locked it ... Ever wonder why it sometimes takes so long to abort some operations?

    The Burroughs medium systems had a linear search operator and a linked-list search/delink operator. And it was trivial in COBOL to get to the linear search: the 'search all' construct, which on other systems required the table to be in ascending sequence and generated a binary search algorithm, gave 4 instructions of set-up and then the atomic linear search, and the table did NOT have to be in any specific order. I beat the crap out of that construct . . .


    LeatherGryphon said:
    Unfortunately I never made the leap to RISC assembly-level programming; I'd adopted "C" by then. ...

    I finally made the jump to 'C' (more of a stumble in my case); by then I was coherent, if not fluent, in so bloody many languages, some esoteric and some really mundane. Roughly in the order I picked them up: Fortran IV, IBMAP (7094 assembler), MAD (Michigan Algorithmic Decoder), COMPASS (CDC-6500 assembler), PL/I, BAL (IBM's 360 assembler), COBOL, BPL (Burroughs Programming Language, an algol-like assembler), MACROL (the Raytheon assembler), GMAP (Honeywell assembler), NPS micro-ops (comm protocol coding for the Honeywell network processor), B (Thompson's precursor to 'C'), GCOS-6 shell script, PDL (a panel description language for the DPS-6), IQF (Interactive Query Facility, a cross between BASIC and a 4GL on the DPS-6), and ISPF/PDF (IBM's mainframe panel and menuing system); and then Korn shell, perl, and 'C'. I was quite good in PL/I and COBOL and managed to get the job done in the others, but by the time I got to 'C' I wasn't really interested and mostly just built some simple fill-ins for the shell scripts.

    And the list starting at COBOL was all at the one employer and a good bit of the reason I was there for 34 years. There was always something new to learn and I never had the time to get bored.

  • LeatherGryphon Posts: 11,666
    edited April 2014

    One of the most interesting machines I worked on was one from Siemens. It was what was classified as a "mini-computer" back then (1981). What made it interesting to me was that it had 16 identical banks of 16 registers that could be swapped instantly. It was a machine specifically designed for real-time multi-tasking, and it was used in a robot control system for positioning the head of a laser cutting machine.
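
    A rough sketch of why that mattered, in modern C and purely illustrative: with banks, a task switch updates one index; without them, you copy every register out and back:

        #include <stdint.h>

        enum { BANKS = 16, REGS = 16 };

        static uint16_t bank[BANKS][REGS];  /* 16 identical banks of 16 registers */
        static int current_bank = 0;

        /* With register banks, a task switch is one store... */
        void switch_task_banked(int task) { current_bank = task; }

        /* ...without them, every switch saves and restores all 16 registers. */
        void switch_task_copy(uint16_t cpu_regs[REGS], int from, int to)
        {
            for (int r = 0; r < REGS; r++) bank[from][r] = cpu_regs[r];  /* save    */
            for (int r = 0; r < REGS; r++) cpu_regs[r]  = bank[to][r];   /* restore */
        }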

    I'd taken over from the previous programmer at the company that made the cutting machine. Every machine they made was custom built and nobody ever bothered to keep records of the schematics and program code. Oy! Stupid, stupid, stupid, lazy, lazy, lazy!!! The field service technicians told horror stories when they got back from a fix-it job.

    A few months after I took over, one of the technicians was charged with stealing spare parts for a DEC computer and building one up in his living room! (Ah, so THAT'S where all those parts went!!! :-s )

    But I digress. The Siemens computer was a German machine. The manuals were in German, the instruction set for the assembler was in German, and the original programs were written by Germans, so the comments (sparse as they were) were in German. :-( The only resources I had for reading all of that were 10 weeks of first-level German in college and a German-English technical dictionary. Moreover, I had no documentation on how the servo motors for the laser cutting heads were controlled, so I had to disassemble the running program code to figure all that out. And the program code for the operator interface and the functional logic needed to be understood too.

    The task given to me was to transfer this program code from the Siemens mini-computer to a Motorola 68000 micro-computer on a new laser system. The two computers had very different hardware philosophies: the Siemens was a specialist machine, the Motorola a generalist machine. I ended up actually simulating the operation of the Siemens computer on the Motorola as a first attempt at migrating the programs to the new Motorola-based machines. It sort of worked, but I finally convinced management that we were trying to squeeze an airplane into a truck with wings and hoping it would fly. By then I'd learned enough about how the original program and laser systems worked that we were able to make a good start on rewriting the program code, in English, before I too quit. Somehow the company has stayed in business all these years though.

    But as an example of some of their problems: during my studies of the program code for the laser controllers, I'd noticed that the safety mechanisms on some of the machines already in the field could fail to engage under certain circumstances, and I wrote a memo to that effect. Then one morning I was driving to work and heard on the radio that some large metal-cutting laser system, used to trim huge propellers in a shipyard, had gone wild and endangered people and property. I got to work and asked my supervisor if that was one of our company's machines. He said, "You don't want to know!" I left it at that.
