ibmi-brunch-learn

Moving tables from I to Linux


  • Moving tables from I to Linux

    We are being asked to move tables from our Power system to a DB2 on Linux database (long story). Either way, we are looking at how to communicate between the two, as our Power system will still have to populate those tables.

    Right now the only thing I can come up with is RDB or running a JDBC connection from our RPG programs. I would like to be able to have a DDM connection between the two, but as I understand it that will never be possible.

    Just wanted to see if that was all there was to say about it or if I missed something.

  • #2
    Assuming this is a one-time transfer, you can use CPYTOIMPF to create a CSV file containing your database data. Then import that on the Linux side. (I don't know how to import on DB2 LUW, sorry, but I'm sure there must be a way, as all databases have a way to import from CSV.)
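    A minimal sketch of that route, with made-up names throughout (MYLIB/WEBTBL on the IBM i, /tmp/webtbl.csv as the transfer file, and a matching WEBTBL table on the LUW side):

    ```sql
    -- IBM i side: dump the table to a CSV stream file.
    -- CPYTOIMPF is a CL command; QSYS2.QCMDEXC lets you run it from SQL.
    CALL QSYS2.QCMDEXC(
      'CPYTOIMPF FROMFILE(MYLIB/WEBTBL) TOSTMF(''/tmp/webtbl.csv'') ' CONCAT
      'STMFCODPAG(*PCASCII) RCDDLM(*CRLF) DTAFMT(*DLM)');

    -- DB2 LUW side: pull the CSV in, e.g. from the db2 command line processor:
    --   db2 "IMPORT FROM /tmp/webtbl.csv OF DEL INSERT INTO WEBTBL"
    ```

    Getting the file from the IFS to the Linux box (FTP, scp, NFS, etc.) is a separate step.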



    • #3
      Should have specified: this will be a recurring event. These are tables we populate for our web team, either nightly or on some other kind of schedule.



      • #4
        Lakeview had a tool to do this; not sure if Vision kept it going. It was called Replicate1. You could also roll your own by reading the journals, building an SQL request from the content, and running that SQL request on the target. Not an easy task, but possible, especially if you are only going to do it for a few files.

        Chris...



        • #5
          Looks interesting. It has been renamed to Double-Take Share, but it looks like it would work well.



          • #6
            If you're only going to do this once each day, and are going to either replace the Linux DB2 table each time, or add to it, then using something like CPYTOIMPF will probably run the fastest.

            If you need more complex logic, like the ability to see if a row already exists so you can update it, or need it to happen when the updates to the IBM i table happen, etc., then you're likely to have to write something more sophisticated. You can do this either using JDBC drivers (which work on all platforms), or by setting up the DB2 as a remote database and accessing it through SQL. I have not done this, but it is my understanding that since they are both DB2, it's possible. Doing row-by-row logic will run slower than a batch import/export like CPYTOIMPF, though.
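            For the remote-database route, a rough, untested sketch of what the setup might look like (LNXDB2, linuxhost, the credentials, and the table are all invented names):

            ```sql
            -- One-time CL setup on the IBM i, registering the Linux DB2
            -- in the relational database directory:
            --   ADDRDBDIRE RDB(LNXDB2) RMTLOCNAME('linuxhost' *IP) PORT(50000)

            -- Then an SQL job (or embedded SQL in RPG) can connect over DRDA
            -- and run row-level statements against the Linux tables:
            CONNECT TO LNXDB2 USER webuser USING 'secret';

            UPDATE webtbl
               SET qty = qty + 1
             WHERE id = 42;           -- executes on the Linux DB2

            CONNECT RESET;
            ```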

            Best of luck.



            • #7
              Originally posted by jj_dahlheimer:
              Would like to be able to have a DDM connection between the two but as I understand it that will never be possible.
              Interesting thought, though I've never tried it nor even seriously looked into it. I'd initially expect DDM to work, albeit with some probable restrictions and needing some preparation. DDM is part of the DRDA protocol and I think all DB2 platforms support it.
              Tom

              There are only two hard things in Computer Science: cache invalidation, naming things and off-by-one errors.

              Why is it that all of the instruments seeking intelligent life in the universe are pointed away from Earth?



              • #8
                I am getting mixed info on DDM, but I think that is because I don't fully understand DDM. On one hand I am reading that all DB2 databases are based on DDM, but also that you cannot use a DDM file from IBM i to anything else, because nothing else understands record-level file I/O; they have no need to, which makes sense.



                • #9
                  Would be nice if DB2 supported the dump option available in MySQL (mysqldump). Then you could dump the SQL to a file, transport the file to the remote Linux system, and run the rebuild.

                  mysqldump also has the ability to add a DROP TABLE IF EXISTS statement as part of the output.
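                  For comparison, the kind of script mysqldump produces (illustrative only; the table and rows are invented):

                  ```sql
                  DROP TABLE IF EXISTS webtbl;
                  CREATE TABLE webtbl (
                    id   INT         NOT NULL,
                    name VARCHAR(50)
                  );
                  INSERT INTO webtbl VALUES (1, 'first row'), (2, 'second row');
                  ```

                  Because the script drops and recreates the table, anything querying it mid-rebuild would see it disappear, which matters for an always-on website.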

                  Chris..



                  • #10
                    I think that would be similar to the CPYTOIMPF that Scott suggested, which we may end up doing, depending on what we are all doing with these tables. I have never personally worked with them; I was just asked to look into connectivity options and come up with a couple of different possibilities. Though I think we will have to be careful about doing any kind of drop or delete, as our website is running all the time and is fairly busy.



                    • #11
                      An alternative to the CSV idea is to send a script that has the inserts for the data. Paul Tuohy wrote about this in Four Hundred Guru.



                      • #12
                        The link in the previous post didn't come out right. Let's try again.

                        http://www.itjungle.com/fhg/fhg031516-story03.html
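                        One way to generate such a script on the IBM i side (a sketch, not necessarily the article's technique; the table and column names are invented):

                        ```sql
                        -- Emit one INSERT statement per row; redirect the result
                        -- set to a stream file and run it on the Linux DB2.
                        -- REPLACE doubles any embedded single quotes.
                        SELECT 'INSERT INTO webtbl VALUES ('
                               CONCAT CHAR(id) CONCAT ', '''
                               CONCAT REPLACE(name, '''', '''''')
                               CONCAT ''');'
                          FROM mylib.webtbl;
                        ```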



                        • #13
                          As a different approach, would it be possible to use NFS so a file system can be mounted on the Unix host to get the data directly?
                          There would be a benefit in that the data would be real time, rather than an old copy.



                          • #14
                            Originally posted by jj_dahlheimer:
                            I am getting mixed info on DDM, but I think that is because I don't fully understand DDM. On one hand I am reading that all DB2 databases are based on DDM, but also that you cannot use a DDM file from IBM i to anything else, because nothing else understands record-level file I/O; they have no need to, which makes sense.
                            Yes, using a "DDM file", e.g., via an RPG F-spec, isn't quite equivalent to "DDM". I figure that it takes some pretty solid DRDA background before it all makes sense.

                            I'd never seriously looked into it in depth before. The relationship between DRDA and DDM is strong enough that the two are nearly synonymous in large part. Just for anyone who's interested, I'll post some basics I dug up.

                            The opening of the DRDA specification says:
                            "DRDA is an open, published architecture that enables communication between applications and database systems on disparate platforms, whether those applications and database systems are provided by the same or different vendors and whether the platforms are the same or different hardware/software architectures. DRDA is a combination of other architectures and the environmental rules and process model for using them. The architectures that actually comprise DRDA are Distributed Data Management (DDM) and Formatted Data Object Content Architecture (FD:OCA).

                            "The Distributed Data Management (DDM) architecture provides the overall command and reply structure used by the distributed database. Fewer than 20 commands are required to implement all of the distributed database functions for communication between the application requester (client) and the application server.

                            "The Formatted Data Object Content Architecture (FD:OCA) provides the data definition architectural base for DRDA. Descriptors defined by DRDA provide layout and data type information for all the information routinely exchanged between the application requesters and servers. A descriptor organization is defined by DRDA to allow dynamic definition of user data that flows as part of command or reply data. DRDA also specifies that the descriptors only have to flow once per answer set, regardless of the number of rows actually returned, thus minimizing data traffic on the wire."

                            From that I can see that a "DDM file" object probably implements some lower-level DDM commands to communicate with a remote database server that supports DRDA. We (almost) never see the actual DDM commands that get sent nor the replies that are returned unless maybe we do communication traces.

                            I'm not sure what any of it means, though, for trying to use a "DDM file" defined against a DB2 table on Linux. I have a number of Linux systems (e.g., this laptop), but I don't have DB2 on any of them. It'd be interesting to try just to see what obstacles show up.
                            Tom




                            • #15
                              If we do set it up, I will definitely report back with our findings. Sounds like they are going to get a DB2 LUW setup so we can test a few things. Thanks for the feedback.

