ansible-role-infosvr-import-export

This is an Ansible role that helps automate the process of importing and exporting content and structures in IBM InfoSphere Information Server.

Are you new to Ansible? Check out this simple introduction for help.

Requirements

  • Ansible version 2.8 or higher
  • Access to an IBM Information Server environment with dsadm privileges
  • Inventory group names should match those set up for the IBM.infosvr role
  • (For easier use, install and configure the IBM.infosvr role)
  • Install jmespath on your control machine; this role makes use of the json_query filter, which depends on it (a sketch follows this list)
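For example, a minimal sketch of installing jmespath with Ansible's own pip module (assuming pip is available on the control machine; a plain pip install jmespath works just as well):

- name: ensure jmespath is present on the control machine
  hosts: localhost
  connection: local
  tasks:
    # the json_query filter used by this role depends on jmespath
    - pip:
        name: jmespath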

This role may need to elevate privileges to root for some setup tasks. If your environment does not allow this, make sure all requirements are met manually and set the ibm_infosvr_impexp_priv_escalate variable in defaults/main.yml to False to skip root access attempts.

If you set the escalation to False, you must do the following in your target environment before running the role (a sketch follows the list):

  • Install the python-requests library (for example, use yum)
  • Install the python-lxml library (for example, use yum)
  • Install curl (for example, use yum)
  • Ensure the {IS_HOME}/ASBServer/logs directory is writable by the user running the role, including all .log files in that directory.
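A minimal sketch of meeting these prerequisites out-of-band (run separately by an administrator before the non-escalated role run, assuming yum-based targets and a hypothetical /opt/IBM/InformationServer install path):

- name: prepare targets for a non-escalated run
  hosts: all
  become: yes
  vars:
    is_home: /opt/IBM/InformationServer  # hypothetical path; adjust to your install
  tasks:
    - name: install prerequisite packages
      yum:
        name:
          - python-requests
          - python-lxml
          - curl
        state: present
    - name: make the ASBServer logs writable by the connecting user
      file:
        path: "{{ is_home }}/ASBServer/logs"
        state: directory
        mode: "0777"  # blunt for illustration; scope this down in practice
        recurse: yes

With these in place, set ibm_infosvr_impexp_priv_escalate: False and the role will not attempt to become root.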

(The dsadm privilege escalation is mainly used for managing operational metadata and handling DataStage project variables. If you don’t need these features, you might not need escalation.)

Role Variables

Check defaults/main.yml for detailed documentation on the main variables. For specifics on the expected variables and their structures for each object type, see the "Action (and object) structures" section below.

By default, the role trusts self-signed SSL certificates, provided they were retrieved using the IBM.infosvr role's get_certificate.yml task. This behaviour is controlled by the ibm_infosvr_impexp_verify_selfsigned_ssl variable: if you want to trust only properly signed SSL certificates, set it to False and self-signed certificates will be rejected.
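For example, to reject self-signed certificates (a minimal sketch; the variable is defined in defaults/main.yml):

- hosts: all
  roles:
    - IBM.infosvr-import-export
  vars:
    ibm_infosvr_impexp_verify_selfsigned_ssl: False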

Example Playbook

This role can export and import various asset types in Information Server. You can include it in another playbook with only the necessary variables to limit which assets are processed during import or export (if a variable is empty, that asset type will be skipped).

The following variables define the actions to be performed and will always run in this order, regardless of how they are listed:

  1. gather - Get details about the environment (like version numbers)
  2. export - Extract assets from the environment into files
  3. merge - Combine multiple asset files into one
  4. ingest - Load assets into the environment from files (using ingest instead of import because import is reserved in Ansible)
  5. progress - Move assets through a workflow (this does nothing if workflows are not enabled)
  6. validate - Check that the environment is in the expected state using asset counts

If any variables are missing, those actions will be skipped.

For example:

---

- name: setup Information Server vars
  hosts: all
  tasks:
    - import_role: name=IBM.infosvr tasks_from=setup_vars.yml
    - import_role: name=IBM.infosvr tasks_from=get_certificate.yml

- name: load and validate assets
  hosts: all
  roles:
    - IBM.infosvr-import-export
  vars:
    isx_mappings:
      - { type: "HostSystem", attr: "name", from: "MY_HOST", to "YOUR_HOST" }
    gather: True
    ingest:
      datastage:
        - from: /some/directory/file1.isx
          into_project: dstage1
          with_options:
            overwrite: True
      common:
        - from: file2.isx
          with_options:
            transformed_by: "{{ isx_mappings }}"
            overwrite: True
    validate:
      that:
        - number_of: dsjob
          meeting_all_conditions:
            - { property: "transformation_project.name", operator: "=", value: "dstage1" }
          is: 5
        - number_of: database_table
          meeting_all_conditions:
            - { property: "database_schema.database.host.name", operator: "=", value: "YOUR_HOST" }
          is: 10

This playbook starts by gathering environment details (such as version numbers) from the target environment.

It then imports common metadata from file2.isx (which should be in a files/ sub-directory relative to your playbook), renaming any hostnames from MY_HOST to YOUR_HOST, and overwriting any existing assets with the same names. Next, it imports DataStage assets from /some/directory/file1.isx into the dstage1 project, again overwriting existing assets.

Remember that the order in which the variables are defined doesn't matter; the role manages the export and import sequence to preserve dependencies. However, the order of multiple objects defined within a single type can matter.

Finally, the playbook will check that the load resulted in the expected assets: 5 DataStage jobs in the dstage1 project and 10 database tables on the YOUR_HOST server.

(Because neither progress nor export actions were specified, they will not run.)
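Had an export been wanted, it would follow the same action-variable pattern. The following is a hypothetical sketch only: the exact option names under each asset type vary, so check the per-type documentation before relying on them:

- name: export assets
  hosts: all
  roles:
    - IBM.infosvr-import-export
  vars:
    export:
      datastage:
        - into: /some/directory/file1.isx  # 'into' and 'from_project' are
          from_project: dstage1            # illustrative names, not confirmed
      common:
        - into: file2.isx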

Action (and object) structures

This section describes all actions and object types the role currently supports, along with their expected structures.

  1. gather - gathering environment details
  2. export / merge / ingest metadata asset types (the order listed here also dictates how they will be processed)
    1. customattrs - custom attribute definitions
    2. common - common metadata (should be avoided if possible by using specific types)
    3. logicalmodel - logical model metadata
    4. physicalmodel - physical model metadata
    5. mdm - master data management model metadata
    6. database - database metadata
    7. datafile - data file metadata
    8. dataclass - data class metadata
    9. datastage - DataStage assets
    10. ds_vars - DataStage project variables
    11. infoanalyzer - Information Analyzer assets
    12. openigc - OpenIGC bundles and assets
    13. extendedsource - extended data sources
    14. extensionmap - extension mapping documents
    15. glossary - glossary assets
    16. relationships - metadata relationships
    17. omd - operational metadata
  3. progress - workflow progression
  4. validate - validation framework

For the export, merge, and ingest processes, mappings can be used to modify metadata between environments (like renaming or changing formats), and most asset types can have limits set through conditions.
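For example, a condition limiting an export to assets from a single host might look like the following sketch. The { property, operator, value } triplet mirrors the validate examples above, but the limited_to key is a hypothetical name; consult the per-type documentation for the real option:

export:
  database:
    - into: db_assets.isx
      limited_to:  # hypothetical option name; varies by asset type
        - { property: "host.name", operator: "=", value: "MY_HOST" }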

These variable structures can be written in any form Ansible supports, so use whichever style you prefer; the following are all equivalent:

var_name: [ { a: "", b: "", c: "" }, { d: "", e: "", f: "" } ]

var_name:
  - { a: "", b: "", c: "" }
  - { d: "", e: "", f: "" }

var_name:
  - a: ""
    b: ""
    c: ""
  - d: ""
    e: ""
    f: ""

License

Apache 2.0

Author Information

Christopher Grote
