ansible-role-infosvr-import-export

Ansible role for automating the import and export of content and structures within IBM InfoSphere Information Server.

New to Ansible? This simple introduction might help.

Requirements

  • Ansible v2.8+
  • Network access to an IBM Information Server environment, with the ability to escalate (become) to the dsadm user
  • Inventory group names set up the same as for the IBM.infosvr role
  • (And for ease of use, the IBM.infosvr role installed and configured)
  • jmespath installed on your control machine (this role makes use of the json_query filter, which depends on this library); see the sketch after this list for one way to install it
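
The jmespath library only needs to be present on the machine where Ansible itself runs. As a minimal sketch, assuming pip is available on the control machine, it could be installed from within a playbook like this:

- name: ensure jmespath is available on the control machine
  hosts: localhost
  connection: local
  tasks:
    - name: install the jmespath library used by the json_query filter
      pip:
        name: jmespath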

The role optionally uses privilege escalation to root to automate a small number of setup tasks. If your environment does not allow this privilege escalation, ensure these pre-requisites are already fulfilled manually in your environment and set the defaults/main.yml variable ibm_infosvr_impexp_priv_escalate to False (this will skip any attempt at privilege escalation to root).

If you set the escalation variable to False, ensure the following are done in your target environment before running the role (for example via a one-time play like the sketch after this list):

  • Installation of the python-requests library (e.g. via yum)
  • Installation of the python-lxml library (e.g. via yum)
  • Installation of curl (e.g. via yum)
  • The {IS_HOME}/ASBServer/logs directory of the domain tier must be writable by the user running the role (as well as each of the .log files in that directory)
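
For example, an administrator could satisfy the package pre-requisites once, ahead of time, with a small play like the following sketch (assuming yum-based targets):

- name: satisfy impexp pre-requisites without the role escalating
  hosts: all
  become: yes
  tasks:
    - name: install the libraries and tools the role relies on
      yum:
        name:
          - python-requests
          - python-lxml
          - curl
        state: present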

(The privilege escalation to dsadm is used primarily for the operational metadata handling and the extraction and loading of DataStage project variables; if you do not need to use these, you may not need any privilege escalation.)
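
Disabling the escalation itself is then just a matter of overriding the variable when applying the role, for example:

- name: load assets without privilege escalation
  hosts: all
  roles:
    - IBM.infosvr-import-export
  vars:
    ibm_infosvr_impexp_priv_escalate: False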

Role Variables

See defaults/main.yml for inline documentation, and the example below for the main variables needed. For any clarification on the expected action variables and sub-structures for the various object types, refer to the documentation below.

By default, the role will do SSL verification of self-signed certificates if you have retrieved them using IBM.infosvr's get_certificate.yml task (see the example playbook below). This is controlled by the role's ibm_infosvr_impexp_verify_selfsigned_ssl variable: if you want to verify only against properly signed and trusted SSL certificates, set this variable to False and any self-signed domain tier certificate will no longer be trusted.
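
For example, to trust only properly signed certificates:

ibm_infosvr_impexp_verify_selfsigned_ssl: False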

Example Playbook

The role can both export and import a number of different asset types in Information Server. It can be imported into another playbook, providing only the variables of interest, to restrict which assets are included in an import or export (empty variables mean the role will skip processing those asset types).

The first level of variables provided to the role define the broad actions to take, and will always run in this order regardless of the order in which they're specified:

  1. gather - retrieve details about the environment (e.g. version numbers)
  2. export - extract assets from an environment into file(s)
  3. merge - merge multiple asset files into a single file
  4. ingest - load assets into an environment from file(s) (import is a reserved variable in Ansible, hence ingest...)
  5. progress - move assets through a workflow (will do nothing if workflow is not enabled)
  6. validate - validate an environment is in an expected state using objective asset counts

Any missing variables will simply skip that set of actions.

For example:

---

- name: setup Information Server vars
  hosts: all
  tasks:
    - import_role: name=IBM.infosvr tasks_from=setup_vars.yml
    - import_role: name=IBM.infosvr tasks_from=get_certificate.yml

- name: load and validate assets
  hosts: all
  roles:
    - IBM.infosvr-import-export
  vars:
    isx_mappings:
      - { type: "HostSystem", attr: "name", from: "MY_HOST", to: "YOUR_HOST" }
    gather: True
    ingest:
      datastage:
        - from: /some/directory/file1.isx
          into_project: dstage1
          with_options:
            overwrite: True
      common:
        - from: file2.isx
          with_options:
            transformed_by: "{{ isx_mappings }}"
            overwrite: True
    validate:
      that:
        - number_of: dsjob
          meeting_all_conditions:
            - { property: "transformation_project.name", operator: "=", value: "dstage1" }
          is: 5
        - number_of: database_table
          meeting_all_conditions:
            - { property: "database_schema.database.host.name", operator: "=", value: "YOUR_HOST" }
          is: 10

... will start by gathering details about the environment the playbook is running against.

It will then import the common metadata from the file file2.isx (expected in a files/ sub-directory relative to your playbook), renaming any hostnames from MY_HOST to YOUR_HOST and overwriting any existing assets with the same identities. Next it will import the DataStage assets from /some/directory/file1.isx into the dstage1 project, again overwriting any existing assets with the same identities.

Note that the order in which the variables are defined does not matter: the role will take care of exporting and importing objects in the appropriate order to ensure dependencies between objects are handled (i.e. that common and business metadata are loaded before relationships, etc). However, the order of multiple objects defined within a given type may matter, depending on your own dependencies.

Finally, the playbook will validate the load has resulted in the expected assets in the target environment: 5 DataStage jobs in the dstage1 project and 10 database tables in some combination of schemas and databases on the YOUR_HOST server.

(Since neither progress nor export actions are specified, they will not be run.)
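
As an illustration of an action not used above, an export could be requested in the same way. The exact sub-structure for each asset type is documented in the sections below, so treat the keys in this sketch (into and from_project, which simply mirror the ingest example's from and into_project) as assumptions to be checked against that documentation:

export:
  datastage:
    - into: /some/directory/file1.isx
      from_project: dstage1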

Action (and object) structures

The following describes all of the actions and object types currently covered by this role, and their expected structures.

  1. gather - environment detail gathering
  2. export / merge / ingest metadata asset types (as with the actions above, the ordering below defines the order in which these object types will be extracted and loaded -- irrespective of the order in which they appear within an action)
    1. customattrs - custom attribute definitions
    2. common - common metadata (should be considered low-level, and where possible avoided by using one of the type-specific options)
    3. logicalmodel - logical model metadata
    4. physicalmodel - physical model metadata
    5. mdm - master data management model metadata
    6. database - database metadata
    7. datafile - data file metadata
    8. dataclass - data class metadata
    9. datastage - DataStage assets
    10. ds_vars - DataStage project variables
    11. infoanalyzer - Information Analyzer assets
    12. openigc - OpenIGC bundles and assets
    13. extendedsource - extended data sources
    14. extensionmap - extension mapping documents
    15. glossary - glossary assets
    16. relationships - metadata relationships
    17. omd - operational metadata
  3. progress - progressing the workflow
  4. validate - validation framework

For the export, merge and ingest actions, mappings can be applied to transform metadata between environments (e.g. renaming, changing containment, etc), and most asset types can also be limited through the use of conditions.

Note that you can generally write these variable structures using any form supported by Ansible; e.g. these are all equivalent and simply a matter of personal preference:

var_name: [ { a: "", b: "", c: "" }, { d: "", e: "", f: "" } ]

var_name:
  - { a: "", b: "", c: "" }
  - { d: "", e: "", f: "" }

var_name:
  - a: ""
    b: ""
    c: ""
  - d: ""
    e: ""
    f: ""

License

Apache 2.0

Author Information

Christopher Grote
