Installing CTERA Portal Instances

Use the following workflow to install CTERA Portal.

  1. Create a portal instance, as described in Creating a Portal Instance.
  2. For the primary database server and the secondary (replication) server, create the archive pool, as described in Logging in to the Server to Create the Storage.
  3. Configure network settings, as described in Configuring Network Settings.
  4. Optionally, configure a default gateway.
  5. For customers without internet access, follow the steps in Additional Installation Instructions for Customers Without Internet Access.
  6. For the first server you install, follow the steps in Configuring the Primary Server.
  7. For any additional servers besides the primary server, install the server as described below and configure it as an additional server, as described in Installing and Configuring Additional CTERA Portal Servers.
  8. Make sure that you replicate the database, as described in Configuring the CTERA Portal Database for Backup.
  9. Back up the server, as described in Backing Up the CTERA Portal Servers and Storage.
    Note

    You can use block-storage-level snapshots for backup, but such snapshots are periodic in nature, configured to run every few hours. Therefore, using snapshots, you cannot recover the metadata to an arbitrary point in time, and can lose a significant amount of data on failure. Also, many storage systems do not support block-level snapshots and replication, or do not do so efficiently.

Creating a Portal Instance

Contact CTERA Support, and request the latest ESXi CTERA Portal OVA file.

Note

The following procedure uses the vSphere Client. You can also use the vSphere Host Client. When using the vSphere Host Client, because the OVA file is larger than 2GB, you must unpack the OVA file, which contains the OVF, VMDK, and MF files. Use the OVF and VMDK files to deploy the CTERA Portal.

Note

For the primary database server and the secondary (replication) server, the portal instance is created with a fixed-size data pool. If you require a larger data pool, which should be approximately 1% of the expected global file system size, you can extend the data pool as described in ESXi Specific Management.
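As a rough worked example of this sizing guidance, the following sketch computes the pool sizes for a hypothetical deployment. The 1% (data pool), 2%/twice-the-data-pool (archive pool), and 200GB minimum figures come from this article; the 100 TB global file system size is illustrative only.

```shell
# Hypothetical sizing sketch -- ratios are from this article,
# the 100 TB global file system figure is an example only.
GLOBAL_FS_GB=102400                       # 100 TB expected global file system
DATA_POOL_GB=$(( GLOBAL_FS_GB / 100 ))    # ~1% of the global file system
ARCHIVE_POOL_GB=$(( DATA_POOL_GB * 2 ))   # twice the data pool (~2%)
if [ "$ARCHIVE_POOL_GB" -lt 200 ]; then   # enforce the 200 GB archive minimum
    ARCHIVE_POOL_GB=200
fi
echo "data pool: ${DATA_POOL_GB} GB, archive pool: ${ARCHIVE_POOL_GB} GB"
```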

To create a portal instance:

Note

The following procedure is based on vSphere Client 7.0.3. The order of actions might be different in different versions.

  1. In the vSphere Client console, click File > Deploy OVF Template.

  2. The Deploy OVF Template wizard is displayed.

  3. Browse to the CTERA Portal OVA file and select it.

  4. Click NEXT.

  5. Continue through the wizard specifying the following information, as required for your configuration:

    • A name to identify the CTERA Portal in vCenter.
    • The location for the CTERA Portal: either a datacenter or a folder in a datacenter.
    • The compute resource to run the CTERA Portal.
  6. Click NEXT to review the configuration details.

    Note

    If a warning is displayed, click Ignore to proceed.

  7. Click NEXT.

  8. Select the virtual disk format for the CTERA Portal software and the storage to use for this software. For Select virtual disk format, select either Thick Provision Lazy Zeroed or Thick Provision Eager Zeroed, according to your preference. Refer to the VMware documentation for a full explanation of the disk provisioning formats.
    • Thick Provision Lazy Zeroed – Creates a virtual disk in the default thick format. Space required for the virtual disk is allocated when the virtual disk is created. Data remaining on the physical device is not erased during creation, but is zeroed out on demand on first write from the virtual machine. This default flat virtual disk format does not eliminate the possibility of recovering deleted files or restoring old data that might be present on the allocated space.
    • Thick Provision Eager Zeroed – Creates a virtual disk that supports clustering features such as Fault Tolerance. Space required for the virtual disk is allocated at creation time. In contrast to the lazy zeroed format, the data remaining on the physical device is zeroed out when the virtual disk is created. It might take much longer to create disks in this format than to create lazy zeroed disks.

  9. Click NEXT.

  10. Select the Destination Network that the CTERA Portal will use.

  11. When installing an 8.1.x portal server running on CentOS 9, from version 8.1.1417.19, click NEXT to customize the template.
    You can customize the following:

    • The name of the portal.
    • The IP address, netmask, gateway, and DNS server.

    If you do not enter values, the default values for the portal are applied.

  12. Click NEXT to review the configuration before creating the VM and then click FINISH.
    The CTERA Portal is created and powered off.

  13. For the primary database server and the secondary (replication) server, right-click the portal VM in the navigation pane and choose Edit Settings.
    For an 8.1.x portal server running on CentOS 7, the Hard disk 2 and Hard disk 3 entries together make up the data pool.
    For an 8.1.x portal server running on CentOS 9, from version 8.1.1417.19, the Hard disk 2 entry is the data pool.

    1. Click ADD NEW DEVICE > Hard Disk.
      A new hard disk is added to the list of hard disks.
    2. Enter a size for the new hard disk.
      Note

      The archive pool must be at least 200GB, and should be sized at approximately 2% of the expected global file system size, or twice the size of the data pool.

    3. Expand the New Hard disk item and for Location browse to the datastore you want for the archive pool.
    4. Select the disk to use for the archive pool.
    5. Click OK.
  14. Power on the CTERA Portal virtual machine.
    The virtual machine starts up. On the first startup, a script runs to create a data pool from the data disk and then load the portal dockers onto this data pool. Loading the dockers can take a few minutes.
    Log in as root, using SSH or through the console.
    The default password is ctera321.
    You are prompted to change the password on your first login.

  15. For the primary database server and the secondary (replication) server, continue with Creating the Storage.

  16. Start CTERA Portal services by running the following command: portal-manage.sh start

    Note

    Do not start the portal until both the sdconv and envoy dockers have been loaded to the data pool. You can check that these dockers have loaded in /var/log/ctera_firstboot.log.
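The pre-start check in the note above can be sketched as a small shell helper. The log path and the sdconv and envoy docker names are from this article; the function name and structure are hypothetical, shown for illustration only.

```shell
# dockers_loaded LOGFILE -- succeeds once both the sdconv and envoy dockers
# appear in the first-boot log. Log path and docker names are from this
# article; the helper itself is an illustrative sketch.
dockers_loaded() {
    log="${1:-/var/log/ctera_firstboot.log}"
    grep -q sdconv "$log" 2>/dev/null && grep -q envoy "$log" 2>/dev/null
}

if dockers_loaded; then
    echo "sdconv and envoy loaded; safe to run: portal-manage.sh start"
else
    echo "sdconv/envoy not yet in the first-boot log; wait before starting"
fi
```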

Logging in to the Server to Create the Storage

You need a data pool on every server. The data pool disk is created automatically when you first start the virtual machine, and dockers are loaded onto it.

You need to create an archive pool on the primary database server, and, when PostgreSQL streaming replication is required, also on the secondary (replication) server. See Using PostgreSQL Streaming Replication for details about PostgreSQL streaming replication.

To create the archive pool:

  1. Log in as root, using SSH or through the console.
  2. Run fdisk -l to identify the disk to use for the archive pool.
  3. Run the following command to create the archive pool: portal-storage-util.sh create_db_archive_pool Device
    where Device is the Device name of the disk to use for the archive pool.
    For example: portal-storage-util.sh create_db_archive_pool sdd
    This command creates both a logical volume and an LVM volume group using the specified device. Therefore, multiple devices can be specified if desired. For example: portal-storage-util.sh create_db_archive_pool sdd sde sdf
    Note

    When using NFS storage, before you can create the database archive pool, you must disable the root squash security setting for the NFS export while setting up the database replication. After disabling root squash, run the following command to create the database archive pool: portal-storage-util.sh create_db_archive_pool -nfs <NFS_IP>:/export/db_archive_dir
    where NFS_IP is the IP address of the NFS server.

    CTERA recommends re-enabling root squash for the NFS export after the database replication is set up and verified to be working.

Troubleshooting the Installation

You can check the progress of the docker loads in /var/log/ctera_firstboot.log to ensure that all the dockers are loaded. The last docker to load is called zookeeper.

If not all the dockers load, run the script /usr/bin/ctera-firstboot.sh
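This check can be scripted. A minimal sketch, assuming only the log path and the fact stated above that zookeeper is the last docker to load; the function name is hypothetical.

```shell
# firstboot_complete LOGFILE -- since zookeeper is the last docker to load,
# its presence in the first-boot log indicates that all dockers loaded.
# Log path and docker name are from this article.
firstboot_complete() {
    grep -q zookeeper "${1:-/var/log/ctera_firstboot.log}" 2>/dev/null
}

if firstboot_complete; then
    echo "all dockers loaded"
else
    echo "first boot incomplete; run /usr/bin/ctera-firstboot.sh"
fi
```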

