Dell EMC Isilon Node Migration

VirtuIT

If you're in the process of upgrading Isilon nodes, or will need to in the future, we're breaking down some steps for you to follow. It's important to note that all environments are different, so this document should act only as a guide to walk you through the process.

Whether you have the in-house expertise for a Dell EMC Isilon Node Migration or require outsourcing to a solution provider, our goal here is to provide a guideline for migrating from Generation 4 and 5 Isilon nodes to Generation 6 nodes.

We'll use the example below of a client environment with 6x X200 (gen4) and 1x X210 (gen5) Isilon nodes, migrating to 4x H500 (gen6) Isilon nodes.

Current Client Environment

  • 6 x X200 (gen4) nodes
  • 1 x X210 (gen5) node

Client is Migrating the Above Environment to

  • 4 x H500 (gen6) nodes

Let’s start with some assumptions and prerequisites prior to migration:

 

Assumptions

  • Current nodes are in a single node pool (most likely a node compatibility pool since it is mixed generations)
  • Isilon is licensed for and using SmartPools
  • Isilon is licensed for and using SmartConnect Pools (front-end, client connectivity)
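
Both licensing assumptions can be confirmed from any node before you begin. The commands below are a minimal sketch against the OneFS 8.x CLI; output and option names can vary by release, so verify against your version's CLI reference.

    # Confirm SmartPools and SmartConnect Advanced licenses are active
    isi license list

    # Confirm how the current nodes are grouped into node pools
    isi storagepool nodepools list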

 

Prerequisites

  • Nodes in the current cluster must be running OneFS 8.1.x to support the new Gen6 nodes.
  • Networking is in place, with ports available and configured for front-end connectivity.
  • The SmartConnect Pool has enough IP addresses to add the new nodes properly.
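
The OneFS version and the SmartConnect pool's IP ranges can both be checked quickly from the CLI. This is a sketch against OneFS 8.x; the pool name groupnet0.subnet0.pool0 is a placeholder for your own front-end pool.

    # Check the OneFS version on every node in the cluster
    isi_for_array -s 'isi version'

    # Review the SmartConnect pool and its IP ranges (pool name is an example)
    isi network pools list
    isi network pools view groupnet0.subnet0.pool0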

Let’s Get Started

Same Rack or New Rack?

If you're moving into a different rack, rack proximity needs to be considered, since you'll need to move the back-end InfiniBand connections from the old 8-port switches to the new 18-port switches while accommodating all nodes at once (old and new).  As far as InfiniBand cables go, the standard passive copper cables run up to 50m in length, and active fiber cables run up to 100m.

**The DR environment should be upgraded first; the production environment should be upgraded after.

Procedure

 

Start by migrating the current environment from the 8-port InfiniBand switches to the new 18-port InfiniBand switches:

  1. Trace and label (if not already done) all cables and document them in a port mapper.
  2. Check the Isilon to InfiniBand switch connectivity (a CLI sketch of this check follows the list).
    • Log into an Isilon node via SSH and check all IP ranges for the InfiniBand switches.
    • Ping all node addresses.
  3. Power down the InfiniBand switch for the A side cabling.
    • Check the OneFS GUI to ensure Isilon sees the redundancy/connectivity errors but still functions as expected.
  4. Remove the InfiniBand cables from the old A side switch.
  5. If staying in the same rack, un-rack the old switch and rack the new 18-port InfiniBand switch.
  6. Plug the InfiniBand cables into the new switch in the same order/port.
  7. Power up the new InfiniBand switch and let it boot (this will take a few minutes).
  8. Once the switch has booted, check the OneFS GUI to make sure the redundancy/connectivity errors clear, then acknowledge/clear events where necessary to clean up your events log.
  9. Repeat step 2 to check the Isilon to InfiniBand switch connectivity.
  10. Repeat steps 1-9 for the B side InfiniBand switch.
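
For step 2, the internal (InfiniBand) interfaces and addresses can be checked from the cluster itself. This is a rough sketch that assumes Gen 4/5 nodes with ib0/ib1 internal interfaces and the common 128.221.252.x int-a range; substitute your own internal IP ranges, since interface names and addressing vary by node type and configuration.

    # Show the internal interface addresses on every node (interface names are assumptions)
    isi_for_array -s 'ifconfig ib0 | grep inet'
    isi_for_array -s 'ifconfig ib1 | grep inet'

    # Ping each internal node address on the int-a network (example range, 7 nodes)
    for i in 1 2 3 4 5 6 7; do ping -c 1 128.221.252.$i; done

    # After each switch swap, confirm the cluster is healthy and the events have cleared
    isi status
    isi event groups list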

Adding New Nodes to Your Isilon Cluster

  1. Rack and stack new nodes.
  2. Cable new nodes to InfiniBand switches (A and B sides).
  3. Cable new nodes to front-end network switches.
  4. Power nodes on.
  5. Add nodes to the existing cluster via the front panel, WebUI or CLI (as you would with any other new node).
    • New nodes will be seen by the cluster and a new “Node Pool” will be created with the Gen6 hardware.
  6. If there are SmartConnect rules in place for the cluster's front-end ports, data rebalancing will begin to take place.
    • The system will begin to re-balance data between the old node pool and the new node pool.  This could take some time to complete depending on the total size of the Isilon (a CLI sketch for monitoring this follows the list).
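
Once the new nodes have joined, the new Gen6 node pool and the balancing work can be confirmed from the CLI. This is a sketch, assuming OneFS 8.x; the job actually performing the redistribution (typically AutoBalance or AutoBalanceLin) will show in the job list.

    # Confirm the new Gen6 node pool was created alongside the old one
    isi storagepool nodepools list

    # Confirm the new nodes joined and the cluster is healthy
    isi status

    # Watch the running restripe/balance jobs and their progress
    isi job jobs list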

Prepare New Nodes for New Data Ingestion and Migrate Data Off Old Nodes

 

  1. Change the default Pool Policy to write all new data to the new Node Pool.
  2. Suspend the old nodes from the SmartConnect Pool (this won't affect any existing connections but will make sure new connections go to the new Node Pool; a CLI sketch of both steps follows this list).
    • This will serve to wean connections off of the old nodes and on to the new ones.
    • The longer the system runs like this, the more connections will age off of the old nodes and on to the new.
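
Both steps can be done from the CLI as well as the WebUI. The commands below are a sketch against OneFS 8.x; the node pool name (h500_nodepool), the network pool name (groupnet0.subnet0.pool0), and the node LNN are placeholders for your environment, and option names should be verified with --help on your release.

    # Point the default file pool policy's data target at the new Gen6 node pool
    isi filepool default-policy modify --data-storage-target h500_nodepool

    # Suspend an old node (by LNN) in the SmartConnect pool so new client
    # connections land on the new nodes; existing connections are not dropped.
    # The argument form here is an assumption; confirm with --help.
    isi network pools sc-suspend-nodes groupnet0.subnet0.pool0 1

Repeat the suspend for each old node's LNN; the old nodes stay in the cluster and keep serving their existing connections until those age off.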

Migrate Data off of the Old Nodes on to the New Nodes

**Requires SmartPools to be licensed and activated

  1. Create a new Pool Policy or update any existing policies to point to the new node pool.
    • You may need to create a tiering policy where you assign the old node pool to one tier (old_tier) and the new node pool to a different tier (new_tier); a CLI sketch follows this list.
  2. Manually run a SmartPools job to force the system to move existing data from the old nodes (old_tier to new_tier).  Depending on the size of the Isilon, this could take some time to complete.
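
A rough CLI sketch of the tiering approach, again assuming OneFS 8.x. The tier names old_tier and new_tier come from the example above; the node pool names (x200_pool, h500_nodepool) are placeholders, so check the real names with isi storagepool nodepools list first.

    # Create the two tiers and assign each node pool to one (pool names are examples)
    isi storagepool tiers create old_tier
    isi storagepool tiers create new_tier
    isi storagepool nodepools modify x200_pool --tier old_tier
    isi storagepool nodepools modify h500_nodepool --tier new_tier

    # Re-point the default policy (and any custom file pool policies) at the new tier
    isi filepool default-policy modify --data-storage-target new_tier

    # Kick off a SmartPools job to move existing data off the old tier, then monitor it
    isi job jobs start SmartPools
    isi job jobs list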

Removal of Old Nodes

  1. Once the number of connections to the old nodes drops to 0, begin to remove their interfaces from the SmartConnect pool (a CLI sketch of these checks follows this list).
  2. When the old node pool is empty, begin to remove the old nodes from the cluster.
    • Prior to removing old nodes, identify the node with the lowest NodeID.
    • Remove nodes one at a time and make sure that the node identified above (the node with the lowest NodeID) is removed last.
  3. Once all old nodes have been removed from the cluster, they can be powered down, un-cabled, and un-racked.
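
The connection drain, interface removal, and node removal can all be tracked or performed from the CLI. This is a sketch, assuming OneFS 8.x; the pool name, interface name, and LNNs are placeholders, and node removal is a smartfail operation, so confirm the exact syntax with --help and let each node finish before starting the next.

    # Watch active client connections per node until the old nodes reach zero
    isi statistics client list

    # Remove an old node's front-end interface from the SmartConnect pool
    # (pool and interface names are examples)
    isi network pools modify groupnet0.subnet0.pool0 --remove-ifaces 1:10gige-1

    # Smartfail one old node at a time; the lowest-NodeID node goes last.
    # The argument form here is an assumption; confirm with --help on your release.
    isi devices node smartfail --node-lnn 2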

**Repeat the above steps for additional sites.