feat(migration): add support to migrate the PV to a new node (#304)

Use case: A node in the Kubernetes cluster is replaced with a new node. The
new node gets a different `kubernetes.io/hostname`. The storage devices
that were attached to the old node are re-attached to the new node.

Fix: Instead of using the default `kubernetes.io/hostname` as the node affinity
label, this commit switches to `openebs.io/nodeid`. The ZFS LocalPV driver
picks the value of this label from the node and uses it to set the affinity on the PV.
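For example, a PV provisioned on a node whose `openebs.io/nodeid` label is set
to `custom-value-1` (a hypothetical value) would carry node affinity roughly
like this sketch (relevant fields only):

```
apiVersion: v1
kind: PersistentVolume
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: openebs.io/nodeid
          operator: In
          values:
          - custom-value-1
```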

Once the old node is removed from the cluster, the K8s scheduler will still
try to schedule the application pods to the old node only, since the PV
affinity still points to it.

Users can now set the value of `openebs.io/nodeid` on the new node to the same
value that was present on the old node. This makes sure the pods/volumes get
scheduled to the new node.


Note: To migrate the PV to another node, we have to move the disks to the other
node, remove the old node from the cluster, and set the same label (same key and
value) on the new node; the k8s scheduler will then schedule the pods to that
node, as sketched below.
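A minimal sketch of that flow, assuming the old node was labelled
`openebs.io/nodeid=custom-value`, the replacement node is `node-new`, and the
pool is named `zfspv-pool` (all names here are hypothetical):

```
$ kubectl delete node node-old        # remove the old node from the cluster
$ zpool import zfspv-pool             # on node-new, after moving the disks over
$ kubectl label node node-new openebs.io/nodeid=custom-value
```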

Other updates:
* add an FAQ doc
* rename the config variable to nodename

Signed-off-by: Pawan <pawan@mayadata.io>
Co-authored-by: Akhil Mohan <akhilerm@gmail.com>

* Update docs/faq.md

Co-authored-by: Akhil Mohan <akhilerm@gmail.com>
Pawan Prakash Sharma authored on 2021-05-01 19:05:01 +05:30 (committed by GitHub)
commit 1b30116e5f, parent da7f4c2320
12 changed files with 104 additions and 34 deletions

@@ -0,0 +1 @@
+adding support to migrate the PV to other nodes

@@ -55,7 +55,7 @@ func main() {
 	cmd.Flags().AddGoFlagSet(flag.CommandLine)
 	cmd.PersistentFlags().StringVar(
-		&config.NodeID, "nodeid", zfs.NodeID, "NodeID to identify the node running this driver",
+		&config.Nodename, "nodename", zfs.NodeID, "Nodename to identify the node running this driver",
 	)
 	cmd.PersistentFlags().StringVar(
@@ -88,11 +88,11 @@ func run(config *config.Config) {
 	klog.Infof("ZFS Driver Version :- %s - commit :- %s", version.Current(), version.GetGitCommit())
 	klog.Infof(
-		"DriverName: %s Plugin: %s EndPoint: %s NodeID: %s",
+		"DriverName: %s Plugin: %s EndPoint: %s Node Name: %s",
 		config.DriverName,
 		config.PluginType,
 		config.Endpoint,
-		config.NodeID,
+		config.Nodename,
 	)
 	err := driver.New(config).Run()

@@ -997,11 +997,11 @@ spec:
         image: openebs/zfs-driver:ci
         imagePullPolicy: IfNotPresent
         args:
-        - "--nodeid=$(OPENEBS_NODE_ID)"
+        - "--nodename=$(OPENEBS_NODE_NAME)"
         - "--endpoint=$(OPENEBS_CSI_ENDPOINT)"
         - "--plugin=$(OPENEBS_NODE_DRIVER)"
         env:
-        - name: OPENEBS_NODE_ID
+        - name: OPENEBS_NODE_NAME
           valueFrom:
             fieldRef:
               fieldPath: spec.nodeName

@@ -37,7 +37,7 @@ spec:
       type: string
     - description: Node where the volume is created
       jsonPath: .spec.ownerNodeID
-      name: Node
+      name: NodeID
       type: string
     - description: Size of the volume
       jsonPath: .spec.capacity

@@ -58,7 +58,7 @@ spec:
       type: string
     - description: Node where the volume is created
       jsonPath: .spec.ownerNodeID
-      name: Node
+      name: NodeID
       type: string
     - description: Size of the volume
       jsonPath: .spec.capacity
@@ -2204,11 +2204,11 @@ spec:
         image: openebs/zfs-driver:ci
         imagePullPolicy: IfNotPresent
         args:
-        - "--nodeid=$(OPENEBS_NODE_ID)"
+        - "--nodename=$(OPENEBS_NODE_NAME)"
         - "--endpoint=$(OPENEBS_CSI_ENDPOINT)"
        - "--plugin=$(OPENEBS_NODE_DRIVER)"
         env:
-        - name: OPENEBS_NODE_ID
+        - name: OPENEBS_NODE_NAME
           valueFrom:
             fieldRef:
               fieldPath: spec.nodeName

@@ -224,3 +224,34 @@ Then driver will find the nearest size in Mi, the size allocated will be ((1G +
 
 PVC size as zero is not a valid capacity. The minimum allocatable size for the ZFS-LocalPV driver is 1Mi, which means that if we are requesting 1 byte of storage space then 1Mi will be allocated for the volume.
+
+### 8. How to migrate PVs to a new node when the old node is not accessible?
+
+The ZFS-LocalPV driver sets affinity on the PV to make the volume stick to a node, so that the pod gets scheduled only to the node where the volume is present. The problem here is that when the node becomes inaccessible for some reason and we move the disks to a new node and import the pool there, the pods will still not be scheduled to the new node, as the k8s scheduler keeps looking for the old node.
+
+From release 1.7.0 of ZFS-LocalPV, the driver can use a user-defined affinity for creating the PV. While deploying the ZFS-LocalPV driver, first label all the nodes using the key `openebs.io/nodeid` with some unique value:
+
+```
+$ kubectl label node node-1 openebs.io/nodeid=custom-value-1
+```
+
+In the above command, we have labelled the node `node-1` using the key `openebs.io/nodeid` and the value `custom-value-1`. You can pick your own value; just make sure that the value is unique for each node. We have to label all the nodes in the cluster with a unique value. For example, `node-2` and `node-3` can be labelled as below:
+
+```
+$ kubectl label node node-2 openebs.io/nodeid=custom-value-2
+$ kubectl label node node-3 openebs.io/nodeid=custom-value-3
+```
+
+Now, the driver will use `openebs.io/nodeid` as the key and the corresponding value to set the affinity on the PV, and the k8s scheduler will consider this affinity label while scheduling the pods.
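+
+To verify the affinity on a provisioned PV (an illustrative check; substitute your actual PV name for the placeholder), inspect its `nodeAffinity` field:
+
+```
+$ kubectl get pv <pv-name> -o jsonpath='{.spec.nodeAffinity}'
+```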
+
+Now, when a node becomes inaccessible, we need to do the below steps:
+
+1. Remove the old node from the cluster, or just remove the above node label from the node which we want to remove.
+2. Add a new node to the cluster.
+3. Move the disks to this new node.
+4. Import the zfs pools on the new node.
+5. Label the new node with the same key and value. For example, if we have removed `node-3` from the cluster and added `node-4` as the new node, we have to label `node-4` and set the value to `custom-value-3` as shown below:
+
+```
+$ kubectl label node node-4 openebs.io/nodeid=custom-value-3
+```
+
+Once the above steps are done, the pod should be able to run on the new node with all the data it had on the old node. One limitation here is that we can only move the PVs to a newly added node; we can not move the PVs to a node which is already in use in the cluster, as there is only one allowed value per node for the custom label key.

@@ -29,7 +29,7 @@ import (
 // +kubebuilder:storageversion
 // +kubebuilder:resource:scope=Namespaced,shortName=zfsvol;zv
 // +kubebuilder:printcolumn:name="ZPool",type=string,JSONPath=`.spec.poolName`,description="ZFS Pool where the volume is created"
-// +kubebuilder:printcolumn:name="Node",type=string,JSONPath=`.spec.ownerNodeID`,description="Node where the volume is created"
+// +kubebuilder:printcolumn:name="NodeID",type=string,JSONPath=`.spec.ownerNodeID`,description="Node where the volume is created"
 // +kubebuilder:printcolumn:name="Size",type=string,JSONPath=`.spec.capacity`,description="Size of the volume"
 // +kubebuilder:printcolumn:name="Status",type=string,JSONPath=`.status.state`,description="Status of the volume"
 // +kubebuilder:printcolumn:name="Filesystem",type=string,JSONPath=`.spec.fsType`,description="filesystem created on the volume"

@@ -136,9 +136,9 @@ func (b *Builder) WithThinProv(thinprov string) *Builder {
 	return b
 }
-// WithOwnerNode sets owner node for the ZFSVolume where the volume should be provisioned
-func (b *Builder) WithOwnerNode(host string) *Builder {
-	b.volume.Object.Spec.OwnerNodeID = host
+// WithOwnerNodeID sets owner nodeid for the ZFSVolume where the volume should be provisioned
+func (b *Builder) WithOwnerNodeID(nodeid string) *Builder {
+	b.volume.Object.Spec.OwnerNodeID = nodeid
 	return b
 }

@@ -37,11 +37,10 @@ type Config struct {
 	// - This will be a unix based socket
 	Endpoint string
-	// NodeID helps in differentiating the nodes on
-	// which node drivers are running. This is useful
-	// in case of topologies and publishing or
-	// unpublishing volumes on nodes
-	NodeID string
+	// Nodename helps in differentiating the nodes on
+	// which node drivers are running. This is used
+	// to set the topologies for the driver
+	Nodename string
 }
 // Default returns a new instance of config

@@ -204,9 +204,9 @@ func (ns *node) NodeGetInfo(
 	req *csi.NodeGetInfoRequest,
 ) (*csi.NodeGetInfoResponse, error) {
-	node, err := k8sapi.GetNode(ns.driver.config.NodeID)
+	node, err := k8sapi.GetNode(ns.driver.config.Nodename)
 	if err != nil {
-		klog.Errorf("failed to get the node %s", ns.driver.config.NodeID)
+		klog.Errorf("failed to get the node %s", ns.driver.config.Nodename)
 		return nil, err
 	}
 	/*
@@ -229,11 +229,13 @@ func (ns *node) NodeGetInfo(
 	// support all the keys that node has
 	topology := node.Labels
-	// add driver's topology key
-	topology[zfs.ZFSTopologyKey] = ns.driver.config.NodeID
+	// add driver's topology key if not labelled already
+	if _, ok := topology[zfs.ZFSTopologyKey]; !ok {
+		topology[zfs.ZFSTopologyKey] = ns.driver.config.Nodename
+	}
 	return &csi.NodeGetInfoResponse{
-		NodeId: ns.driver.config.NodeID,
+		NodeId: ns.driver.config.Nodename,
 		AccessibleTopology: &csi.Topology{
 			Segments: topology,
 		},

@@ -227,7 +227,12 @@ func CreateZFSVolume(ctx context.Context, req *csi.CreateVolumeRequest) (string,
 	// try volume creation sequentially on all nodes
 	for _, node := range prfList {
-		vol, _ := volbuilder.BuildFrom(volObj).WithOwnerNode(node).WithVolumeStatus(zfs.ZFSStatusPending).Build()
+		nodeid, err := zfs.GetNodeID(node)
+		if err != nil {
+			continue
+		}
+		vol, _ := volbuilder.BuildFrom(volObj).WithOwnerNodeID(nodeid).WithVolumeStatus(zfs.ZFSStatusPending).Build()
 		timeout := false
@@ -392,7 +397,12 @@ func (cs *controller) CreateVolume(
 	sendEventOrIgnore(pvcName, volName, strconv.FormatInt(int64(size), 10), "zfs-localpv", analytics.VolumeProvision)
-	topology := map[string]string{zfs.ZFSTopologyKey: selected}
+	nodeid, err := zfs.GetNodeID(selected)
+	if err != nil {
+		return nil, status.Errorf(codes.Internal, "GetNodeID failed : %s", err.Error())
+	}
+	topology := map[string]string{zfs.ZFSTopologyKey: nodeid}
 	cntx := map[string]string{zfs.PoolNameKey: pool}
 	return csipayload.NewCreateVolumeResponseBuilder().

@@ -21,6 +21,7 @@ import (
 	"strconv"
 	"time"
+	k8sapi "github.com/openebs/lib-csi/pkg/client/k8s"
 	apis "github.com/openebs/zfs-localpv/pkg/apis/openebs.io/zfs/v1"
 	"github.com/openebs/zfs-localpv/pkg/builder/bkpbuilder"
 	"github.com/openebs/zfs-localpv/pkg/builder/restorebuilder"
@@ -49,7 +50,7 @@ const (
 	// ZFSNodeKey will be used to insert Label in ZfsVolume CR
 	ZFSNodeKey string = "kubernetes.io/nodename"
 	// ZFSTopologyKey is supported topology key for the zfs driver
-	ZFSTopologyKey string = "openebs.io/nodename"
+	ZFSTopologyKey string = "openebs.io/nodeid"
 	// ZFSStatusPending shows object has not handled yet
 	ZFSStatusPending string = "Pending"
 	// ZFSStatusFailed shows object operation has failed
@@ -70,19 +71,45 @@ var (
 )
 func init() {
+	var err error
 	OpenEBSNamespace = os.Getenv(OpenEBSNamespaceKey)
-	if OpenEBSNamespace == "" && os.Getenv("OPENEBS_NODE_DRIVER") != "" {
-		klog.Fatalf("OPENEBS_NAMESPACE environment variable not set")
-	}
-	NodeID = os.Getenv("OPENEBS_NODE_ID")
-	if NodeID == "" && os.Getenv("OPENEBS_NODE_DRIVER") != "" {
-		klog.Fatalf("NodeID environment variable not set")
+	if os.Getenv("OPENEBS_NODE_DRIVER") != "" {
+		if OpenEBSNamespace == "" {
+			klog.Fatalf("OPENEBS_NAMESPACE environment variable not set for daemonset")
+		}
+		nodename := os.Getenv("OPENEBS_NODE_NAME")
+		if nodename == "" {
+			klog.Fatalf("OPENEBS_NODE_NAME environment variable not set")
+		}
+		if NodeID, err = GetNodeID(nodename); err != nil {
+			klog.Fatalf("GetNodeID failed for node=%s err: %s", nodename, err.Error())
+		}
+		klog.Infof("zfs: node(%s) has node affinity %s=%s", nodename, ZFSTopologyKey, NodeID)
+	} else if os.Getenv("OPENEBS_CONTROLLER_DRIVER") != "" {
+		if OpenEBSNamespace == "" {
+			klog.Fatalf("OPENEBS_NAMESPACE environment variable not set for controller")
+		}
 	}
 	GoogleAnalyticsEnabled = os.Getenv(GoogleAnalyticsKey)
 }
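+// GetNodeID returns the nodeid to be used in the volume topology: the value
+// of the ZFSTopologyKey (openebs.io/nodeid) label if the node is labelled,
+// otherwise the node name itself.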
+func GetNodeID(nodename string) (string, error) {
+	node, err := k8sapi.GetNode(nodename)
+	if err != nil {
+		return "", fmt.Errorf("failed to get the node %s", nodename)
+	}
+	nodeid, ok := node.Labels[ZFSTopologyKey]
+	if !ok {
+		// node is not labelled, use node name as nodeid
+		return nodename, nil
+	}
+	return nodeid, nil
+}
 func checkVolCreation(ctx context.Context, volname string) (bool, error) {
 	timeout := time.After(10 * time.Second)
 	for {
@@ -104,7 +131,7 @@ func checkVolCreation(ctx context.Context, volname string) (bool, error) {
 			return false, fmt.Errorf("zfs: volume creation failed")
 		}
-		klog.Infof("zfs: waiting for volume %s/%s to be created on node %s",
+		klog.Infof("zfs: waiting for volume %s/%s to be created on nodeid %s",
 			vol.Spec.PoolName, volname, vol.Spec.OwnerNodeID)
 		time.Sleep(time.Second)
@@ -135,7 +162,7 @@ func ProvisionVolume(
 	}
 	if err != nil {
-		klog.Infof("zfs: volume %s/%s provisioning failed on node %s err: %s",
+		klog.Infof("zfs: volume %s/%s provisioning failed on nodeid %s err: %s",
 			vol.Spec.PoolName, vol.Name, vol.Spec.OwnerNodeID, err.Error())
 	}