whereabouts/pkg/reconciler/controlloop/pod.go
Miguel Duarte Barroso 59f1052972
IP control loop (#185)
* build: generate ip pool clientSet/informers/listers

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>

* vendor: update vendor stuff

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>

* build: vendor net-attach-def-client types

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>

* config: look for the whereabouts config file in multiple places

The reconciler controller will have access to the whereabouts
configuration via a mount point. As such, we need a way to specify its
path.

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>

* reconcile-loop: requires the IP ranges in normalized format

The IP reconcile loop also requires the IP ranges in a normalized
format; as such, we export the normalization logic into a function,
which will be used in a follow-up commit.

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>
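
Since IPPool objects are named after their range, normalization has to
produce a valid Kubernetes object name. A minimal sketch of what such an
exported helper can look like (the real `NormalizeRange` in
`pkg/storage/kubernetes` may differ in detail):

```
import "strings"

// normalizeRange mangles an IP range into a valid Kubernetes object name,
// e.g. "192.168.2.0/24" -> "192.168.2.0-24"; sketch only.
func normalizeRange(ipRange string) string {
	// ":" (IPv6) and "/" (the CIDR separator) are illegal in object names
	normalized := strings.ReplaceAll(ipRange, ":", "-")
	return strings.ReplaceAll(normalized, "/", "-")
}
```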

* config: allow IPAM config parsing from a NetConfList

Currently whereabouts is only able to parse network configurations in
the strict format [0] - i.e. it **does not accept** a plugin list [1].

The `ip-control-loop` must recover the full plugin configuration, which
may be in the network configuration format.

This commit allows whereabouts to now understand both formats.

Furthermore, the current CNI release - v1.0.Z - removed support for
[0], meaning that only the configuration list format is now supported
[2].

[0] - https://github.com/containernetworking/cni/blob/v0.8.1/SPEC.md#network-configuration
[1] - https://github.com/containernetworking/cni/blob/v0.8.1/SPEC.md#network-configuration-lists
[2] - https://github.com/containernetworking/cni/blob/master/SPEC.md#released-versions

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>
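
For illustration, the two formats differ only in the envelope; the
whereabouts IPAM section is the same in both. The sample values below are
illustrative, not taken from the repository:

```
// The strict format declares a single plugin at the top level, while the
// list format nests plugins under "plugins". Both can now be handed to
// config.LoadIPAMConfiguration (used by the control loop below).
const netConf = `{
	"cniVersion": "0.3.1",
	"name": "macvlan1",
	"type": "macvlan",
	"master": "eth0",
	"ipam": {"type": "whereabouts", "range": "192.168.2.0/24"}
}`

const netConfList = `{
	"cniVersion": "1.0.0",
	"name": "macvlan1",
	"plugins": [{
		"type": "macvlan",
		"master": "eth0",
		"ipam": {"type": "whereabouts", "range": "192.168.2.0/24"}
	}]
}`
```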

* reconcile-loop: add a controller

Listen to pod deletions, and for every deleted pod, assure its IPs
are gone.

The rough algorithm goes like this:
  - for every network-status in the pod's annotations:
    - read associated net-attach-def from the k8s API
    - extract the range from the net-attach-def
    - find the corresponding IP pool
    - look for allocations belonging to the deleted pod
    - delete them using `IPManagement(..., types.Deallocate, ...)`

All the API reads go through the informer cache, which is kept updated
whenever the objects are updated on the API.

The dockerfiles are also updated, to ship this new binary.

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>
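
A hypothetical wiring of the controller (the client parameters, function
name, and the "ip-control-loop" component name are assumptions; only
`NewPodController` and `Start` come from the file below):

```
import (
	v1 "k8s.io/api/core/v1"
	v1coreinformerfactory "k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/record"

	nadclient "github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/clientset/versioned"
	nadinformers "github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions"
	wbclientset "github.com/k8snetworkplumbingwg/whereabouts/pkg/client/clientset/versioned"
	wbinformers "github.com/k8snetworkplumbingwg/whereabouts/pkg/client/informers/externalversions"
	"github.com/k8snetworkplumbingwg/whereabouts/pkg/reconciler/controlloop"
)

// runControlLoop assumes the three clients were built from the same
// rest.Config beforehand.
func runControlLoop(k8sClient kubernetes.Interface, wbClient wbclientset.Interface, nadClient nadclient.Interface, stopChan <-chan struct{}) {
	podFactory := v1coreinformerfactory.NewSharedInformerFactory(k8sClient, 0)
	wbFactory := wbinformers.NewSharedInformerFactory(wbClient, 0)
	nadFactory := nadinformers.NewSharedInformerFactory(nadClient, 0)

	broadcaster := record.NewBroadcaster()
	recorder := broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "ip-control-loop"})

	controller := controlloop.NewPodController(podFactory, wbFactory, nadFactory, broadcaster, recorder)

	// the informer factories must be started for the caches to sync
	podFactory.Start(stopChan)
	wbFactory.Start(stopChan)
	nadFactory.Start(stopChan)

	controller.Start(stopChan) // blocks until stopChan is closed
}
```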

* e2e tests: remove manual cluster reconciliation

This leaves the `ip-control-loop` as the sole reconciliation tool.

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>

* unit tests: assure stale IPAllocation cleanup

This commit adds a unit test checking that pod deletion leads to the
cleanup of a stale IP address.

This commit features the automatic provisioning of the controller informer cache
with the data present on the fake clientset tracker (the "fake" datastore).

This way, users can just create the client with provisioned data, and
that'll trickle down to the informer cache of the pod controller.

Because the `network-attachment-definitions` resource name features
dashes, the heuristic function that guesses - yes, guesses. very
deterministic ... - the name of the resource can't be used [0]. As such,
an alternate `newFakeNetAttachDefClient` was needed, where the correct
resource name can be specified.

[0] - 2fd7267afc/vendor/k8s.io/client-go/testing/fixture.go (L331)

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>
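
A sketch of what that alternate constructor can look like, seeding the fake
tracker under the dashed resource name explicitly. This is an illustration
under assumed details; the real helper lives in the test files:

```
import (
	"k8s.io/apimachinery/pkg/runtime/schema"

	nadv1 "github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/apis/k8s.cni.cncf.io/v1"
	fakenadclient "github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/clientset/versioned/fake"
)

func newFakeNetAttachDefClient(networkAttachments ...nadv1.NetworkAttachmentDefinition) (*fakenadclient.Clientset, error) {
	clientset := fakenadclient.NewSimpleClientset()
	gvr := schema.GroupVersionResource{
		Group:    "k8s.cni.cncf.io",
		Version:  "v1",
		Resource: "network-attachment-definitions", // the dashed name the heuristic cannot guess
	}
	for i := range networkAttachments {
		nad := networkAttachments[i]
		if err := clientset.Tracker().Create(gvr, &nad, nad.GetNamespace()); err != nil {
			return nil, err
		}
	}
	return clientset, nil
}
```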

* unit tests: move helper funcs to other files

The helper files are tagged with the `test` build tag, to prevent them
from being shipped on the production code binary.

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>

* control loop, queueing: use a rate-limiting queue

Using a rate-limiting queue allows us to re-queue failed requests.

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>
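
The resulting per-item lifecycle is the standard client-go work-queue
pattern; the controller's `processNextWorkItem`/`handleResult` below follow
this shape (generic sketch, `process` is a placeholder):

```
import "k8s.io/client-go/util/workqueue"

func processNextItem(queue workqueue.RateLimitingInterface, process func(interface{}) error) bool {
	item, shutdown := queue.Get()
	if shutdown {
		return false
	}
	defer queue.Done(item)

	if err := process(item); err != nil {
		// re-queue with exponential backoff; the rate limiter tracks
		// per-item retries (exposed via NumRequeues)
		queue.AddRateLimited(item)
		return true
	}
	// success: reset the item's retry counter
	queue.Forget(item)
	return true
}
```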

* control loop: add IPAllocation cleanup related events

Adds two new events related to garbage collection of the whereabouts IP
addresses:
  - when an IP address is garbage collected
  - when a cleanup operation fails and is not re-queued

The former event looks like:
```
116s        Normal    IPAddressGarbageCollected   pod/macvlan1-worker1 \
            successful cleanup of IP address [192.168.2.1] from network \
            whereabouts-conf
```

The latter event looks like:
```
10s         Warning    IPAddressGarbageCollectionFailed    failed to garbage \
            collect addresses for pod default/macvlan1-worker1
```

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>

* e2e tests: check out statefulset scenarios

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>

* e2e tests: test different scale up/down order and instance deltas

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>

* ci: test e2e bash scripts last

These ugly tests do not clean up after themselves; this way, the
golang-based tests (which **do** clean up after themselves) will not be
impacted by their leftovers.

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>

* ip control loop, unit tests: test negative scenarios

Check the event emitted when a request is dropped from the queue, and
assure that reconciling an allocation is impossible without access to
the attachment configuration data.

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>

* e2e tests: test fix for issue #182

Issue [0] reports an error when a pod associated with a `StatefulSet`
whose IPPool is already full is deleted. According to it, the
replacement pod - scheduled by the `StatefulSet` - cannot run because
the IPPool is already full, and the old pod's IP cannot be garbage
collected because we match allocations by pod reference; as a result,
the new pod is stuck in the `creating` phase.

[0] - https://github.com/k8snetworkplumbingwg/whereabouts/issues/182

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>

* ip-control-loop: strip pod before queueing it

The ip reconcile loop only requires the pod metadata and its network
status annotations to garbage collect the stale IP addresses.

As such, we remove the status and spec parameters from the pod before
queueing it.

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>

* reconcile-loop: focus on networks w/ whereabouts IPAM type

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>
2022-04-13 10:49:18 -04:00


package controlloop

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"os"
	"strconv"
	"strings"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	v1coreinformerfactory "k8s.io/client-go/informers"
	v1corelisters "k8s.io/client-go/listers/core/v1"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/record"
	"k8s.io/client-go/util/workqueue"

	nadv1 "github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/apis/k8s.cni.cncf.io/v1"
	nadinformers "github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions"
	nadlister "github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/listers/k8s.cni.cncf.io/v1"

	"github.com/k8snetworkplumbingwg/whereabouts/pkg/allocate"
	whereaboutsv1alpha1 "github.com/k8snetworkplumbingwg/whereabouts/pkg/api/whereabouts.cni.cncf.io/v1alpha1"
	wbinformers "github.com/k8snetworkplumbingwg/whereabouts/pkg/client/informers/externalversions"
	wblister "github.com/k8snetworkplumbingwg/whereabouts/pkg/client/listers/whereabouts.cni.cncf.io/v1alpha1"
	"github.com/k8snetworkplumbingwg/whereabouts/pkg/config"
	"github.com/k8snetworkplumbingwg/whereabouts/pkg/logging"
	wbclient "github.com/k8snetworkplumbingwg/whereabouts/pkg/storage/kubernetes"
	"github.com/k8snetworkplumbingwg/whereabouts/pkg/types"
)

const (
	defaultMountPath      = "/host"
	ipReconcilerQueueName = "pod-updates"
	syncPeriod            = time.Second
	whereaboutsConfigPath = "/etc/cni/net.d/whereabouts.d/whereabouts.conf"
	maxRetries            = 2
)

const (
	addressGarbageCollected        = "IPAddressGarbageCollected"
	addressGarbageCollectionFailed = "IPAddressGarbageCollectionFailed"
)
// garbageCollector matches the signature of wbclient.IPManagement; it is
// injected so that unit tests can stub out the actual IP deallocation.
type garbageCollector func(ctx context.Context, mode int, ipamConf types.IPAMConfig, containerID string, podRef string) (net.IPNet, error)

// PodController watches pod deletions and garbage collects the stale
// whereabouts IP allocations that deleted pods leave behind.
type PodController struct {
	arePodsSynched          cache.InformerSynced
	areIPPoolsSynched       cache.InformerSynced
	areNetAttachDefsSynched cache.InformerSynced
	podsInformer            cache.SharedIndexInformer
	ipPoolInformer          cache.SharedIndexInformer
	netAttachDefInformer    cache.SharedIndexInformer
	podLister               v1corelisters.PodLister
	ipPoolLister            wblister.IPPoolLister
	netAttachDefLister      nadlister.NetworkAttachmentDefinitionLister
	broadcaster             record.EventBroadcaster
	recorder                record.EventRecorder
	workqueue               workqueue.RateLimitingInterface
	mountPath               string
	cleanupFunc             garbageCollector
}
// NewPodController creates a PodController wired to the default whereabouts
// IP management function for the actual address cleanup.
func NewPodController(k8sCoreInformerFactory v1coreinformerfactory.SharedInformerFactory, wbSharedInformerFactory wbinformers.SharedInformerFactory, netAttachDefInformerFactory nadinformers.SharedInformerFactory, broadcaster record.EventBroadcaster, recorder record.EventRecorder) *PodController {
	return newPodController(k8sCoreInformerFactory, wbSharedInformerFactory, netAttachDefInformerFactory, broadcaster, recorder, wbclient.IPManagement)
}

func newPodController(k8sCoreInformerFactory v1coreinformerfactory.SharedInformerFactory, wbSharedInformerFactory wbinformers.SharedInformerFactory, netAttachDefInformerFactory nadinformers.SharedInformerFactory, broadcaster record.EventBroadcaster, recorder record.EventRecorder, cleanupFunc garbageCollector) *PodController {
	k8sPodFilteredInformer := k8sCoreInformerFactory.Core().V1().Pods()
	ipPoolInformer := wbSharedInformerFactory.Whereabouts().V1alpha1().IPPools()
	netAttachDefInformer := netAttachDefInformerFactory.K8sCniCncfIo().V1().NetworkAttachmentDefinitions()

	poolInformer := ipPoolInformer.Informer()
	networksInformer := netAttachDefInformer.Informer()
	podsInformer := k8sPodFilteredInformer.Informer()

	queue := workqueue.NewNamedRateLimitingQueue(
		workqueue.DefaultControllerRateLimiter(),
		ipReconcilerQueueName)

	podsInformer.AddEventHandler(
		cache.ResourceEventHandlerFuncs{
			DeleteFunc: func(obj interface{}) {
				onPodDelete(queue, obj)
			},
		})

	return &PodController{
		arePodsSynched:          podsInformer.HasSynced,
		areIPPoolsSynched:       poolInformer.HasSynced,
		areNetAttachDefsSynched: networksInformer.HasSynced,
		broadcaster:             broadcaster,
		recorder:                recorder,
		podsInformer:            podsInformer,
		ipPoolInformer:          poolInformer,
		netAttachDefInformer:    networksInformer,
		podLister:               k8sPodFilteredInformer.Lister(),
		ipPoolLister:            ipPoolInformer.Lister(),
		netAttachDefLister:      netAttachDefInformer.Lister(),
		workqueue:               queue,
		cleanupFunc:             cleanupFunc,
	}
}
// Start runs the worker thread after performing cache synchronization
func (pc *PodController) Start(stopChan <-chan struct{}) {
	logging.Verbosef("starting network controller")
	defer pc.workqueue.ShutDown()

	if ok := cache.WaitForCacheSync(stopChan, pc.arePodsSynched, pc.areNetAttachDefsSynched, pc.areIPPoolsSynched); !ok {
		logging.Verbosef("failed waiting for caches to sync")
		return // do not process events against unsynced caches
	}

	go wait.Until(pc.worker, syncPeriod, stopChan)

	<-stopChan
	logging.Verbosef("shutting down network controller")
}

func (pc *PodController) worker() {
	for pc.processNextWorkItem() {
	}
}

func (pc *PodController) processNextWorkItem() bool {
	queueItem, shouldQuit := pc.workqueue.Get()
	if shouldQuit {
		return false
	}
	defer pc.workqueue.Done(queueItem)

	pod := queueItem.(*v1.Pod)
	err := pc.garbageCollectPodIPs(pod)
	logging.Verbosef("result of garbage collecting pods: %+v", err)
	pc.handleResult(pod, err)

	return true
}
func (pc *PodController) garbageCollectPodIPs(pod *v1.Pod) error {
	podNamespace := pod.GetNamespace()
	podName := pod.GetName()

	ifaceStatuses, err := podNetworkStatus(pod)
	if err != nil {
		return fmt.Errorf("failed to access the network status for pod [%s/%s]: %v", podNamespace, podName, err)
	}

	for _, ifaceStatus := range ifaceStatuses {
		if ifaceStatus.Default {
			logging.Verbosef("skipped net-attach-def for default network")
			continue
		}
		nad, err := pc.ifaceNetAttachDef(ifaceStatus)
		if err != nil {
			return fmt.Errorf("failed to get network-attachment-definition for iface %s: %+v", ifaceStatus.Name, err)
		}

		mountPath := defaultMountPath
		if pc.mountPath != "" {
			mountPath = pc.mountPath
		}
		logging.Verbosef("the NAD's config: %s", nad.Spec)
		ipamConfig, err := ipamConfiguration(nad, podNamespace, podName, mountPath)
		if err != nil && isInvalidPluginType(err) {
			// the attachment does not use the whereabouts IPAM plugin; nothing to clean up
			logging.Debugf("skipping iface %s: %v", ifaceStatus.Name, err)
			continue
		} else if err != nil {
			return fmt.Errorf("failed to create an IPAM configuration for the pod %s iface %s: %+v", podID(podNamespace, podName), ifaceStatus.Name, err)
		}

		pool, err := pc.ipPool(ipamConfig.Range)
		if err != nil {
			return fmt.Errorf("failed to get the IPPool data: %+v", err)
		}

		logging.Verbosef("pool range [%s]", pool.Spec.Range)
		for allocationIndex, allocation := range pool.Spec.Allocations {
			if allocation.PodRef == podID(podNamespace, podName) {
				logging.Verbosef("stale allocation to cleanup: %+v", allocation)
				if _, err := pc.cleanupFunc(context.TODO(), types.Deallocate, *ipamConfig, allocation.ContainerID, podID(podNamespace, podName)); err != nil {
					logging.Errorf("failed to cleanup allocation: %v", err)
				}
				if err := pc.addressGarbageCollected(pod, nad.GetName(), pool.Spec.Range, allocationIndex); err != nil {
					logging.Errorf("failed to issue event for successful IP address cleanup: %v", err)
				}
			}
		}
	}
	return nil
}
func isInvalidPluginType(err error) bool {
	_, isInvalidPluginError := err.(*config.InvalidPluginError)
	return isInvalidPluginError
}

func (pc *PodController) handleResult(pod *v1.Pod, err error) {
	if err == nil {
		pc.workqueue.Forget(pod)
		return
	}

	podNamespace := pod.GetNamespace()
	podName := pod.GetName()
	currentRetries := pc.workqueue.NumRequeues(pod)
	if currentRetries <= maxRetries {
		logging.Verbosef(
			"re-queuing IP address reconciliation request for pod %s; retry #: %d",
			podID(podNamespace, podName),
			currentRetries)
		pc.workqueue.AddRateLimited(pod)
		return
	}

	pc.addressGarbageCollectionFailed(pod, err)
}
func (pc *PodController) ifaceNetAttachDef(ifaceStatus nadv1.NetworkStatus) (*nadv1.NetworkAttachmentDefinition, error) {
	const (
		namespaceIndex = 0
		nameIndex      = 1
	)

	logging.Debugf("pod's network status: %+v", ifaceStatus)
	ifaceInfo := strings.Split(ifaceStatus.Name, "/")
	if len(ifaceInfo) < 2 {
		return nil, fmt.Errorf("network status name %s does not feature the namespace/name syntax", ifaceStatus.Name)
	}

	netNamespaceName := ifaceInfo[namespaceIndex]
	netName := ifaceInfo[nameIndex]

	nad, err := pc.netAttachDefLister.NetworkAttachmentDefinitions(netNamespaceName).Get(netName)
	if err != nil {
		return nil, err
	}
	return nad, nil
}
func (pc *PodController) ipPool(cidr string) (*whereaboutsv1alpha1.IPPool, error) {
	pool, err := pc.ipPoolLister.IPPools(ipPoolsNamespace()).Get(wbclient.NormalizeRange(cidr))
	if err != nil {
		return nil, err
	}
	return pool, nil
}

func (pc *PodController) addressGarbageCollected(pod *v1.Pod, networkName string, ipRange string, allocationIndex string) error {
	if pc.recorder != nil {
		ip, _, err := net.ParseCIDR(ipRange)
		if err != nil {
			return err
		}
		// allocations are keyed by their offset within the range, so the
		// concrete IP is recomputed from the range's base address
		index, err := strconv.Atoi(allocationIndex)
		if err != nil {
			return err
		}
		pc.recorder.Eventf(
			pod,
			v1.EventTypeNormal,
			addressGarbageCollected,
			"successful cleanup of IP address [%s] from network %s",
			allocate.IPAddOffset(ip, uint64(index)),
			networkName)
	}
	return nil
}
func (pc *PodController) addressGarbageCollectionFailed(pod *v1.Pod, err error) {
	logging.Errorf(
		"dropping pod [%s] deletion out of the queue - could not reconcile IP: %+v",
		podID(pod.GetNamespace(), pod.GetName()),
		err)

	pc.workqueue.Forget(pod)

	if pc.recorder != nil {
		pc.recorder.Eventf(
			pod,
			v1.EventTypeWarning,
			addressGarbageCollectionFailed,
			"failed to garbage collect addresses for pod %s",
			podID(pod.GetNamespace(), pod.GetName()))
	}
}

func onPodDelete(queue workqueue.RateLimitingInterface, obj interface{}) {
	pod, err := podFromTombstone(obj)
	if err != nil {
		logging.Errorf("cannot create pod object from %v on pod delete: %v", obj, err)
		return
	}

	logging.Verbosef("deleted pod [%s]", podID(pod.GetNamespace(), pod.GetName()))
	queue.Add(stripPod(pod)) // we only need the pod's metadata & its network-status annotations, hence we strip it
}

func podID(podNamespace string, podName string) string {
	return fmt.Sprintf("%s/%s", podNamespace, podName)
}
func podNetworkStatus(pod *v1.Pod) ([]nadv1.NetworkStatus, error) {
	var ifaceStatuses []nadv1.NetworkStatus
	networkStatus, found := pod.Annotations[nadv1.NetworkStatusAnnot]
	if found {
		if err := json.Unmarshal([]byte(networkStatus), &ifaceStatuses); err != nil {
			return nil, err
		}
	}
	return ifaceStatuses, nil
}

func ipamConfiguration(nad *nadv1.NetworkAttachmentDefinition, podNamespace string, podName string, mountPath string) (*types.IPAMConfig, error) {
	mountedWhereaboutsConfigFilePath := mountPath + whereaboutsConfigPath
	ipamConfig, err := config.LoadIPAMConfiguration([]byte(nad.Spec.Config), "", mountedWhereaboutsConfigFilePath)
	if err != nil {
		return nil, err
	}

	ipamConfig.PodName = podName
	ipamConfig.PodNamespace = podNamespace
	ipamConfig.Kubernetes.KubeConfigPath = mountPath + ipamConfig.Kubernetes.KubeConfigPath // the kubeconfig is accessed through the host mount
	return ipamConfig, nil
}
func ipPoolsNamespace() string {
	const wbNamespaceEnvVariableName = "WHEREABOUTS_NAMESPACE"
	if wbNamespace, found := os.LookupEnv(wbNamespaceEnvVariableName); found {
		return wbNamespace
	}

	const wbDefaultNamespace = "kube-system"
	return wbDefaultNamespace
}

func podFromTombstone(obj interface{}) (*v1.Pod, error) {
	pod, isPod := obj.(*v1.Pod)
	if !isPod {
		tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
		if !ok {
			return nil, fmt.Errorf("received unexpected object: %v", obj)
		}
		pod, ok = tombstone.Obj.(*v1.Pod)
		if !ok {
			return nil, fmt.Errorf("deletedFinalStateUnknown contained non-Pod object: %v", tombstone.Obj)
		}
	}
	return pod, nil
}

func stripPod(pod *v1.Pod) *v1.Pod {
	newPod := pod.DeepCopy()
	newPod.Spec = v1.PodSpec{}
	newPod.Status = v1.PodStatus{}
	return newPod
}