Mar 7 00:54:34.225308 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 7 00:54:34.225353 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 6 22:59:59 -00 2026
Mar 7 00:54:34.225378 kernel: KASLR disabled due to lack of seed
Mar 7 00:54:34.225395 kernel: efi: EFI v2.7 by EDK II
Mar 7 00:54:34.225411 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Mar 7 00:54:34.225427 kernel: ACPI: Early table checksum verification disabled
Mar 7 00:54:34.225446 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 7 00:54:34.225462 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 7 00:54:34.225478 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 7 00:54:34.225494 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 7 00:54:34.225515 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 7 00:54:34.225531 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 7 00:54:34.225547 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 7 00:54:34.225563 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 7 00:54:34.225582 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 7 00:54:34.225603 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 7 00:54:34.225621 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 7 00:54:34.225637 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 7 00:54:34.225654 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 7 00:54:34.225670 kernel: printk: bootconsole [uart0] enabled
Mar 7 00:54:34.225687 kernel: NUMA: Failed to initialise from firmware
Mar 7 00:54:34.225703 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 7 00:54:34.225720 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 7 00:54:34.225736 kernel: Zone ranges:
Mar 7 00:54:34.225753 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Mar 7 00:54:34.225769 kernel:   DMA32    empty
Mar 7 00:54:34.225790 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 7 00:54:34.225806 kernel: Movable zone start for each node
Mar 7 00:54:34.225823 kernel: Early memory node ranges
Mar 7 00:54:34.225839 kernel:   node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 7 00:54:34.225856 kernel:   node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 7 00:54:34.225872 kernel:   node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 7 00:54:34.225889 kernel:   node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 7 00:54:34.225905 kernel:   node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 7 00:54:34.225922 kernel:   node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 7 00:54:34.225939 kernel:   node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 7 00:54:34.225955 kernel:   node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 7 00:54:34.225972 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 7 00:54:34.225992 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 7 00:54:34.226009 kernel: psci: probing for conduit method from ACPI.
Mar 7 00:54:34.226034 kernel: psci: PSCIv1.0 detected in firmware.
Mar 7 00:54:34.226052 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 7 00:54:34.226070 kernel: psci: Trusted OS migration not required
Mar 7 00:54:34.226091 kernel: psci: SMC Calling Convention v1.1
Mar 7 00:54:34.226109 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 7 00:54:34.226127 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 7 00:54:34.226165 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 7 00:54:34.226186 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 7 00:54:34.226204 kernel: Detected PIPT I-cache on CPU0
Mar 7 00:54:34.226222 kernel: CPU features: detected: GIC system register CPU interface
Mar 7 00:54:34.226239 kernel: CPU features: detected: Spectre-v2
Mar 7 00:54:34.226257 kernel: CPU features: detected: Spectre-v3a
Mar 7 00:54:34.226275 kernel: CPU features: detected: Spectre-BHB
Mar 7 00:54:34.226292 kernel: CPU features: detected: ARM erratum 1742098
Mar 7 00:54:34.226315 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 7 00:54:34.226333 kernel: alternatives: applying boot alternatives
Mar 7 00:54:34.226353 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 00:54:34.226372 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 00:54:34.226390 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 00:54:34.226408 kernel: Fallback order for Node 0: 0
Mar 7 00:54:34.226426 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Mar 7 00:54:34.226444 kernel: Policy zone: Normal
Mar 7 00:54:34.226461 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 00:54:34.226479 kernel: software IO TLB: area num 2.
Mar 7 00:54:34.226497 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 7 00:54:34.226520 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Mar 7 00:54:34.226538 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 00:54:34.226556 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 00:54:34.226574 kernel: rcu: RCU event tracing is enabled.
Mar 7 00:54:34.226592 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 00:54:34.226610 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 00:54:34.226628 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 00:54:34.226646 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 00:54:34.226663 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 00:54:34.226681 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 7 00:54:34.226698 kernel: GICv3: 96 SPIs implemented
Mar 7 00:54:34.226720 kernel: GICv3: 0 Extended SPIs implemented
Mar 7 00:54:34.226738 kernel: Root IRQ handler: gic_handle_irq
Mar 7 00:54:34.226755 kernel: GICv3: GICv3 features: 16 PPIs
Mar 7 00:54:34.226772 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 7 00:54:34.226790 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 7 00:54:34.226807 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 7 00:54:34.226826 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 7 00:54:34.226843 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 7 00:54:34.226861 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 7 00:54:34.226879 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 7 00:54:34.226896 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 00:54:34.226914 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 7 00:54:34.226937 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 7 00:54:34.226954 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 7 00:54:34.226972 kernel: Console: colour dummy device 80x25
Mar 7 00:54:34.226990 kernel: printk: console [tty1] enabled
Mar 7 00:54:34.227008 kernel: ACPI: Core revision 20230628
Mar 7 00:54:34.227026 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 7 00:54:34.227045 kernel: pid_max: default: 32768 minimum: 301
Mar 7 00:54:34.227063 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 00:54:34.227081 kernel: landlock: Up and running.
Mar 7 00:54:34.227102 kernel: SELinux: Initializing.
Mar 7 00:54:34.227121 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:54:34.227154 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:54:34.227179 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:54:34.227197 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:54:34.227215 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 00:54:34.227233 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 00:54:34.227252 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 7 00:54:34.227303 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 7 00:54:34.227330 kernel: Remapping and enabling EFI services.
Mar 7 00:54:34.227348 kernel: smp: Bringing up secondary CPUs ...
Mar 7 00:54:34.227366 kernel: Detected PIPT I-cache on CPU1
Mar 7 00:54:34.227384 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 7 00:54:34.227401 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 7 00:54:34.227419 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 7 00:54:34.227437 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 00:54:34.227455 kernel: SMP: Total of 2 processors activated.
Mar 7 00:54:34.227472 kernel: CPU features: detected: 32-bit EL0 Support
Mar 7 00:54:34.227494 kernel: CPU features: detected: 32-bit EL1 Support
Mar 7 00:54:34.227512 kernel: CPU features: detected: CRC32 instructions
Mar 7 00:54:34.227530 kernel: CPU: All CPU(s) started at EL1
Mar 7 00:54:34.227559 kernel: alternatives: applying system-wide alternatives
Mar 7 00:54:34.227581 kernel: devtmpfs: initialized
Mar 7 00:54:34.227601 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 00:54:34.227619 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 00:54:34.227638 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 00:54:34.227656 kernel: SMBIOS 3.0.0 present.
Mar 7 00:54:34.227679 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 7 00:54:34.227698 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 00:54:34.227717 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 7 00:54:34.227736 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 7 00:54:34.227754 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 7 00:54:34.227774 kernel: audit: initializing netlink subsys (disabled)
Mar 7 00:54:34.227792 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Mar 7 00:54:34.227811 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 00:54:34.227834 kernel: cpuidle: using governor menu
Mar 7 00:54:34.227852 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 7 00:54:34.227871 kernel: ASID allocator initialised with 65536 entries
Mar 7 00:54:34.227889 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 00:54:34.227908 kernel: Serial: AMBA PL011 UART driver
Mar 7 00:54:34.227926 kernel: Modules: 17488 pages in range for non-PLT usage
Mar 7 00:54:34.227945 kernel: Modules: 509008 pages in range for PLT usage
Mar 7 00:54:34.227963 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 00:54:34.227982 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 00:54:34.228005 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 7 00:54:34.228024 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 7 00:54:34.228042 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 00:54:34.228061 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 00:54:34.228080 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 7 00:54:34.228099 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 7 00:54:34.228135 kernel: ACPI: Added _OSI(Module Device)
Mar 7 00:54:34.228175 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 00:54:34.228198 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 00:54:34.228223 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 00:54:34.228244 kernel: ACPI: Interpreter enabled
Mar 7 00:54:34.228264 kernel: ACPI: Using GIC for interrupt routing
Mar 7 00:54:34.228283 kernel: ACPI: MCFG table detected, 1 entries
Mar 7 00:54:34.228302 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 7 00:54:34.228628 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 00:54:34.228846 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 7 00:54:34.229057 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 7 00:54:34.229312 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 7 00:54:34.229530 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 7 00:54:34.229556 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 7 00:54:34.229576 kernel: acpiphp: Slot [1] registered
Mar 7 00:54:34.229595 kernel: acpiphp: Slot [2] registered
Mar 7 00:54:34.229614 kernel: acpiphp: Slot [3] registered
Mar 7 00:54:34.229633 kernel: acpiphp: Slot [4] registered
Mar 7 00:54:34.229652 kernel: acpiphp: Slot [5] registered
Mar 7 00:54:34.229677 kernel: acpiphp: Slot [6] registered
Mar 7 00:54:34.229697 kernel: acpiphp: Slot [7] registered
Mar 7 00:54:34.229715 kernel: acpiphp: Slot [8] registered
Mar 7 00:54:34.229733 kernel: acpiphp: Slot [9] registered
Mar 7 00:54:34.229752 kernel: acpiphp: Slot [10] registered
Mar 7 00:54:34.229770 kernel: acpiphp: Slot [11] registered
Mar 7 00:54:34.229788 kernel: acpiphp: Slot [12] registered
Mar 7 00:54:34.229807 kernel: acpiphp: Slot [13] registered
Mar 7 00:54:34.229825 kernel: acpiphp: Slot [14] registered
Mar 7 00:54:34.229843 kernel: acpiphp: Slot [15] registered
Mar 7 00:54:34.229867 kernel: acpiphp: Slot [16] registered
Mar 7 00:54:34.229885 kernel: acpiphp: Slot [17] registered
Mar 7 00:54:34.229904 kernel: acpiphp: Slot [18] registered
Mar 7 00:54:34.229922 kernel: acpiphp: Slot [19] registered
Mar 7 00:54:34.229941 kernel: acpiphp: Slot [20] registered
Mar 7 00:54:34.229959 kernel: acpiphp: Slot [21] registered
Mar 7 00:54:34.229977 kernel: acpiphp: Slot [22] registered
Mar 7 00:54:34.229996 kernel: acpiphp: Slot [23] registered
Mar 7 00:54:34.230014 kernel: acpiphp: Slot [24] registered
Mar 7 00:54:34.230037 kernel: acpiphp: Slot [25] registered
Mar 7 00:54:34.230056 kernel: acpiphp: Slot [26] registered
Mar 7 00:54:34.230074 kernel: acpiphp: Slot [27] registered
Mar 7 00:54:34.230092 kernel: acpiphp: Slot [28] registered
Mar 7 00:54:34.230111 kernel: acpiphp: Slot [29] registered
Mar 7 00:54:34.230130 kernel: acpiphp: Slot [30] registered
Mar 7 00:54:34.230186 kernel: acpiphp: Slot [31] registered
Mar 7 00:54:34.230208 kernel: PCI host bridge to bus 0000:00
Mar 7 00:54:34.230445 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 7 00:54:34.230648 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 7 00:54:34.230952 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 7 00:54:34.231281 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 7 00:54:34.236387 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 7 00:54:34.236653 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 7 00:54:34.236874 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 7 00:54:34.237124 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 7 00:54:34.237416 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 7 00:54:34.237627 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 7 00:54:34.237853 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 7 00:54:34.238069 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 7 00:54:34.238314 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 7 00:54:34.238530 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 7 00:54:34.240620 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 7 00:54:34.240825 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 7 00:54:34.241016 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 7 00:54:34.241291 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 7 00:54:34.241322 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 7 00:54:34.241342 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 7 00:54:34.241361 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 7 00:54:34.241381 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 7 00:54:34.241409 kernel: iommu: Default domain type: Translated
Mar 7 00:54:34.241429 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 7 00:54:34.241449 kernel: efivars: Registered efivars operations
Mar 7 00:54:34.241467 kernel: vgaarb: loaded
Mar 7 00:54:34.241486 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 7 00:54:34.241505 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 00:54:34.241525 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 00:54:34.241544 kernel: pnp: PnP ACPI init
Mar 7 00:54:34.241775 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 7 00:54:34.241810 kernel: pnp: PnP ACPI: found 1 devices
Mar 7 00:54:34.241829 kernel: NET: Registered PF_INET protocol family
Mar 7 00:54:34.241848 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 00:54:34.241867 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 00:54:34.241885 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 00:54:34.241904 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 00:54:34.241923 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 00:54:34.241942 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 00:54:34.241965 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:54:34.241985 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:54:34.242004 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 00:54:34.242022 kernel: PCI: CLS 0 bytes, default 64
Mar 7 00:54:34.242040 kernel: kvm [1]: HYP mode not available
Mar 7 00:54:34.242059 kernel: Initialise system trusted keyrings
Mar 7 00:54:34.242077 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 00:54:34.242096 kernel: Key type asymmetric registered
Mar 7 00:54:34.242114 kernel: Asymmetric key parser 'x509' registered
Mar 7 00:54:34.242137 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 7 00:54:34.242186 kernel: io scheduler mq-deadline registered
Mar 7 00:54:34.242205 kernel: io scheduler kyber registered
Mar 7 00:54:34.242224 kernel: io scheduler bfq registered
Mar 7 00:54:34.242460 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 7 00:54:34.242498 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 7 00:54:34.242539 kernel: ACPI: button: Power Button [PWRB]
Mar 7 00:54:34.242581 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 7 00:54:34.242603 kernel: ACPI: button: Sleep Button [SLPB]
Mar 7 00:54:34.242629 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 00:54:34.242649 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 7 00:54:34.242877 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 7 00:54:34.242903 kernel: printk: console [ttyS0] disabled
Mar 7 00:54:34.242922 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 7 00:54:34.242941 kernel: printk: console [ttyS0] enabled
Mar 7 00:54:34.242960 kernel: printk: bootconsole [uart0] disabled
Mar 7 00:54:34.242978 kernel: thunder_xcv, ver 1.0
Mar 7 00:54:34.242997 kernel: thunder_bgx, ver 1.0
Mar 7 00:54:34.243021 kernel: nicpf, ver 1.0
Mar 7 00:54:34.243039 kernel: nicvf, ver 1.0
Mar 7 00:54:34.243323 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 7 00:54:34.243534 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-07T00:54:33 UTC (1772844873)
Mar 7 00:54:34.243560 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 7 00:54:34.243580 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 7 00:54:34.243600 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 7 00:54:34.243618 kernel: watchdog: Hard watchdog permanently disabled
Mar 7 00:54:34.243645 kernel: NET: Registered PF_INET6 protocol family
Mar 7 00:54:34.243664 kernel: Segment Routing with IPv6
Mar 7 00:54:34.243682 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 00:54:34.243701 kernel: NET: Registered PF_PACKET protocol family
Mar 7 00:54:34.243719 kernel: Key type dns_resolver registered
Mar 7 00:54:34.243738 kernel: registered taskstats version 1
Mar 7 00:54:34.243756 kernel: Loading compiled-in X.509 certificates
Mar 7 00:54:34.243775 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e62b4e4ebcb406beff1271ecc7444548c4ab67e9'
Mar 7 00:54:34.243793 kernel: Key type .fscrypt registered
Mar 7 00:54:34.243816 kernel: Key type fscrypt-provisioning registered
Mar 7 00:54:34.243835 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 00:54:34.243854 kernel: ima: Allocated hash algorithm: sha1
Mar 7 00:54:34.243872 kernel: ima: No architecture policies found
Mar 7 00:54:34.243891 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 7 00:54:34.243910 kernel: clk: Disabling unused clocks
Mar 7 00:54:34.243928 kernel: Freeing unused kernel memory: 39424K
Mar 7 00:54:34.243946 kernel: Run /init as init process
Mar 7 00:54:34.243965 kernel:   with arguments:
Mar 7 00:54:34.243987 kernel:     /init
Mar 7 00:54:34.244006 kernel:   with environment:
Mar 7 00:54:34.244024 kernel:     HOME=/
Mar 7 00:54:34.244042 kernel:     TERM=linux
Mar 7 00:54:34.244065 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 00:54:34.244088 systemd[1]: Detected virtualization amazon.
Mar 7 00:54:34.244109 systemd[1]: Detected architecture arm64.
Mar 7 00:54:34.244182 systemd[1]: Running in initrd.
Mar 7 00:54:34.244212 systemd[1]: No hostname configured, using default hostname.
Mar 7 00:54:34.244232 systemd[1]: Hostname set to .
Mar 7 00:54:34.244253 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 00:54:34.244273 systemd[1]: Queued start job for default target initrd.target.
Mar 7 00:54:34.244294 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:54:34.244314 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:54:34.244335 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 00:54:34.244356 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 00:54:34.244382 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 00:54:34.244403 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 00:54:34.244426 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 00:54:34.244447 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 00:54:34.244468 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:54:34.244488 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:54:34.244513 systemd[1]: Reached target paths.target - Path Units.
Mar 7 00:54:34.244534 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 00:54:34.244554 systemd[1]: Reached target swap.target - Swaps.
Mar 7 00:54:34.244574 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 00:54:34.244594 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 00:54:34.244615 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 00:54:34.244635 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 00:54:34.244656 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 00:54:34.244676 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:54:34.244700 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:54:34.244721 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:54:34.244742 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 00:54:34.244762 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 00:54:34.244782 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 00:54:34.244802 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 00:54:34.244823 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 00:54:34.244843 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 00:54:34.244863 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 00:54:34.244888 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:54:34.244908 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 00:54:34.244965 systemd-journald[251]: Collecting audit messages is disabled.
Mar 7 00:54:34.245009 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:54:34.245035 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 00:54:34.245056 systemd-journald[251]: Journal started
Mar 7 00:54:34.245096 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2aecf19c4b801f0af076830cfc08cc) is 8.0M, max 75.3M, 67.3M free.
Mar 7 00:54:34.233186 systemd-modules-load[252]: Inserted module 'overlay'
Mar 7 00:54:34.264180 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 00:54:34.271902 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 00:54:34.280589 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 00:54:34.290189 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 00:54:34.292980 systemd-modules-load[252]: Inserted module 'br_netfilter'
Mar 7 00:54:34.295058 kernel: Bridge firewalling registered
Mar 7 00:54:34.296963 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:54:34.308589 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 00:54:34.314719 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:54:34.330553 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 00:54:34.345440 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:54:34.351885 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 00:54:34.352812 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:54:34.385228 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:54:34.390200 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:54:34.406509 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 00:54:34.414483 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:54:34.422773 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 00:54:34.459957 dracut-cmdline[290]: dracut-dracut-053
Mar 7 00:54:34.469181 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 00:54:34.507112 systemd-resolved[285]: Positive Trust Anchors:
Mar 7 00:54:34.512374 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 00:54:34.515701 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 00:54:34.618181 kernel: SCSI subsystem initialized
Mar 7 00:54:34.626177 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 00:54:34.639187 kernel: iscsi: registered transport (tcp)
Mar 7 00:54:34.661187 kernel: iscsi: registered transport (qla4xxx)
Mar 7 00:54:34.661260 kernel: QLogic iSCSI HBA Driver
Mar 7 00:54:34.743175 kernel: random: crng init done
Mar 7 00:54:34.743814 systemd-resolved[285]: Defaulting to hostname 'linux'.
Mar 7 00:54:34.748310 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 00:54:34.756402 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:54:34.776245 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 00:54:34.788443 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 00:54:34.823991 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 00:54:34.824067 kernel: device-mapper: uevent: version 1.0.3
Mar 7 00:54:34.824096 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 00:54:34.891204 kernel: raid6: neonx8   gen()  6700 MB/s
Mar 7 00:54:34.908176 kernel: raid6: neonx4   gen()  6555 MB/s
Mar 7 00:54:34.925174 kernel: raid6: neonx2   gen()  5462 MB/s
Mar 7 00:54:34.942175 kernel: raid6: neonx1   gen()  3952 MB/s
Mar 7 00:54:34.959174 kernel: raid6: int64x8  gen()  3798 MB/s
Mar 7 00:54:34.976175 kernel: raid6: int64x4  gen()  3701 MB/s
Mar 7 00:54:34.993174 kernel: raid6: int64x2  gen()  3599 MB/s
Mar 7 00:54:35.011204 kernel: raid6: int64x1  gen()  2734 MB/s
Mar 7 00:54:35.011236 kernel: raid6: using algorithm neonx8 gen() 6700 MB/s
Mar 7 00:54:35.030214 kernel: raid6: .... xor() 4840 MB/s, rmw enabled
Mar 7 00:54:35.030262 kernel: raid6: using neon recovery algorithm
Mar 7 00:54:35.038179 kernel: xor: measuring software checksum speed
Mar 7 00:54:35.038236 kernel:    8regs           : 10257 MB/sec
Mar 7 00:54:35.040413 kernel:    32regs          : 11916 MB/sec
Mar 7 00:54:35.041711 kernel:    arm64_neon      :  9495 MB/sec
Mar 7 00:54:35.041752 kernel: xor: using function: 32regs (11916 MB/sec)
Mar 7 00:54:35.126560 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 00:54:35.145378 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 00:54:35.156523 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:54:35.191619 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Mar 7 00:54:35.200350 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 00:54:35.225550 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 00:54:35.252761 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Mar 7 00:54:35.309786 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 00:54:35.320475 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 00:54:35.435909 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:54:35.456560 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 00:54:35.518405 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 00:54:35.531329 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 00:54:35.542911 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:54:35.558241 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 00:54:35.575024 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 00:54:35.618811 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 00:54:35.650485 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 7 00:54:35.650547 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Mar 7 00:54:35.661478 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 7 00:54:35.661811 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 7 00:54:35.668192 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 7 00:54:35.668715 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 00:54:35.671663 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 7 00:54:35.669017 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 7 00:54:35.686809 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:36:6b:4b:0d:1f Mar 7 00:54:35.690471 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 7 00:54:35.680519 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 00:54:35.683107 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 00:54:35.683415 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:54:35.686694 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:54:35.700641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:54:35.719443 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 00:54:35.719505 kernel: GPT:9289727 != 33554431 Mar 7 00:54:35.719531 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 00:54:35.719567 kernel: GPT:9289727 != 33554431 Mar 7 00:54:35.722341 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 00:54:35.723447 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 7 00:54:35.726694 (udev-worker)[535]: Network interface NamePolicy= disabled on kernel command line. Mar 7 00:54:35.747351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:54:35.760506 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 00:54:35.807118 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 00:54:35.826493 kernel: BTRFS: device fsid 237c8587-8110-47ef-99f9-37e4ed4d3b31 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (529) Mar 7 00:54:35.875189 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (532) Mar 7 00:54:35.886324 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Mar 7 00:54:35.928319 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Mar 7 00:54:35.931269 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Mar 7 00:54:35.968053 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 00:54:35.992389 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Mar 7 00:54:36.006529 disk-uuid[664]: Primary Header is updated. Mar 7 00:54:36.006529 disk-uuid[664]: Secondary Entries is updated. Mar 7 00:54:36.006529 disk-uuid[664]: Secondary Header is updated. Mar 7 00:54:36.020175 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 7 00:54:36.033168 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 7 00:54:36.039175 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 7 00:54:36.524361 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 7 00:54:37.043228 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 7 00:54:37.044206 disk-uuid[665]: The operation has completed successfully. Mar 7 00:54:37.227480 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 00:54:37.227836 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 00:54:37.280478 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 00:54:37.293671 sh[1007]: Success Mar 7 00:54:37.321220 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 7 00:54:37.431029 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 00:54:37.440782 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 00:54:37.447627 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 7 00:54:37.479289 kernel: BTRFS info (device dm-0): first mount of filesystem 237c8587-8110-47ef-99f9-37e4ed4d3b31 Mar 7 00:54:37.479359 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:54:37.481305 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 00:54:37.482727 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 00:54:37.483935 kernel: BTRFS info (device dm-0): using free space tree Mar 7 00:54:37.521176 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 7 00:54:37.538762 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 00:54:37.543681 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 00:54:37.557392 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 00:54:37.563459 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 00:54:37.609438 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:54:37.609509 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:54:37.611199 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 7 00:54:37.632205 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 7 00:54:37.654447 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 00:54:37.657963 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:54:37.672539 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 00:54:37.688021 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 00:54:37.761099 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 7 00:54:37.782489 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 00:54:37.849382 systemd-networkd[1200]: lo: Link UP Mar 7 00:54:37.849403 systemd-networkd[1200]: lo: Gained carrier Mar 7 00:54:37.853430 systemd-networkd[1200]: Enumeration completed Mar 7 00:54:37.853621 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 00:54:37.860445 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:54:37.860465 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 00:54:37.862385 systemd[1]: Reached target network.target - Network. Mar 7 00:54:37.865366 systemd-networkd[1200]: eth0: Link UP Mar 7 00:54:37.865374 systemd-networkd[1200]: eth0: Gained carrier Mar 7 00:54:37.865392 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:54:37.893233 systemd-networkd[1200]: eth0: DHCPv4 address 172.31.17.228/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 7 00:54:37.928783 ignition[1150]: Ignition 2.19.0 Mar 7 00:54:37.929343 ignition[1150]: Stage: fetch-offline Mar 7 00:54:37.931681 ignition[1150]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:54:37.931706 ignition[1150]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 7 00:54:37.932295 ignition[1150]: Ignition finished successfully Mar 7 00:54:37.941208 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 00:54:37.956077 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 7 00:54:37.980238 ignition[1210]: Ignition 2.19.0 Mar 7 00:54:37.980265 ignition[1210]: Stage: fetch Mar 7 00:54:37.982091 ignition[1210]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:54:37.982119 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 7 00:54:37.982718 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 7 00:54:38.002087 ignition[1210]: PUT result: OK Mar 7 00:54:38.006934 ignition[1210]: parsed url from cmdline: "" Mar 7 00:54:38.006957 ignition[1210]: no config URL provided Mar 7 00:54:38.006973 ignition[1210]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 00:54:38.007027 ignition[1210]: no config at "/usr/lib/ignition/user.ign" Mar 7 00:54:38.007061 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 7 00:54:38.011292 ignition[1210]: PUT result: OK Mar 7 00:54:38.011384 ignition[1210]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Mar 7 00:54:38.020275 ignition[1210]: GET result: OK Mar 7 00:54:38.020622 ignition[1210]: parsing config with SHA512: 053b23b2d6efdbabc98c3515eb39c4a7c25e2c9dbacaba941e6ab9f0fc935e7a380661a1648062738369ff88e89f116cd9690ae4c338347fa721ea4aeae9f88b Mar 7 00:54:38.030342 unknown[1210]: fetched base config from "system" Mar 7 00:54:38.030607 unknown[1210]: fetched base config from "system" Mar 7 00:54:38.031690 ignition[1210]: fetch: fetch complete Mar 7 00:54:38.030623 unknown[1210]: fetched user config from "aws" Mar 7 00:54:38.031711 ignition[1210]: fetch: fetch passed Mar 7 00:54:38.039130 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 00:54:38.031809 ignition[1210]: Ignition finished successfully Mar 7 00:54:38.055595 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 7 00:54:38.086242 ignition[1216]: Ignition 2.19.0 Mar 7 00:54:38.086261 ignition[1216]: Stage: kargs Mar 7 00:54:38.086869 ignition[1216]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:54:38.086894 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 7 00:54:38.087044 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 7 00:54:38.089556 ignition[1216]: PUT result: OK Mar 7 00:54:38.102021 ignition[1216]: kargs: kargs passed Mar 7 00:54:38.102119 ignition[1216]: Ignition finished successfully Mar 7 00:54:38.110363 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 00:54:38.121567 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 00:54:38.148121 ignition[1223]: Ignition 2.19.0 Mar 7 00:54:38.148679 ignition[1223]: Stage: disks Mar 7 00:54:38.149356 ignition[1223]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:54:38.149382 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 7 00:54:38.149553 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 7 00:54:38.158184 ignition[1223]: PUT result: OK Mar 7 00:54:38.162657 ignition[1223]: disks: disks passed Mar 7 00:54:38.162938 ignition[1223]: Ignition finished successfully Mar 7 00:54:38.170200 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 00:54:38.173308 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 00:54:38.177307 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 00:54:38.187798 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 00:54:38.190156 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 00:54:38.192827 systemd[1]: Reached target basic.target - Basic System. Mar 7 00:54:38.206426 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 7 00:54:38.255714 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 7 00:54:38.261321 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 00:54:38.276404 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 00:54:38.358172 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 596a8ea8-9d3d-4d06-a56e-9d3ebd3cb76d r/w with ordered data mode. Quota mode: none. Mar 7 00:54:38.359302 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 00:54:38.363438 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 00:54:38.381329 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 00:54:38.389321 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 00:54:38.393615 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 7 00:54:38.393715 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 00:54:38.416380 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1250) Mar 7 00:54:38.393768 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 00:54:38.422732 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:54:38.422778 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:54:38.424184 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 7 00:54:38.431838 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 00:54:38.445219 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 7 00:54:38.448064 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 7 00:54:38.455827 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 00:54:38.559223 initrd-setup-root[1274]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 00:54:38.572013 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory Mar 7 00:54:38.582264 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 00:54:38.589427 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 00:54:38.743792 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 00:54:38.759804 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 00:54:38.769190 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 00:54:38.786547 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 00:54:38.789324 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:54:38.825173 ignition[1362]: INFO : Ignition 2.19.0 Mar 7 00:54:38.825173 ignition[1362]: INFO : Stage: mount Mar 7 00:54:38.825173 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 00:54:38.825173 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 7 00:54:38.836266 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 7 00:54:38.836266 ignition[1362]: INFO : PUT result: OK Mar 7 00:54:38.845090 ignition[1362]: INFO : mount: mount passed Mar 7 00:54:38.845090 ignition[1362]: INFO : Ignition finished successfully Mar 7 00:54:38.850248 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 00:54:38.864456 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 00:54:38.874524 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 00:54:38.890516 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 7 00:54:38.917203 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1374) Mar 7 00:54:38.921348 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:54:38.921402 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:54:38.921429 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 7 00:54:38.927174 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 7 00:54:38.931177 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 00:54:38.971396 ignition[1391]: INFO : Ignition 2.19.0 Mar 7 00:54:38.971396 ignition[1391]: INFO : Stage: files Mar 7 00:54:38.975599 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 00:54:38.975599 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 7 00:54:38.975599 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 7 00:54:38.975599 ignition[1391]: INFO : PUT result: OK Mar 7 00:54:38.987250 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping Mar 7 00:54:38.990241 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 00:54:38.990241 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 00:54:38.999587 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 00:54:39.002694 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 00:54:39.002694 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 00:54:39.000412 unknown[1391]: wrote ssh authorized keys file for user: core Mar 7 00:54:39.010942 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 7 
00:54:39.010942 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 7 00:54:39.088322 systemd-networkd[1200]: eth0: Gained IPv6LL Mar 7 00:54:39.105873 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 00:54:39.240921 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 7 00:54:39.245378 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 00:54:39.251284 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 00:54:39.251284 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 00:54:39.251284 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 00:54:39.251284 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 00:54:39.251284 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 00:54:39.251284 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 00:54:39.251284 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 00:54:39.251284 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 00:54:39.251284 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Mar 7 00:54:39.251284 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 7 00:54:39.251284 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 7 00:54:39.299082 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 7 00:54:39.299082 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1 Mar 7 00:54:39.755030 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 00:54:40.185338 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 7 00:54:40.185338 ignition[1391]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 00:54:40.193234 ignition[1391]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 00:54:40.193234 ignition[1391]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 00:54:40.193234 ignition[1391]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 00:54:40.193234 ignition[1391]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 7 00:54:40.193234 ignition[1391]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 00:54:40.193234 ignition[1391]: INFO : files: 
createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 00:54:40.193234 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 00:54:40.193234 ignition[1391]: INFO : files: files passed Mar 7 00:54:40.193234 ignition[1391]: INFO : Ignition finished successfully Mar 7 00:54:40.200415 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 00:54:40.235433 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 00:54:40.243456 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 00:54:40.253821 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 00:54:40.254656 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 00:54:40.280535 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 00:54:40.280535 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 00:54:40.288810 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 00:54:40.294349 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 00:54:40.298920 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 00:54:40.312570 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 00:54:40.375047 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 00:54:40.375292 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 00:54:40.382708 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 00:54:40.387485 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Mar 7 00:54:40.389730 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 00:54:40.402518 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 00:54:40.427711 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 00:54:40.445206 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 00:54:40.471433 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 00:54:40.474380 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:54:40.477239 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 00:54:40.479460 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 00:54:40.479723 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 00:54:40.494458 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 00:54:40.497061 systemd[1]: Stopped target basic.target - Basic System. Mar 7 00:54:40.503277 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 00:54:40.506341 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 00:54:40.510858 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 00:54:40.516012 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 00:54:40.518616 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 00:54:40.523198 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 00:54:40.531460 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 00:54:40.533282 systemd[1]: Stopped target swap.target - Swaps. Mar 7 00:54:40.539470 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Mar 7 00:54:40.539697 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 00:54:40.544078 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 00:54:40.546468 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 00:54:40.551160 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 00:54:40.551352 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 00:54:40.559158 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 00:54:40.559379 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 00:54:40.573913 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 00:54:40.574707 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 00:54:40.582681 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 00:54:40.583076 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 00:54:40.596532 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 00:54:40.598857 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 00:54:40.599265 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 00:54:40.614632 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 00:54:40.621368 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 00:54:40.625373 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:54:40.632716 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 00:54:40.635385 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 00:54:40.649202 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 00:54:40.652190 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 7 00:54:40.670793 ignition[1443]: INFO : Ignition 2.19.0 Mar 7 00:54:40.670793 ignition[1443]: INFO : Stage: umount Mar 7 00:54:40.676050 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 00:54:40.676050 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 7 00:54:40.676050 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 7 00:54:40.684711 ignition[1443]: INFO : PUT result: OK Mar 7 00:54:40.691248 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 00:54:40.696253 ignition[1443]: INFO : umount: umount passed Mar 7 00:54:40.696253 ignition[1443]: INFO : Ignition finished successfully Mar 7 00:54:40.699611 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 00:54:40.704379 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 00:54:40.708998 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 00:54:40.711189 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 00:54:40.718491 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 00:54:40.718787 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 00:54:40.725491 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 00:54:40.725584 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 00:54:40.727977 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 00:54:40.728054 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 00:54:40.730409 systemd[1]: Stopped target network.target - Network. Mar 7 00:54:40.732686 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 00:54:40.732767 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 00:54:40.735445 systemd[1]: Stopped target paths.target - Path Units. Mar 7 00:54:40.738029 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Mar 7 00:54:40.740200 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 00:54:40.743171 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 00:54:40.745597 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 00:54:40.747834 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 00:54:40.747909 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 00:54:40.750249 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 00:54:40.750321 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 00:54:40.752672 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 00:54:40.752754 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 00:54:40.755083 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 00:54:40.755186 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 00:54:40.759850 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 00:54:40.759932 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 00:54:40.795374 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 00:54:40.798532 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 00:54:40.803903 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 00:54:40.804676 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 00:54:40.810205 systemd-networkd[1200]: eth0: DHCPv6 lease lost Mar 7 00:54:40.812823 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 00:54:40.812995 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 00:54:40.818488 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 00:54:40.818699 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Mar 7 00:54:40.822028 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 00:54:40.822502 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 00:54:40.832861 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 00:54:40.844690 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 00:54:40.845009 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 00:54:40.849318 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 00:54:40.849408 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 00:54:40.852619 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 00:54:40.852710 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 00:54:40.860333 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 00:54:40.895170 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 00:54:40.895505 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 00:54:40.910971 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 00:54:40.911111 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 00:54:40.922126 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 00:54:40.922320 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 00:54:40.924789 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 00:54:40.924892 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 00:54:40.930653 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 00:54:40.930764 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 00:54:40.944370 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 7 00:54:40.944471 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 00:54:40.960567 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 00:54:40.960720 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 00:54:40.960824 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 00:54:40.978118 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 00:54:40.978243 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:54:40.984115 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 00:54:40.984360 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 00:54:40.988823 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 00:54:40.988998 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 00:54:40.994808 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 00:54:41.009458 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 00:54:41.045639 systemd[1]: Switching root. Mar 7 00:54:41.077554 systemd-journald[251]: Journal stopped Mar 7 00:54:43.022795 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Mar 7 00:54:43.022922 kernel: SELinux: policy capability network_peer_controls=1 Mar 7 00:54:43.022966 kernel: SELinux: policy capability open_perms=1 Mar 7 00:54:43.023011 kernel: SELinux: policy capability extended_socket_class=1 Mar 7 00:54:43.023045 kernel: SELinux: policy capability always_check_network=0 Mar 7 00:54:43.023076 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 7 00:54:43.023107 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 7 00:54:43.023137 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 7 00:54:43.023190 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 7 00:54:43.023221 kernel: audit: type=1403 audit(1772844881.431:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 7 00:54:43.023254 systemd[1]: Successfully loaded SELinux policy in 51.458ms. Mar 7 00:54:43.023299 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.049ms. Mar 7 00:54:43.023341 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 00:54:43.023375 systemd[1]: Detected virtualization amazon. Mar 7 00:54:43.023417 systemd[1]: Detected architecture arm64. Mar 7 00:54:43.023448 systemd[1]: Detected first boot. Mar 7 00:54:43.023478 systemd[1]: Initializing machine ID from VM UUID. Mar 7 00:54:43.023511 zram_generator::config[1485]: No configuration found. Mar 7 00:54:43.023546 systemd[1]: Populated /etc with preset unit settings. Mar 7 00:54:43.023579 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 7 00:54:43.023614 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 7 00:54:43.023647 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Mar 7 00:54:43.023680 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 7 00:54:43.023718 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 7 00:54:43.023748 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 7 00:54:43.023778 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 7 00:54:43.023810 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 7 00:54:43.023842 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 7 00:54:43.023874 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 7 00:54:43.023908 systemd[1]: Created slice user.slice - User and Session Slice. Mar 7 00:54:43.023940 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 00:54:43.023971 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 00:54:43.024000 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 7 00:54:43.024032 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 7 00:54:43.024065 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 7 00:54:43.024120 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 00:54:43.024174 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 7 00:54:43.024208 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 00:54:43.024243 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 7 00:54:43.024274 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Mar 7 00:54:43.024307 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 7 00:54:43.024337 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 7 00:54:43.024370 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:54:43.024404 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 00:54:43.024439 systemd[1]: Reached target slices.target - Slice Units. Mar 7 00:54:43.024471 systemd[1]: Reached target swap.target - Swaps. Mar 7 00:54:43.024506 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 7 00:54:43.024539 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 7 00:54:43.024568 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 00:54:43.024598 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 00:54:43.024631 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 00:54:43.024662 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 7 00:54:43.024695 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 7 00:54:43.024726 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 7 00:54:43.024756 systemd[1]: Mounting media.mount - External Media Directory... Mar 7 00:54:43.024791 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 7 00:54:43.024823 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 7 00:54:43.024853 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 7 00:54:43.024884 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 00:54:43.024916 systemd[1]: Reached target machines.target - Containers. 
Mar 7 00:54:43.024947 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 7 00:54:43.024977 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:54:43.025006 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 00:54:43.025040 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 7 00:54:43.027234 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 00:54:43.027278 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 00:54:43.027312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 00:54:43.027343 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 7 00:54:43.027374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 00:54:43.027408 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 7 00:54:43.027440 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 7 00:54:43.027480 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 7 00:54:43.027511 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 7 00:54:43.027543 systemd[1]: Stopped systemd-fsck-usr.service. Mar 7 00:54:43.027573 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 00:54:43.027613 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 00:54:43.027646 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 00:54:43.027678 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Mar 7 00:54:43.027710 kernel: loop: module loaded Mar 7 00:54:43.027741 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 00:54:43.027773 systemd[1]: verity-setup.service: Deactivated successfully. Mar 7 00:54:43.027808 systemd[1]: Stopped verity-setup.service. Mar 7 00:54:43.027841 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 7 00:54:43.027873 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 7 00:54:43.027904 systemd[1]: Mounted media.mount - External Media Directory. Mar 7 00:54:43.027937 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 7 00:54:43.027969 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 7 00:54:43.027999 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 7 00:54:43.028029 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 00:54:43.028065 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 7 00:54:43.028127 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 7 00:54:43.028252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:54:43.028286 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:54:43.028403 systemd-journald[1575]: Collecting audit messages is disabled. Mar 7 00:54:43.028467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:54:43.028498 kernel: fuse: init (API version 7.39) Mar 7 00:54:43.028530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:54:43.028561 systemd-journald[1575]: Journal started Mar 7 00:54:43.028609 systemd-journald[1575]: Runtime Journal (/run/log/journal/ec2aecf19c4b801f0af076830cfc08cc) is 8.0M, max 75.3M, 67.3M free. Mar 7 00:54:42.434562 systemd[1]: Queued start job for default target multi-user.target. 
Mar 7 00:54:43.032392 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 00:54:42.462424 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Mar 7 00:54:42.463254 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 7 00:54:43.040789 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:54:43.044030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 00:54:43.047959 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 7 00:54:43.049037 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 7 00:54:43.052834 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 7 00:54:43.055945 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 00:54:43.059044 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 00:54:43.063126 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 7 00:54:43.073178 kernel: ACPI: bus type drm_connector registered Mar 7 00:54:43.075397 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 00:54:43.075809 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 00:54:43.097905 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 00:54:43.109472 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 7 00:54:43.119366 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 7 00:54:43.126534 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 7 00:54:43.126594 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 00:54:43.133282 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Mar 7 00:54:43.144548 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 00:54:43.152772 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 7 00:54:43.155493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:54:43.164493 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 7 00:54:43.169399 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 7 00:54:43.172117 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 00:54:43.175036 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 7 00:54:43.178463 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 00:54:43.181445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 00:54:43.196540 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 00:54:43.204562 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 00:54:43.212225 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 7 00:54:43.215518 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 7 00:54:43.219098 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 00:54:43.240102 systemd-journald[1575]: Time spent on flushing to /var/log/journal/ec2aecf19c4b801f0af076830cfc08cc is 165.058ms for 899 entries. Mar 7 00:54:43.240102 systemd-journald[1575]: System Journal (/var/log/journal/ec2aecf19c4b801f0af076830cfc08cc) is 8.0M, max 195.6M, 187.6M free. 
Mar 7 00:54:43.439479 systemd-journald[1575]: Received client request to flush runtime journal. Mar 7 00:54:43.439573 kernel: loop0: detected capacity change from 0 to 114328 Mar 7 00:54:43.439628 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 00:54:43.439673 kernel: loop1: detected capacity change from 0 to 200864 Mar 7 00:54:43.253200 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 7 00:54:43.257011 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 7 00:54:43.274627 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 7 00:54:43.347564 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 00:54:43.389038 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 00:54:43.403487 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 00:54:43.424515 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 00:54:43.433287 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 7 00:54:43.448241 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 7 00:54:43.470892 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:54:43.483534 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 7 00:54:43.538517 udevadm[1634]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 7 00:54:43.542275 systemd-tmpfiles[1627]: ACLs are not supported, ignoring. Mar 7 00:54:43.542317 systemd-tmpfiles[1627]: ACLs are not supported, ignoring. Mar 7 00:54:43.560951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 7 00:54:43.598652 kernel: loop2: detected capacity change from 0 to 114432 Mar 7 00:54:43.655188 kernel: loop3: detected capacity change from 0 to 52536 Mar 7 00:54:43.711209 kernel: loop4: detected capacity change from 0 to 114328 Mar 7 00:54:43.743288 kernel: loop5: detected capacity change from 0 to 200864 Mar 7 00:54:43.771189 kernel: loop6: detected capacity change from 0 to 114432 Mar 7 00:54:43.804189 kernel: loop7: detected capacity change from 0 to 52536 Mar 7 00:54:43.828469 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Mar 7 00:54:43.829575 (sd-merge)[1641]: Merged extensions into '/usr'. Mar 7 00:54:43.845328 systemd[1]: Reloading requested from client PID 1615 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 00:54:43.845354 systemd[1]: Reloading... Mar 7 00:54:44.060698 zram_generator::config[1670]: No configuration found. Mar 7 00:54:44.221444 ldconfig[1610]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 00:54:44.401748 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:54:44.522202 systemd[1]: Reloading finished in 676 ms. Mar 7 00:54:44.564931 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 7 00:54:44.568806 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 00:54:44.572234 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 00:54:44.588473 systemd[1]: Starting ensure-sysext.service... Mar 7 00:54:44.595441 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 00:54:44.612505 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 7 00:54:44.629266 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)... Mar 7 00:54:44.629295 systemd[1]: Reloading... Mar 7 00:54:44.650668 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 00:54:44.651986 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 00:54:44.653877 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 00:54:44.654676 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Mar 7 00:54:44.654936 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Mar 7 00:54:44.661340 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 00:54:44.661566 systemd-tmpfiles[1721]: Skipping /boot Mar 7 00:54:44.681311 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 00:54:44.681510 systemd-tmpfiles[1721]: Skipping /boot Mar 7 00:54:44.735713 systemd-udevd[1722]: Using default interface naming scheme 'v255'. Mar 7 00:54:44.807184 zram_generator::config[1752]: No configuration found. Mar 7 00:54:44.990393 (udev-worker)[1761]: Network interface NamePolicy= disabled on kernel command line. Mar 7 00:54:45.198563 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1767) Mar 7 00:54:45.249865 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:54:45.401745 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 7 00:54:45.402116 systemd[1]: Reloading finished in 772 ms. 
Mar 7 00:54:45.441815 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 00:54:45.447304 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 00:54:45.523681 systemd[1]: Finished ensure-sysext.service. Mar 7 00:54:45.549059 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 00:54:45.558471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 7 00:54:45.579441 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 00:54:45.586462 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 00:54:45.589560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:54:45.594521 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 00:54:45.599631 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 00:54:45.606279 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 00:54:45.616608 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 00:54:45.625377 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 00:54:45.628669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:54:45.644720 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 00:54:45.651421 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 00:54:45.661477 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 00:54:45.671528 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 7 00:54:45.674299 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 00:54:45.684277 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 00:54:45.689494 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:54:45.694128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:54:45.695294 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:54:45.729112 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 00:54:45.746322 lvm[1919]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 00:54:45.757993 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 00:54:45.760299 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 00:54:45.772698 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:54:45.773087 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 00:54:45.777309 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 00:54:45.800745 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:54:45.804291 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:54:45.807830 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 00:54:45.812313 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 00:54:45.827474 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 00:54:45.886083 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 00:54:45.890115 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Mar 7 00:54:45.901525 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 00:54:45.906107 augenrules[1954]: No rules Mar 7 00:54:45.910910 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 00:54:45.932289 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 00:54:45.951611 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 00:54:45.954066 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 00:54:45.969172 lvm[1956]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 00:54:45.990805 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 00:54:45.993891 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 00:54:46.016791 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 00:54:46.041546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:54:46.066172 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 00:54:46.122277 systemd-networkd[1932]: lo: Link UP Mar 7 00:54:46.122292 systemd-networkd[1932]: lo: Gained carrier Mar 7 00:54:46.125711 systemd-networkd[1932]: Enumeration completed Mar 7 00:54:46.126078 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 00:54:46.129635 systemd-networkd[1932]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:54:46.129813 systemd-networkd[1932]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 7 00:54:46.132274 systemd-networkd[1932]: eth0: Link UP Mar 7 00:54:46.132776 systemd-networkd[1932]: eth0: Gained carrier Mar 7 00:54:46.132954 systemd-networkd[1932]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:54:46.140711 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 00:54:46.147583 systemd-resolved[1935]: Positive Trust Anchors: Mar 7 00:54:46.147611 systemd-resolved[1935]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 00:54:46.147674 systemd-resolved[1935]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 00:54:46.148937 systemd-networkd[1932]: eth0: DHCPv4 address 172.31.17.228/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 7 00:54:46.161949 systemd-resolved[1935]: Defaulting to hostname 'linux'. Mar 7 00:54:46.165508 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 00:54:46.168343 systemd[1]: Reached target network.target - Network. Mar 7 00:54:46.170468 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 00:54:46.173170 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 00:54:46.175727 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Mar 7 00:54:46.178589 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 00:54:46.181798 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 00:54:46.186466 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 00:54:46.189295 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 00:54:46.192229 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 00:54:46.192288 systemd[1]: Reached target paths.target - Path Units. Mar 7 00:54:46.194260 systemd[1]: Reached target timers.target - Timer Units. Mar 7 00:54:46.197867 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 00:54:46.204707 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 00:54:46.214600 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 00:54:46.218107 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 00:54:46.220743 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 00:54:46.222908 systemd[1]: Reached target basic.target - Basic System. Mar 7 00:54:46.225453 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 00:54:46.225509 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 00:54:46.232427 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 00:54:46.244466 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 7 00:54:46.250467 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 00:54:46.258425 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Mar 7 00:54:46.264514 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 00:54:46.269413 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 00:54:46.281426 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 00:54:46.290264 systemd[1]: Started ntpd.service - Network Time Service. Mar 7 00:54:46.300402 jq[1983]: false Mar 7 00:54:46.295933 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 00:54:46.304798 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 7 00:54:46.310572 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 00:54:46.321565 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 00:54:46.334482 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 00:54:46.341872 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 00:54:46.344901 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 00:54:46.349926 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 00:54:46.358319 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 00:54:46.363829 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 00:54:46.366278 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 00:54:46.422535 dbus-daemon[1982]: [system] SELinux support is enabled Mar 7 00:54:46.427433 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 7 00:54:46.434838 dbus-daemon[1982]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1932 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 7 00:54:46.453373 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 7 00:54:46.441670 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 00:54:46.443091 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 00:54:46.447833 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 00:54:46.447922 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 00:54:46.454307 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 00:54:46.454347 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 00:54:46.482935 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 00:54:46.485360 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 00:54:46.518234 jq[1995]: true Mar 7 00:54:46.522440 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Mar 7 00:54:46.569785 (ntainerd)[2016]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 00:54:46.596777 extend-filesystems[1984]: Found loop4 Mar 7 00:54:46.596777 extend-filesystems[1984]: Found loop5 Mar 7 00:54:46.607575 extend-filesystems[1984]: Found loop6 Mar 7 00:54:46.607575 extend-filesystems[1984]: Found loop7 Mar 7 00:54:46.607575 extend-filesystems[1984]: Found nvme0n1 Mar 7 00:54:46.607575 extend-filesystems[1984]: Found nvme0n1p1 Mar 7 00:54:46.607575 extend-filesystems[1984]: Found nvme0n1p2 Mar 7 00:54:46.609199 ntpd[1986]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:14:43 UTC 2026 (1): Starting 
Mar 7 00:54:46.609246 ntpd[1986]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 7 00:54:46.632764 extend-filesystems[1984]: Found nvme0n1p3 Mar 7 00:54:46.632764 extend-filesystems[1984]: Found usr Mar 7 00:54:46.632764 extend-filesystems[1984]: Found nvme0n1p4 Mar 7 00:54:46.632764 extend-filesystems[1984]: Found nvme0n1p6 Mar 7 00:54:46.632764 extend-filesystems[1984]: Found nvme0n1p7 Mar 7 00:54:46.609267 ntpd[1986]: ---------------------------------------------------- 
Mar 7 00:54:46.652902 extend-filesystems[1984]: Found nvme0n1p9 Mar 7 00:54:46.652902 extend-filesystems[1984]: Checking size of /dev/nvme0n1p9 Mar 7 00:54:46.609286 ntpd[1986]: ntp-4 is maintained by Network Time Foundation, Mar 7 00:54:46.609305 ntpd[1986]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 7 00:54:46.609323 ntpd[1986]: corporation. Support and training for ntp-4 are Mar 7 00:54:46.609342 ntpd[1986]: available at https://www.nwtime.org/support Mar 7 00:54:46.609361 ntpd[1986]: ---------------------------------------------------- Mar 7 00:54:46.623743 ntpd[1986]: proto: precision = 0.096 usec (-23) Mar 7 00:54:46.627784 ntpd[1986]: basedate set to 2026-02-22 Mar 7 00:54:46.627818 ntpd[1986]: gps base set to 2026-02-22 (week 2407) Mar 7 00:54:46.635367 ntpd[1986]: Listen and drop on 0 v6wildcard [::]:123 Mar 7 00:54:46.635446 ntpd[1986]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 7 00:54:46.644667 ntpd[1986]: Listen normally on 2 lo 127.0.0.1:123 Mar 7 00:54:46.644741 ntpd[1986]: Listen normally on 3 eth0 172.31.17.228:123 Mar 7 00:54:46.644816 ntpd[1986]: Listen normally on 4 lo [::1]:123 Mar 7 00:54:46.644896 ntpd[1986]: bind(21) AF_INET6 fe80::436:6bff:fe4b:d1f%2#123 flags 0x11 failed: Cannot assign requested address Mar 7 00:54:46.644936 ntpd[1986]: unable to create socket on eth0 (5) for fe80::436:6bff:fe4b:d1f%2#123 Mar 7 00:54:46.644965 ntpd[1986]: failed to init interface for address fe80::436:6bff:fe4b:d1f%2 Mar 7 00:54:46.645023 ntpd[1986]: Listening on routing socket on fd #21 for interface updates Mar 7 00:54:46.661657 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 00:54:46.661712 ntpd[1986]: kernel reports 
TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 00:54:46.678171 jq[2019]: true Mar 7 00:54:46.686174 update_engine[1993]: I20260307 00:54:46.684299 1993 main.cc:92] Flatcar Update Engine starting Mar 7 00:54:46.699753 tar[2011]: linux-arm64/LICENSE Mar 7 00:54:46.699753 tar[2011]: linux-arm64/helm Mar 7 00:54:46.706696 systemd[1]: Started update-engine.service - Update Engine. Mar 7 00:54:46.721170 update_engine[1993]: I20260307 00:54:46.716017 1993 update_check_scheduler.cc:74] Next update check in 4m11s Mar 7 00:54:46.718493 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.752 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.752 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.752 INFO Fetch successful Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.752 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.752 INFO Fetch successful Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.752 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.752 INFO Fetch successful Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.752 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.752 INFO Fetch successful Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.752 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.753 INFO Fetch failed with 404: resource not found Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.753 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.753 INFO Fetch successful Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.753 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.753 INFO Fetch successful Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.753 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.753 INFO Fetch successful Mar 7 00:54:46.755182 coreos-metadata[1981]: Mar 07 00:54:46.753 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 7 00:54:46.762493 coreos-metadata[1981]: Mar 07 00:54:46.756 INFO Fetch successful Mar 7 00:54:46.762493 coreos-metadata[1981]: Mar 07 00:54:46.756 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 7 00:54:46.762493 coreos-metadata[1981]: Mar 07 00:54:46.756 INFO Fetch successful Mar 7 00:54:46.768003 extend-filesystems[1984]: Resized partition /dev/nvme0n1p9 Mar 7 00:54:46.776596 extend-filesystems[2038]: resize2fs 1.47.1 (20-May-2024) Mar 7 00:54:46.799082 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 7 00:54:46.824895 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 7 00:54:46.871283 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 00:54:46.874803 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 00:54:46.896318 systemd-logind[1991]: Watching system buttons on /dev/input/event0 (Power Button) Mar 7 00:54:46.896387 systemd-logind[1991]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 7 00:54:46.900287 systemd-logind[1991]: New seat seat0. 
Mar 7 00:54:46.915901 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 00:54:46.989183 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 7 00:54:46.993284 bash[2062]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:54:46.997031 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 00:54:47.035308 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1758) Mar 7 00:54:47.042020 extend-filesystems[2038]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 7 00:54:47.042020 extend-filesystems[2038]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 7 00:54:47.042020 extend-filesystems[2038]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 7 00:54:47.051676 extend-filesystems[1984]: Resized filesystem in /dev/nvme0n1p9 Mar 7 00:54:47.062988 systemd[1]: Starting sshkeys.service... Mar 7 00:54:47.066714 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 00:54:47.069301 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 00:54:47.142340 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 7 00:54:47.155791 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 7 00:54:47.316396 locksmithd[2030]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 00:54:47.330666 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 7 00:54:47.331692 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Mar 7 00:54:47.343345 dbus-daemon[1982]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2017 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 7 00:54:47.366877 systemd[1]: Starting polkit.service - Authorization Manager... Mar 7 00:54:47.391253 polkitd[2142]: Started polkitd version 121 Mar 7 00:54:47.473523 polkitd[2142]: Loading rules from directory /etc/polkit-1/rules.d Mar 7 00:54:47.473666 polkitd[2142]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 7 00:54:47.481836 polkitd[2142]: Finished loading, compiling and executing 2 rules Mar 7 00:54:47.488893 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 7 00:54:47.489671 systemd[1]: Started polkit.service - Authorization Manager. Mar 7 00:54:47.495017 polkitd[2142]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 7 00:54:47.503157 coreos-metadata[2103]: Mar 07 00:54:47.502 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 7 00:54:47.511250 coreos-metadata[2103]: Mar 07 00:54:47.511 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 7 00:54:47.512633 coreos-metadata[2103]: Mar 07 00:54:47.512 INFO Fetch successful Mar 7 00:54:47.512633 coreos-metadata[2103]: Mar 07 00:54:47.512 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 7 00:54:47.516684 coreos-metadata[2103]: Mar 07 00:54:47.516 INFO Fetch successful Mar 7 00:54:47.523885 unknown[2103]: wrote ssh authorized keys file for user: core Mar 7 00:54:47.570496 systemd-resolved[1935]: System hostname changed to 'ip-172-31-17-228'. 
Mar 7 00:54:47.570499 systemd-hostnamed[2017]: Hostname set to (transient) Mar 7 00:54:47.610270 ntpd[1986]: bind(24) AF_INET6 fe80::436:6bff:fe4b:d1f%2#123 flags 0x11 failed: Cannot assign requested address Mar 7 00:54:47.610341 ntpd[1986]: unable to create socket on eth0 (6) for fe80::436:6bff:fe4b:d1f%2#123 Mar 7 00:54:47.610372 ntpd[1986]: failed to init interface for address fe80::436:6bff:fe4b:d1f%2 Mar 7 00:54:47.614207 update-ssh-keys[2172]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:54:47.621370 containerd[2016]: time="2026-03-07T00:54:47.621243742Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 00:54:47.622656 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 00:54:47.637214 systemd[1]: Finished sshkeys.service. Mar 7 00:54:47.694659 containerd[2016]: time="2026-03-07T00:54:47.694543654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:47.697643 containerd[2016]: time="2026-03-07T00:54:47.697563874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:47.697828 containerd[2016]: time="2026-03-07T00:54:47.697796542Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Mar 7 00:54:47.697938 containerd[2016]: time="2026-03-07T00:54:47.697910686Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 00:54:47.698511 containerd[2016]: time="2026-03-07T00:54:47.698476738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 00:54:47.698655 containerd[2016]: time="2026-03-07T00:54:47.698627098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:47.698857 containerd[2016]: time="2026-03-07T00:54:47.698825962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:47.698978 containerd[2016]: time="2026-03-07T00:54:47.698950150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:47.699410 containerd[2016]: time="2026-03-07T00:54:47.699371830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:47.699525 containerd[2016]: time="2026-03-07T00:54:47.699498586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:47.700184 containerd[2016]: time="2026-03-07T00:54:47.699604258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:47.700184 containerd[2016]: time="2026-03-07T00:54:47.699636370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 7 00:54:47.700184 containerd[2016]: time="2026-03-07T00:54:47.699801838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:47.700694 containerd[2016]: time="2026-03-07T00:54:47.700660510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:47.701006 containerd[2016]: time="2026-03-07T00:54:47.700972258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:47.701360 containerd[2016]: time="2026-03-07T00:54:47.701328778Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 00:54:47.701786 containerd[2016]: time="2026-03-07T00:54:47.701624038Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 00:54:47.701786 containerd[2016]: time="2026-03-07T00:54:47.701728810Z" level=info msg="metadata content store policy set" policy=shared Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.718059958Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.718190326Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.718228342Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.718271614Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.718304362Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.718560670Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.719122450Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.719353138Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.719386354Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.719418082Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.719449618Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.719479642Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.719509210Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 00:54:47.720172 containerd[2016]: time="2026-03-07T00:54:47.719542522Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.719574802Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.719604694Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.719634562Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.719664430Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.719703574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.719740030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.719769754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.719800522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.719844634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.719876254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.719903878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.719933434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.719964394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.720809 containerd[2016]: time="2026-03-07T00:54:47.720004798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.721414 containerd[2016]: time="2026-03-07T00:54:47.720033394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.721414 containerd[2016]: time="2026-03-07T00:54:47.720084970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.726242 containerd[2016]: time="2026-03-07T00:54:47.720129106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.726242 containerd[2016]: time="2026-03-07T00:54:47.722384602Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 00:54:47.726242 containerd[2016]: time="2026-03-07T00:54:47.722462422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.726242 containerd[2016]: time="2026-03-07T00:54:47.722497510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.726242 containerd[2016]: time="2026-03-07T00:54:47.722549542Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 00:54:47.726242 containerd[2016]: time="2026-03-07T00:54:47.725284198Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Mar 7 00:54:47.726242 containerd[2016]: time="2026-03-07T00:54:47.725628874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 00:54:47.726242 containerd[2016]: time="2026-03-07T00:54:47.725687134Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 00:54:47.726242 containerd[2016]: time="2026-03-07T00:54:47.725719222Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 00:54:47.726242 containerd[2016]: time="2026-03-07T00:54:47.725767366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 00:54:47.726242 containerd[2016]: time="2026-03-07T00:54:47.725801086Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 00:54:47.726242 containerd[2016]: time="2026-03-07T00:54:47.725825686Z" level=info msg="NRI interface is disabled by configuration." Mar 7 00:54:47.726242 containerd[2016]: time="2026-03-07T00:54:47.725876530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 00:54:47.727687 containerd[2016]: time="2026-03-07T00:54:47.727534414Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 7 00:54:47.728127 containerd[2016]: time="2026-03-07T00:54:47.727984846Z" level=info msg="Connect containerd service"
Mar 7 00:54:47.728127 containerd[2016]: time="2026-03-07T00:54:47.728092042Z" level=info msg="using legacy CRI server"
Mar 7 00:54:47.728305 containerd[2016]: time="2026-03-07T00:54:47.728278354Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 00:54:47.728638 containerd[2016]: time="2026-03-07T00:54:47.728609026Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 7 00:54:47.732823 containerd[2016]: time="2026-03-07T00:54:47.732759586Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 00:54:47.733946 containerd[2016]: time="2026-03-07T00:54:47.733074982Z" level=info msg="Start subscribing containerd event"
Mar 7 00:54:47.733946 containerd[2016]: time="2026-03-07T00:54:47.733180570Z" level=info msg="Start recovering state"
Mar 7 00:54:47.733946 containerd[2016]: time="2026-03-07T00:54:47.733320910Z" level=info msg="Start event monitor"
Mar 7 00:54:47.733946 containerd[2016]: time="2026-03-07T00:54:47.733346770Z" level=info msg="Start snapshots syncer"
Mar 7 00:54:47.733946 containerd[2016]: time="2026-03-07T00:54:47.733367362Z" level=info msg="Start cni network conf syncer for default"
Mar 7 00:54:47.733946 containerd[2016]: time="2026-03-07T00:54:47.733386526Z" level=info msg="Start streaming server"
Mar 7 00:54:47.738170 containerd[2016]: time="2026-03-07T00:54:47.736823830Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 7 00:54:47.738170 containerd[2016]: time="2026-03-07T00:54:47.737003386Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 7 00:54:47.741612 containerd[2016]: time="2026-03-07T00:54:47.741563998Z" level=info msg="containerd successfully booted in 0.122747s"
Mar 7 00:54:47.741612 systemd[1]: Started containerd.service - containerd container runtime.
Mar 7 00:54:47.792350 systemd-networkd[1932]: eth0: Gained IPv6LL
Mar 7 00:54:47.800802 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 00:54:47.805015 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 00:54:47.817671 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 7 00:54:47.829532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:54:47.837027 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 00:54:47.940213 amazon-ssm-agent[2186]: Initializing new seelog logger
Mar 7 00:54:47.941335 amazon-ssm-agent[2186]: New Seelog Logger Creation Complete
Mar 7 00:54:47.942176 amazon-ssm-agent[2186]: 2026/03/07 00:54:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 00:54:47.942176 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 00:54:47.942497 amazon-ssm-agent[2186]: 2026/03/07 00:54:47 processing appconfig overrides
Mar 7 00:54:47.944767 amazon-ssm-agent[2186]: 2026/03/07 00:54:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 00:54:47.947752 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 00:54:47.947752 amazon-ssm-agent[2186]: 2026/03/07 00:54:47 processing appconfig overrides
Mar 7 00:54:47.947752 amazon-ssm-agent[2186]: 2026/03/07 00:54:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 00:54:47.947752 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 00:54:47.947752 amazon-ssm-agent[2186]: 2026/03/07 00:54:47 processing appconfig overrides
Mar 7 00:54:47.948491 amazon-ssm-agent[2186]: 2026-03-07 00:54:47 INFO Proxy environment variables:
Mar 7 00:54:47.952132 amazon-ssm-agent[2186]: 2026/03/07 00:54:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 00:54:47.952294 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 00:54:47.952620 amazon-ssm-agent[2186]: 2026/03/07 00:54:47 processing appconfig overrides
Mar 7 00:54:47.953996 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 00:54:48.051578 amazon-ssm-agent[2186]: 2026-03-07 00:54:47 INFO https_proxy:
Mar 7 00:54:48.149155 amazon-ssm-agent[2186]: 2026-03-07 00:54:47 INFO http_proxy:
Mar 7 00:54:48.264578 amazon-ssm-agent[2186]: 2026-03-07 00:54:47 INFO no_proxy:
Mar 7 00:54:48.363681 amazon-ssm-agent[2186]: 2026-03-07 00:54:47 INFO Checking if agent identity type OnPrem can be assumed
Mar 7 00:54:48.461830 amazon-ssm-agent[2186]: 2026-03-07 00:54:47 INFO Checking if agent identity type EC2 can be assumed
Mar 7 00:54:48.561126 amazon-ssm-agent[2186]: 2026-03-07 00:54:48 INFO Agent will take identity from EC2
Mar 7 00:54:48.660077 amazon-ssm-agent[2186]: 2026-03-07 00:54:48 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 7 00:54:48.759497 amazon-ssm-agent[2186]: 2026-03-07 00:54:48 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 7 00:54:48.775293 tar[2011]: linux-arm64/README.md
Mar 7 00:54:48.818895 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 00:54:48.858933 amazon-ssm-agent[2186]: 2026-03-07 00:54:48 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 7 00:54:48.922541 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 00:54:48.958234 amazon-ssm-agent[2186]: 2026-03-07 00:54:48 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Mar 7 00:54:49.060261 amazon-ssm-agent[2186]: 2026-03-07 00:54:48 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Mar 7 00:54:49.160533 amazon-ssm-agent[2186]: 2026-03-07 00:54:48 INFO [amazon-ssm-agent] Starting Core Agent
Mar 7 00:54:49.260868 amazon-ssm-agent[2186]: 2026-03-07 00:54:48 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Mar 7 00:54:49.361048 amazon-ssm-agent[2186]: 2026-03-07 00:54:48 INFO [Registrar] Starting registrar module
Mar 7 00:54:49.461973 amazon-ssm-agent[2186]: 2026-03-07 00:54:48 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Mar 7 00:54:49.740952 amazon-ssm-agent[2186]: 2026-03-07 00:54:49 INFO [EC2Identity] EC2 registration was successful.
Mar 7 00:54:49.772666 amazon-ssm-agent[2186]: 2026-03-07 00:54:49 INFO [CredentialRefresher] credentialRefresher has started
Mar 7 00:54:49.772666 amazon-ssm-agent[2186]: 2026-03-07 00:54:49 INFO [CredentialRefresher] Starting credentials refresher loop
Mar 7 00:54:49.772848 amazon-ssm-agent[2186]: 2026-03-07 00:54:49 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Mar 7 00:54:49.842238 amazon-ssm-agent[2186]: 2026-03-07 00:54:49 INFO [CredentialRefresher] Next credential rotation will be in 30.791659283866668 minutes
Mar 7 00:54:50.186832 sshd_keygen[2007]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 00:54:50.229819 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 00:54:50.240695 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 00:54:50.256549 systemd[1]: Started sshd@0-172.31.17.228:22-20.161.92.111:53408.service - OpenSSH per-connection server daemon (20.161.92.111:53408).
Mar 7 00:54:50.271977 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 00:54:50.276315 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 00:54:50.288602 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 00:54:50.327259 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 00:54:50.338861 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 00:54:50.345428 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 7 00:54:50.349608 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 00:54:50.425591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:54:50.433537 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 7 00:54:50.438934 systemd[1]: Startup finished in 1.169s (kernel) + 7.617s (initrd) + 9.059s (userspace) = 17.845s.
Mar 7 00:54:50.440825 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:54:50.610305 ntpd[1986]: Listen normally on 7 eth0 [fe80::436:6bff:fe4b:d1f%2]:123
Mar 7 00:54:50.610740 ntpd[1986]: 7 Mar 00:54:50 ntpd[1986]: Listen normally on 7 eth0 [fe80::436:6bff:fe4b:d1f%2]:123
Mar 7 00:54:50.795482 sshd[2218]: Accepted publickey for core from 20.161.92.111 port 53408 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:54:50.798451 sshd[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:54:50.806793 amazon-ssm-agent[2186]: 2026-03-07 00:54:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Mar 7 00:54:50.820434 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 7 00:54:50.833366 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 7 00:54:50.842212 systemd-logind[1991]: New session 1 of user core.
Mar 7 00:54:50.882893 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 7 00:54:50.901377 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 7 00:54:50.907088 amazon-ssm-agent[2186]: 2026-03-07 00:54:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2242) started
Mar 7 00:54:50.928912 (systemd)[2248]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 7 00:54:51.008655 amazon-ssm-agent[2186]: 2026-03-07 00:54:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Mar 7 00:54:51.196188 systemd[2248]: Queued start job for default target default.target.
Mar 7 00:54:51.204311 systemd[2248]: Created slice app.slice - User Application Slice.
Mar 7 00:54:51.204373 systemd[2248]: Reached target paths.target - Paths.
Mar 7 00:54:51.204406 systemd[2248]: Reached target timers.target - Timers.
Mar 7 00:54:51.213420 systemd[2248]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 7 00:54:51.231603 systemd[2248]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 7 00:54:51.231858 systemd[2248]: Reached target sockets.target - Sockets.
Mar 7 00:54:51.231906 systemd[2248]: Reached target basic.target - Basic System.
Mar 7 00:54:51.231986 systemd[2248]: Reached target default.target - Main User Target.
Mar 7 00:54:51.232068 systemd[2248]: Startup finished in 287ms.
Mar 7 00:54:51.232354 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 7 00:54:51.245493 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 7 00:54:51.584617 kubelet[2232]: E0307 00:54:51.584398 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:54:51.589594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:54:51.589925 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:54:51.590541 systemd[1]: kubelet.service: Consumed 1.274s CPU time.
Mar 7 00:54:51.627795 systemd[1]: Started sshd@1-172.31.17.228:22-20.161.92.111:42582.service - OpenSSH per-connection server daemon (20.161.92.111:42582).
Mar 7 00:54:52.133633 sshd[2267]: Accepted publickey for core from 20.161.92.111 port 42582 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:54:52.136747 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:54:52.144933 systemd-logind[1991]: New session 2 of user core.
Mar 7 00:54:52.152397 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 7 00:54:52.494367 sshd[2267]: pam_unix(sshd:session): session closed for user core
Mar 7 00:54:52.499812 systemd[1]: sshd@1-172.31.17.228:22-20.161.92.111:42582.service: Deactivated successfully.
Mar 7 00:54:52.503327 systemd[1]: session-2.scope: Deactivated successfully.
Mar 7 00:54:52.506604 systemd-logind[1991]: Session 2 logged out. Waiting for processes to exit.
Mar 7 00:54:52.508942 systemd-logind[1991]: Removed session 2.
Mar 7 00:54:52.593746 systemd[1]: Started sshd@2-172.31.17.228:22-20.161.92.111:42586.service - OpenSSH per-connection server daemon (20.161.92.111:42586).
Mar 7 00:54:53.090284 sshd[2274]: Accepted publickey for core from 20.161.92.111 port 42586 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:54:53.093378 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:54:53.106876 systemd-logind[1991]: New session 3 of user core.
Mar 7 00:54:53.111435 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 7 00:54:53.440459 sshd[2274]: pam_unix(sshd:session): session closed for user core
Mar 7 00:54:53.447706 systemd-logind[1991]: Session 3 logged out. Waiting for processes to exit.
Mar 7 00:54:53.450075 systemd[1]: sshd@2-172.31.17.228:22-20.161.92.111:42586.service: Deactivated successfully.
Mar 7 00:54:53.453788 systemd[1]: session-3.scope: Deactivated successfully.
Mar 7 00:54:53.456556 systemd-logind[1991]: Removed session 3.
Mar 7 00:54:53.538750 systemd[1]: Started sshd@3-172.31.17.228:22-20.161.92.111:42602.service - OpenSSH per-connection server daemon (20.161.92.111:42602).
Mar 7 00:54:53.392577 systemd-resolved[1935]: Clock change detected. Flushing caches.
Mar 7 00:54:53.403821 systemd-journald[1575]: Time jumped backwards, rotating.
Mar 7 00:54:53.828894 sshd[2281]: Accepted publickey for core from 20.161.92.111 port 42602 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:54:53.831517 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:54:53.838802 systemd-logind[1991]: New session 4 of user core.
Mar 7 00:54:53.852802 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 7 00:54:54.188626 sshd[2281]: pam_unix(sshd:session): session closed for user core
Mar 7 00:54:54.194354 systemd[1]: sshd@3-172.31.17.228:22-20.161.92.111:42602.service: Deactivated successfully.
Mar 7 00:54:54.194906 systemd-logind[1991]: Session 4 logged out. Waiting for processes to exit.
Mar 7 00:54:54.197920 systemd[1]: session-4.scope: Deactivated successfully.
Mar 7 00:54:54.202098 systemd-logind[1991]: Removed session 4.
Mar 7 00:54:54.286834 systemd[1]: Started sshd@4-172.31.17.228:22-20.161.92.111:42618.service - OpenSSH per-connection server daemon (20.161.92.111:42618).
Mar 7 00:54:54.782523 sshd[2289]: Accepted publickey for core from 20.161.92.111 port 42618 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:54:54.785187 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:54:54.793400 systemd-logind[1991]: New session 5 of user core.
Mar 7 00:54:54.800633 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 7 00:54:55.079305 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 7 00:54:55.080588 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:54:55.096470 sudo[2292]: pam_unix(sudo:session): session closed for user root
Mar 7 00:54:55.175217 sshd[2289]: pam_unix(sshd:session): session closed for user core
Mar 7 00:54:55.183282 systemd[1]: sshd@4-172.31.17.228:22-20.161.92.111:42618.service: Deactivated successfully.
Mar 7 00:54:55.187023 systemd[1]: session-5.scope: Deactivated successfully.
Mar 7 00:54:55.188812 systemd-logind[1991]: Session 5 logged out. Waiting for processes to exit.
Mar 7 00:54:55.190885 systemd-logind[1991]: Removed session 5.
Mar 7 00:54:55.272873 systemd[1]: Started sshd@5-172.31.17.228:22-20.161.92.111:42632.service - OpenSSH per-connection server daemon (20.161.92.111:42632).
Mar 7 00:54:55.773852 sshd[2297]: Accepted publickey for core from 20.161.92.111 port 42632 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:54:55.776540 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:54:55.783742 systemd-logind[1991]: New session 6 of user core.
Mar 7 00:54:55.791654 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 7 00:54:56.056005 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 7 00:54:56.056686 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:54:56.062913 sudo[2301]: pam_unix(sudo:session): session closed for user root
Mar 7 00:54:56.072947 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 7 00:54:56.073630 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:54:56.096910 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 7 00:54:56.102227 auditctl[2304]: No rules
Mar 7 00:54:56.104042 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 7 00:54:56.104510 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 7 00:54:56.115219 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 00:54:56.158248 augenrules[2322]: No rules
Mar 7 00:54:56.162491 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 00:54:56.166139 sudo[2300]: pam_unix(sudo:session): session closed for user root
Mar 7 00:54:56.244273 sshd[2297]: pam_unix(sshd:session): session closed for user core
Mar 7 00:54:56.250634 systemd-logind[1991]: Session 6 logged out. Waiting for processes to exit.
Mar 7 00:54:56.251818 systemd[1]: sshd@5-172.31.17.228:22-20.161.92.111:42632.service: Deactivated successfully.
Mar 7 00:54:56.256686 systemd[1]: session-6.scope: Deactivated successfully.
Mar 7 00:54:56.258478 systemd-logind[1991]: Removed session 6.
Mar 7 00:54:56.333547 systemd[1]: Started sshd@6-172.31.17.228:22-20.161.92.111:42640.service - OpenSSH per-connection server daemon (20.161.92.111:42640).
Mar 7 00:54:56.843951 sshd[2330]: Accepted publickey for core from 20.161.92.111 port 42640 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:54:56.846565 sshd[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:54:56.856476 systemd-logind[1991]: New session 7 of user core.
Mar 7 00:54:56.865691 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 7 00:54:57.125644 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 7 00:54:57.126296 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:54:57.640810 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 7 00:54:57.641120 (dockerd)[2348]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 7 00:54:58.055535 dockerd[2348]: time="2026-03-07T00:54:58.055099655Z" level=info msg="Starting up"
Mar 7 00:54:58.193991 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1594254104-merged.mount: Deactivated successfully.
Mar 7 00:54:58.216837 dockerd[2348]: time="2026-03-07T00:54:58.216434675Z" level=info msg="Loading containers: start."
Mar 7 00:54:58.377425 kernel: Initializing XFRM netlink socket
Mar 7 00:54:58.419443 (udev-worker)[2371]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 00:54:58.503263 systemd-networkd[1932]: docker0: Link UP
Mar 7 00:54:58.538897 dockerd[2348]: time="2026-03-07T00:54:58.538728589Z" level=info msg="Loading containers: done."
Mar 7 00:54:58.563543 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4156216528-merged.mount: Deactivated successfully.
Mar 7 00:54:58.571946 dockerd[2348]: time="2026-03-07T00:54:58.571207369Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 7 00:54:58.571946 dockerd[2348]: time="2026-03-07T00:54:58.571354789Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 7 00:54:58.571946 dockerd[2348]: time="2026-03-07T00:54:58.571565065Z" level=info msg="Daemon has completed initialization"
Mar 7 00:54:58.654694 dockerd[2348]: time="2026-03-07T00:54:58.653755070Z" level=info msg="API listen on /run/docker.sock"
Mar 7 00:54:58.656660 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 7 00:54:59.707519 containerd[2016]: time="2026-03-07T00:54:59.706961595Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 7 00:55:00.399441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2006526970.mount: Deactivated successfully.
Mar 7 00:55:01.557686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 7 00:55:01.572119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:55:01.912146 containerd[2016]: time="2026-03-07T00:55:01.911977590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:01.917766 containerd[2016]: time="2026-03-07T00:55:01.917555874Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=24583252"
Mar 7 00:55:01.921427 containerd[2016]: time="2026-03-07T00:55:01.921295926Z" level=info msg="ImageCreate event name:\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:01.933427 containerd[2016]: time="2026-03-07T00:55:01.932792682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:01.936731 containerd[2016]: time="2026-03-07T00:55:01.936663750Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"24579851\" in 2.229640163s"
Mar 7 00:55:01.936941 containerd[2016]: time="2026-03-07T00:55:01.936903354Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\""
Mar 7 00:55:01.938491 containerd[2016]: time="2026-03-07T00:55:01.938415042Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 7 00:55:01.989472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:55:02.007863 (kubelet)[2553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:55:02.085495 kubelet[2553]: E0307 00:55:02.085362 2553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:55:02.093513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:55:02.094075 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:55:03.376824 containerd[2016]: time="2026-03-07T00:55:03.376742693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:03.378831 containerd[2016]: time="2026-03-07T00:55:03.378767297Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=19139641"
Mar 7 00:55:03.379027 containerd[2016]: time="2026-03-07T00:55:03.378980429Z" level=info msg="ImageCreate event name:\"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:03.385314 containerd[2016]: time="2026-03-07T00:55:03.385225301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:03.388318 containerd[2016]: time="2026-03-07T00:55:03.387862733Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"20724045\" in 1.448938183s"
Mar 7 00:55:03.388318 containerd[2016]: time="2026-03-07T00:55:03.387926261Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\""
Mar 7 00:55:03.388994 containerd[2016]: time="2026-03-07T00:55:03.388944509Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 7 00:55:04.591515 containerd[2016]: time="2026-03-07T00:55:04.591439003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:04.593985 containerd[2016]: time="2026-03-07T00:55:04.593745427Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=14195544"
Mar 7 00:55:04.597400 containerd[2016]: time="2026-03-07T00:55:04.596182543Z" level=info msg="ImageCreate event name:\"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:04.602609 containerd[2016]: time="2026-03-07T00:55:04.602544535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:04.604983 containerd[2016]: time="2026-03-07T00:55:04.604934251Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"15779966\" in 1.215930378s"
Mar 7 00:55:04.605118 containerd[2016]: time="2026-03-07T00:55:04.605088907Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\""
Mar 7 00:55:04.605929 containerd[2016]: time="2026-03-07T00:55:04.605858347Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 7 00:55:05.922757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1002787086.mount: Deactivated successfully.
Mar 7 00:55:06.309561 containerd[2016]: time="2026-03-07T00:55:06.309488636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:06.312654 containerd[2016]: time="2026-03-07T00:55:06.312254780Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=22697088"
Mar 7 00:55:06.315419 containerd[2016]: time="2026-03-07T00:55:06.314913056Z" level=info msg="ImageCreate event name:\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:06.319855 containerd[2016]: time="2026-03-07T00:55:06.319777436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:06.321601 containerd[2016]: time="2026-03-07T00:55:06.321129944Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"22696107\" in 1.715020089s"
Mar 7 00:55:06.321601 containerd[2016]: time="2026-03-07T00:55:06.321190484Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\""
Mar 7 00:55:06.321938 containerd[2016]: time="2026-03-07T00:55:06.321884636Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 7 00:55:06.868447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494913833.mount: Deactivated successfully.
Mar 7 00:55:08.131828 containerd[2016]: time="2026-03-07T00:55:08.131741829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:08.134384 containerd[2016]: time="2026-03-07T00:55:08.134318229Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Mar 7 00:55:08.137238 containerd[2016]: time="2026-03-07T00:55:08.136474941Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:08.145980 containerd[2016]: time="2026-03-07T00:55:08.145120389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:08.147562 containerd[2016]: time="2026-03-07T00:55:08.147499065Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.825553541s"
Mar 7 00:55:08.147671 containerd[2016]: time="2026-03-07T00:55:08.147562917Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Mar 7 00:55:08.149118 containerd[2016]: time="2026-03-07T00:55:08.149060433Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 7 00:55:08.641899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141544292.mount: Deactivated successfully.
Mar 7 00:55:08.654961 containerd[2016]: time="2026-03-07T00:55:08.654878507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:08.657205 containerd[2016]: time="2026-03-07T00:55:08.656810567Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Mar 7 00:55:08.661408 containerd[2016]: time="2026-03-07T00:55:08.659407271Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:08.664710 containerd[2016]: time="2026-03-07T00:55:08.664657535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:08.666251 containerd[2016]: time="2026-03-07T00:55:08.666193199Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 517.057706ms"
Mar 7 00:55:08.666364 containerd[2016]: time="2026-03-07T00:55:08.666248411Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Mar 7 00:55:08.667023 containerd[2016]: time="2026-03-07T00:55:08.666960347Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 7 00:55:09.248764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963129049.mount: Deactivated successfully.
Mar 7 00:55:10.626393 containerd[2016]: time="2026-03-07T00:55:10.626290741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:10.629077 containerd[2016]: time="2026-03-07T00:55:10.629014129Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21125515"
Mar 7 00:55:10.631130 containerd[2016]: time="2026-03-07T00:55:10.631036765Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:10.639873 containerd[2016]: time="2026-03-07T00:55:10.639795757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:55:10.643422 containerd[2016]: time="2026-03-07T00:55:10.642114061Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.975095466s"
Mar 7 00:55:10.643422 containerd[2016]: time="2026-03-07T00:55:10.642184453Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
Mar 7 00:55:12.307345 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 00:55:12.316869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:55:12.657765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:55:12.668895 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:55:12.741420 kubelet[2724]: E0307 00:55:12.739293 2724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:55:12.743764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:55:12.744089 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:55:17.390036 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 7 00:55:17.596649 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:55:17.607269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:55:17.676979 systemd[1]: Reloading requested from client PID 2741 ('systemctl') (unit session-7.scope)... Mar 7 00:55:17.677251 systemd[1]: Reloading... Mar 7 00:55:17.896419 zram_generator::config[2781]: No configuration found. Mar 7 00:55:18.154979 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:55:18.327649 systemd[1]: Reloading finished in 649 ms. Mar 7 00:55:18.425743 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 00:55:18.425973 systemd[1]: kubelet.service: Failed with result 'signal'. 
Mar 7 00:55:18.426667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:55:18.434063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:55:18.763533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:55:18.778915 (kubelet)[2845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 00:55:18.852562 kubelet[2845]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 00:55:18.853071 kubelet[2845]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:55:18.853365 kubelet[2845]: I0307 00:55:18.853317 2845 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 00:55:20.127201 kubelet[2845]: I0307 00:55:20.127129 2845 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 00:55:20.127201 kubelet[2845]: I0307 00:55:20.127180 2845 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 00:55:20.127840 kubelet[2845]: I0307 00:55:20.127231 2845 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 00:55:20.127840 kubelet[2845]: I0307 00:55:20.127246 2845 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 00:55:20.128031 kubelet[2845]: I0307 00:55:20.127979 2845 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 00:55:20.143071 kubelet[2845]: E0307 00:55:20.143012 2845 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.228:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 00:55:20.146826 kubelet[2845]: I0307 00:55:20.146596 2845 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 00:55:20.151844 kubelet[2845]: E0307 00:55:20.151790 2845 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 00:55:20.151989 kubelet[2845]: I0307 00:55:20.151882 2845 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 00:55:20.157406 kubelet[2845]: I0307 00:55:20.157314 2845 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 00:55:20.158744 kubelet[2845]: I0307 00:55:20.157955 2845 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 00:55:20.158744 kubelet[2845]: I0307 00:55:20.157996 2845 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-228","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 00:55:20.158744 kubelet[2845]: I0307 00:55:20.158264 2845 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 
00:55:20.158744 kubelet[2845]: I0307 00:55:20.158282 2845 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 00:55:20.159058 kubelet[2845]: I0307 00:55:20.158457 2845 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 00:55:20.162463 kubelet[2845]: I0307 00:55:20.162077 2845 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:55:20.164524 kubelet[2845]: I0307 00:55:20.164496 2845 kubelet.go:475] "Attempting to sync node with API server" Mar 7 00:55:20.164646 kubelet[2845]: I0307 00:55:20.164626 2845 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 00:55:20.164803 kubelet[2845]: I0307 00:55:20.164784 2845 kubelet.go:387] "Adding apiserver pod source" Mar 7 00:55:20.164931 kubelet[2845]: I0307 00:55:20.164912 2845 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 00:55:20.167362 kubelet[2845]: E0307 00:55:20.167303 2845 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-228&limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 00:55:20.168408 kubelet[2845]: I0307 00:55:20.167739 2845 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 00:55:20.168832 kubelet[2845]: I0307 00:55:20.168804 2845 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 00:55:20.168975 kubelet[2845]: I0307 00:55:20.168953 2845 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 00:55:20.169125 kubelet[2845]: W0307 00:55:20.169106 2845 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 00:55:20.173661 kubelet[2845]: I0307 00:55:20.173632 2845 server.go:1262] "Started kubelet" Mar 7 00:55:20.174156 kubelet[2845]: E0307 00:55:20.174102 2845 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 00:55:20.178762 kubelet[2845]: I0307 00:55:20.178689 2845 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 00:55:20.180291 kubelet[2845]: I0307 00:55:20.180237 2845 server.go:310] "Adding debug handlers to kubelet server" Mar 7 00:55:20.180718 kubelet[2845]: I0307 00:55:20.180642 2845 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 00:55:20.180867 kubelet[2845]: I0307 00:55:20.180843 2845 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 00:55:20.188474 kubelet[2845]: I0307 00:55:20.188015 2845 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 00:55:20.188474 kubelet[2845]: E0307 00:55:20.186239 2845 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.228:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.228:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-228.189a690fdccd6f95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-228,UID:ip-172-31-17-228,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-228,},FirstTimestamp:2026-03-07 00:55:20.173588373 +0000 UTC m=+1.387924256,LastTimestamp:2026-03-07 00:55:20.173588373 +0000 UTC m=+1.387924256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-228,}" Mar 7 00:55:20.191429 kubelet[2845]: I0307 00:55:20.190778 2845 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 00:55:20.197611 kubelet[2845]: I0307 00:55:20.197567 2845 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 00:55:20.201407 kubelet[2845]: I0307 00:55:20.201327 2845 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 00:55:20.201818 kubelet[2845]: E0307 00:55:20.201773 2845 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-17-228\" not found" Mar 7 00:55:20.202546 kubelet[2845]: I0307 00:55:20.202514 2845 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 00:55:20.203989 kubelet[2845]: I0307 00:55:20.202986 2845 reconciler.go:29] "Reconciler: start to sync state" Mar 7 00:55:20.205128 kubelet[2845]: E0307 00:55:20.204653 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-228?timeout=10s\": dial tcp 172.31.17.228:6443: connect: connection refused" interval="200ms" Mar 7 00:55:20.205128 kubelet[2845]: E0307 00:55:20.204873 2845 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 
00:55:20.205775 kubelet[2845]: I0307 00:55:20.205721 2845 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 00:55:20.210133 kubelet[2845]: E0307 00:55:20.209750 2845 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 00:55:20.210133 kubelet[2845]: I0307 00:55:20.209959 2845 factory.go:223] Registration of the containerd container factory successfully Mar 7 00:55:20.210133 kubelet[2845]: I0307 00:55:20.209978 2845 factory.go:223] Registration of the systemd container factory successfully Mar 7 00:55:20.240493 kubelet[2845]: I0307 00:55:20.240154 2845 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 00:55:20.243320 kubelet[2845]: I0307 00:55:20.242714 2845 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 00:55:20.243320 kubelet[2845]: I0307 00:55:20.242752 2845 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 00:55:20.243320 kubelet[2845]: I0307 00:55:20.242789 2845 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 00:55:20.243320 kubelet[2845]: E0307 00:55:20.242858 2845 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 00:55:20.248071 kubelet[2845]: E0307 00:55:20.247973 2845 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 00:55:20.257740 kubelet[2845]: I0307 00:55:20.257690 2845 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 00:55:20.257740 kubelet[2845]: I0307 00:55:20.257726 2845 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 00:55:20.257940 kubelet[2845]: I0307 00:55:20.257762 2845 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:55:20.261576 kubelet[2845]: I0307 00:55:20.261538 2845 policy_none.go:49] "None policy: Start" Mar 7 00:55:20.261576 kubelet[2845]: I0307 00:55:20.261577 2845 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 00:55:20.261767 kubelet[2845]: I0307 00:55:20.261604 2845 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 00:55:20.264806 kubelet[2845]: I0307 00:55:20.264766 2845 policy_none.go:47] "Start" Mar 7 00:55:20.272443 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 00:55:20.288412 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 7 00:55:20.296469 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 7 00:55:20.302703 kubelet[2845]: E0307 00:55:20.302657 2845 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-17-228\" not found" Mar 7 00:55:20.309023 kubelet[2845]: E0307 00:55:20.308787 2845 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 00:55:20.310023 kubelet[2845]: I0307 00:55:20.309735 2845 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 00:55:20.310350 kubelet[2845]: I0307 00:55:20.309764 2845 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 00:55:20.311280 kubelet[2845]: I0307 00:55:20.311118 2845 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 00:55:20.317046 kubelet[2845]: E0307 00:55:20.316908 2845 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 00:55:20.317046 kubelet[2845]: E0307 00:55:20.316978 2845 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-228\" not found" Mar 7 00:55:20.367783 systemd[1]: Created slice kubepods-burstable-pod6eda7cda7d5c59363b1b6408a76a9ef8.slice - libcontainer container kubepods-burstable-pod6eda7cda7d5c59363b1b6408a76a9ef8.slice. Mar 7 00:55:20.388020 kubelet[2845]: E0307 00:55:20.387621 2845 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Mar 7 00:55:20.396544 systemd[1]: Created slice kubepods-burstable-poda9335f14a628062281164641626ad4a0.slice - libcontainer container kubepods-burstable-poda9335f14a628062281164641626ad4a0.slice. 
Mar 7 00:55:20.401415 kubelet[2845]: E0307 00:55:20.400953 2845 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Mar 7 00:55:20.406806 kubelet[2845]: I0307 00:55:20.405886 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9335f14a628062281164641626ad4a0-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"a9335f14a628062281164641626ad4a0\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Mar 7 00:55:20.406806 kubelet[2845]: I0307 00:55:20.405946 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9335f14a628062281164641626ad4a0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"a9335f14a628062281164641626ad4a0\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Mar 7 00:55:20.406806 kubelet[2845]: I0307 00:55:20.406008 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6eda7cda7d5c59363b1b6408a76a9ef8-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-228\" (UID: \"6eda7cda7d5c59363b1b6408a76a9ef8\") " pod="kube-system/kube-apiserver-ip-172-31-17-228" Mar 7 00:55:20.406806 kubelet[2845]: I0307 00:55:20.406044 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a9335f14a628062281164641626ad4a0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"a9335f14a628062281164641626ad4a0\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Mar 7 00:55:20.406806 kubelet[2845]: I0307 00:55:20.406087 2845 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9335f14a628062281164641626ad4a0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"a9335f14a628062281164641626ad4a0\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Mar 7 00:55:20.407136 kubelet[2845]: I0307 00:55:20.406147 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9335f14a628062281164641626ad4a0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"a9335f14a628062281164641626ad4a0\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Mar 7 00:55:20.407136 kubelet[2845]: I0307 00:55:20.406187 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c69048d6d7d902d081f32290597248ea-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-228\" (UID: \"c69048d6d7d902d081f32290597248ea\") " pod="kube-system/kube-scheduler-ip-172-31-17-228" Mar 7 00:55:20.407136 kubelet[2845]: I0307 00:55:20.406221 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6eda7cda7d5c59363b1b6408a76a9ef8-ca-certs\") pod \"kube-apiserver-ip-172-31-17-228\" (UID: \"6eda7cda7d5c59363b1b6408a76a9ef8\") " pod="kube-system/kube-apiserver-ip-172-31-17-228" Mar 7 00:55:20.407136 kubelet[2845]: I0307 00:55:20.406259 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6eda7cda7d5c59363b1b6408a76a9ef8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-228\" (UID: \"6eda7cda7d5c59363b1b6408a76a9ef8\") " pod="kube-system/kube-apiserver-ip-172-31-17-228" Mar 7 
00:55:20.407210 systemd[1]: Created slice kubepods-burstable-podc69048d6d7d902d081f32290597248ea.slice - libcontainer container kubepods-burstable-podc69048d6d7d902d081f32290597248ea.slice. Mar 7 00:55:20.408859 kubelet[2845]: E0307 00:55:20.408811 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-228?timeout=10s\": dial tcp 172.31.17.228:6443: connect: connection refused" interval="400ms" Mar 7 00:55:20.412007 kubelet[2845]: E0307 00:55:20.411216 2845 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Mar 7 00:55:20.413337 kubelet[2845]: I0307 00:55:20.413291 2845 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-228" Mar 7 00:55:20.414006 kubelet[2845]: E0307 00:55:20.413960 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.228:6443/api/v1/nodes\": dial tcp 172.31.17.228:6443: connect: connection refused" node="ip-172-31-17-228" Mar 7 00:55:20.616773 kubelet[2845]: I0307 00:55:20.616278 2845 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-228" Mar 7 00:55:20.617100 kubelet[2845]: E0307 00:55:20.617061 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.228:6443/api/v1/nodes\": dial tcp 172.31.17.228:6443: connect: connection refused" node="ip-172-31-17-228" Mar 7 00:55:20.694704 containerd[2016]: time="2026-03-07T00:55:20.694141127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-228,Uid:6eda7cda7d5c59363b1b6408a76a9ef8,Namespace:kube-system,Attempt:0,}" Mar 7 00:55:20.706148 containerd[2016]: time="2026-03-07T00:55:20.705728759Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-228,Uid:a9335f14a628062281164641626ad4a0,Namespace:kube-system,Attempt:0,}" Mar 7 00:55:20.716756 containerd[2016]: time="2026-03-07T00:55:20.716706875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-228,Uid:c69048d6d7d902d081f32290597248ea,Namespace:kube-system,Attempt:0,}" Mar 7 00:55:20.810333 kubelet[2845]: E0307 00:55:20.809451 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-228?timeout=10s\": dial tcp 172.31.17.228:6443: connect: connection refused" interval="800ms" Mar 7 00:55:21.020734 kubelet[2845]: I0307 00:55:21.020528 2845 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-228" Mar 7 00:55:21.021064 kubelet[2845]: E0307 00:55:21.020999 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.228:6443/api/v1/nodes\": dial tcp 172.31.17.228:6443: connect: connection refused" node="ip-172-31-17-228" Mar 7 00:55:21.191559 kubelet[2845]: E0307 00:55:21.191498 2845 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 00:55:21.295105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2588926742.mount: Deactivated successfully. 
Mar 7 00:55:21.303897 kubelet[2845]: E0307 00:55:21.303824 2845 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 00:55:21.316222 containerd[2016]: time="2026-03-07T00:55:21.316130806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:55:21.318496 containerd[2016]: time="2026-03-07T00:55:21.318426622Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:55:21.320763 containerd[2016]: time="2026-03-07T00:55:21.320707462Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 7 00:55:21.323252 containerd[2016]: time="2026-03-07T00:55:21.323195962Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 00:55:21.326429 containerd[2016]: time="2026-03-07T00:55:21.326212534Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:55:21.330435 containerd[2016]: time="2026-03-07T00:55:21.329951314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 00:55:21.330960 containerd[2016]: time="2026-03-07T00:55:21.330902086Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:55:21.333451 containerd[2016]: time="2026-03-07T00:55:21.333333682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:55:21.336542 containerd[2016]: time="2026-03-07T00:55:21.335307394Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 629.474055ms" Mar 7 00:55:21.342978 containerd[2016]: time="2026-03-07T00:55:21.342920602Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 625.776243ms" Mar 7 00:55:21.364462 containerd[2016]: time="2026-03-07T00:55:21.364345726Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 670.095531ms" Mar 7 00:55:21.530868 containerd[2016]: time="2026-03-07T00:55:21.530639315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:55:21.531528 containerd[2016]: time="2026-03-07T00:55:21.530985203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:55:21.531528 containerd[2016]: time="2026-03-07T00:55:21.531184667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:21.531528 containerd[2016]: time="2026-03-07T00:55:21.531457463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:21.532308 containerd[2016]: time="2026-03-07T00:55:21.532172855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:55:21.532446 containerd[2016]: time="2026-03-07T00:55:21.532289711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:55:21.532446 containerd[2016]: time="2026-03-07T00:55:21.532328711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:21.532667 containerd[2016]: time="2026-03-07T00:55:21.532519535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:21.541290 containerd[2016]: time="2026-03-07T00:55:21.541097339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:55:21.541290 containerd[2016]: time="2026-03-07T00:55:21.541222907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:55:21.541290 containerd[2016]: time="2026-03-07T00:55:21.541275779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:21.541700 containerd[2016]: time="2026-03-07T00:55:21.541509695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:21.579999 systemd[1]: Started cri-containerd-8888a74e5c15046e425440c307f25ada5d8e599fda8f07e5f97c9d21944eb156.scope - libcontainer container 8888a74e5c15046e425440c307f25ada5d8e599fda8f07e5f97c9d21944eb156. Mar 7 00:55:21.604709 systemd[1]: Started cri-containerd-09fa01ab3a413ade05051a3a794c1e07d50868535548153f4d8d77ad381b81c7.scope - libcontainer container 09fa01ab3a413ade05051a3a794c1e07d50868535548153f4d8d77ad381b81c7. Mar 7 00:55:21.608319 systemd[1]: Started cri-containerd-964de55bd97423e88751fbd629f7d4cddd72c148848fc4e0aaaded5c14ac668c.scope - libcontainer container 964de55bd97423e88751fbd629f7d4cddd72c148848fc4e0aaaded5c14ac668c. Mar 7 00:55:21.613807 kubelet[2845]: E0307 00:55:21.613733 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-228?timeout=10s\": dial tcp 172.31.17.228:6443: connect: connection refused" interval="1.6s" Mar 7 00:55:21.683934 kubelet[2845]: E0307 00:55:21.683852 2845 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 00:55:21.707087 kubelet[2845]: E0307 00:55:21.706940 2845 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-228&limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 00:55:21.715208 containerd[2016]: time="2026-03-07T00:55:21.715151712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-228,Uid:6eda7cda7d5c59363b1b6408a76a9ef8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8888a74e5c15046e425440c307f25ada5d8e599fda8f07e5f97c9d21944eb156\"" Mar 7 00:55:21.734468 containerd[2016]: time="2026-03-07T00:55:21.733514784Z" level=info msg="CreateContainer within sandbox \"8888a74e5c15046e425440c307f25ada5d8e599fda8f07e5f97c9d21944eb156\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 00:55:21.738758 containerd[2016]: time="2026-03-07T00:55:21.738501600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-228,Uid:a9335f14a628062281164641626ad4a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"09fa01ab3a413ade05051a3a794c1e07d50868535548153f4d8d77ad381b81c7\"" Mar 7 00:55:21.747331 containerd[2016]: time="2026-03-07T00:55:21.747164472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-228,Uid:c69048d6d7d902d081f32290597248ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"964de55bd97423e88751fbd629f7d4cddd72c148848fc4e0aaaded5c14ac668c\"" Mar 7 00:55:21.752255 containerd[2016]: time="2026-03-07T00:55:21.752190960Z" level=info msg="CreateContainer within sandbox \"09fa01ab3a413ade05051a3a794c1e07d50868535548153f4d8d77ad381b81c7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 00:55:21.756938 containerd[2016]: time="2026-03-07T00:55:21.756710736Z" level=info msg="CreateContainer within sandbox \"964de55bd97423e88751fbd629f7d4cddd72c148848fc4e0aaaded5c14ac668c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 00:55:21.779522 containerd[2016]: time="2026-03-07T00:55:21.779444844Z" level=info msg="CreateContainer within sandbox 
\"8888a74e5c15046e425440c307f25ada5d8e599fda8f07e5f97c9d21944eb156\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b507bb877972541c2678caa050b7eaef3c8cc0979ddc03b65310d478d62dac96\"" Mar 7 00:55:21.780428 containerd[2016]: time="2026-03-07T00:55:21.780359161Z" level=info msg="StartContainer for \"b507bb877972541c2678caa050b7eaef3c8cc0979ddc03b65310d478d62dac96\"" Mar 7 00:55:21.803926 containerd[2016]: time="2026-03-07T00:55:21.803747521Z" level=info msg="CreateContainer within sandbox \"964de55bd97423e88751fbd629f7d4cddd72c148848fc4e0aaaded5c14ac668c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"375754f898165d04b9b9a08ed8f84b91dac704d7d59ff3258ab147f40d607dc6\"" Mar 7 00:55:21.804755 containerd[2016]: time="2026-03-07T00:55:21.804713857Z" level=info msg="StartContainer for \"375754f898165d04b9b9a08ed8f84b91dac704d7d59ff3258ab147f40d607dc6\"" Mar 7 00:55:21.813555 containerd[2016]: time="2026-03-07T00:55:21.813350269Z" level=info msg="CreateContainer within sandbox \"09fa01ab3a413ade05051a3a794c1e07d50868535548153f4d8d77ad381b81c7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a23b8460fdf6b25b48127cd0bd8a68fcfbfb0567525e83e72869fd277692787\"" Mar 7 00:55:21.814631 containerd[2016]: time="2026-03-07T00:55:21.814570909Z" level=info msg="StartContainer for \"0a23b8460fdf6b25b48127cd0bd8a68fcfbfb0567525e83e72869fd277692787\"" Mar 7 00:55:21.825941 kubelet[2845]: I0307 00:55:21.825888 2845 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-228" Mar 7 00:55:21.827878 kubelet[2845]: E0307 00:55:21.827741 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.228:6443/api/v1/nodes\": dial tcp 172.31.17.228:6443: connect: connection refused" node="ip-172-31-17-228" Mar 7 00:55:21.841049 systemd[1]: Started cri-containerd-b507bb877972541c2678caa050b7eaef3c8cc0979ddc03b65310d478d62dac96.scope - 
libcontainer container b507bb877972541c2678caa050b7eaef3c8cc0979ddc03b65310d478d62dac96. Mar 7 00:55:21.904690 systemd[1]: Started cri-containerd-375754f898165d04b9b9a08ed8f84b91dac704d7d59ff3258ab147f40d607dc6.scope - libcontainer container 375754f898165d04b9b9a08ed8f84b91dac704d7d59ff3258ab147f40d607dc6. Mar 7 00:55:21.921299 systemd[1]: Started cri-containerd-0a23b8460fdf6b25b48127cd0bd8a68fcfbfb0567525e83e72869fd277692787.scope - libcontainer container 0a23b8460fdf6b25b48127cd0bd8a68fcfbfb0567525e83e72869fd277692787. Mar 7 00:55:21.956533 containerd[2016]: time="2026-03-07T00:55:21.956454073Z" level=info msg="StartContainer for \"b507bb877972541c2678caa050b7eaef3c8cc0979ddc03b65310d478d62dac96\" returns successfully" Mar 7 00:55:22.055160 containerd[2016]: time="2026-03-07T00:55:22.055101550Z" level=info msg="StartContainer for \"0a23b8460fdf6b25b48127cd0bd8a68fcfbfb0567525e83e72869fd277692787\" returns successfully" Mar 7 00:55:22.065828 containerd[2016]: time="2026-03-07T00:55:22.065766790Z" level=info msg="StartContainer for \"375754f898165d04b9b9a08ed8f84b91dac704d7d59ff3258ab147f40d607dc6\" returns successfully" Mar 7 00:55:22.263311 kubelet[2845]: E0307 00:55:22.262208 2845 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Mar 7 00:55:22.276636 kubelet[2845]: E0307 00:55:22.274636 2845 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Mar 7 00:55:22.289434 kubelet[2845]: E0307 00:55:22.289363 2845 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Mar 7 00:55:23.291727 kubelet[2845]: E0307 00:55:23.291461 2845 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Mar 7 00:55:23.291727 kubelet[2845]: E0307 00:55:23.291533 2845 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Mar 7 00:55:23.431166 kubelet[2845]: I0307 00:55:23.431095 2845 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-228" Mar 7 00:55:26.514729 kubelet[2845]: E0307 00:55:26.514668 2845 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Mar 7 00:55:26.573549 kubelet[2845]: I0307 00:55:26.573466 2845 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-228" Mar 7 00:55:26.602926 kubelet[2845]: I0307 00:55:26.602773 2845 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-228" Mar 7 00:55:26.616847 kubelet[2845]: E0307 00:55:26.616605 2845 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-17-228.189a690fdccd6f95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-228,UID:ip-172-31-17-228,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-228,},FirstTimestamp:2026-03-07 00:55:20.173588373 +0000 UTC m=+1.387924256,LastTimestamp:2026-03-07 00:55:20.173588373 +0000 UTC m=+1.387924256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-228,}" Mar 7 00:55:26.637542 kubelet[2845]: E0307 00:55:26.637469 2845 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-228\" is forbidden: no PriorityClass with name system-node-critical 
was found" pod="kube-system/kube-scheduler-ip-172-31-17-228" Mar 7 00:55:26.637542 kubelet[2845]: I0307 00:55:26.637538 2845 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-228" Mar 7 00:55:26.641051 kubelet[2845]: E0307 00:55:26.640990 2845 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-228\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-228" Mar 7 00:55:26.641051 kubelet[2845]: I0307 00:55:26.641040 2845 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-228" Mar 7 00:55:26.650061 kubelet[2845]: E0307 00:55:26.650005 2845 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-228\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-228" Mar 7 00:55:27.169220 kubelet[2845]: I0307 00:55:27.169156 2845 apiserver.go:52] "Watching apiserver" Mar 7 00:55:27.204477 kubelet[2845]: I0307 00:55:27.204311 2845 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 00:55:27.989063 kubelet[2845]: I0307 00:55:27.989017 2845 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-228" Mar 7 00:55:28.763038 systemd[1]: Reloading requested from client PID 3132 ('systemctl') (unit session-7.scope)... Mar 7 00:55:28.763063 systemd[1]: Reloading... Mar 7 00:55:28.949448 zram_generator::config[3184]: No configuration found. Mar 7 00:55:29.158409 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:55:29.362011 systemd[1]: Reloading finished in 598 ms. 
Mar 7 00:55:29.442484 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:55:29.460050 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 00:55:29.460657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:55:29.460847 systemd[1]: kubelet.service: Consumed 2.144s CPU time, 120.3M memory peak, 0B memory swap peak. Mar 7 00:55:29.468093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:55:29.834836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:55:29.846973 (kubelet)[3232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 00:55:29.960402 kubelet[3232]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 00:55:29.960402 kubelet[3232]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:55:29.960912 kubelet[3232]: I0307 00:55:29.960564 3232 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 00:55:29.982994 kubelet[3232]: I0307 00:55:29.982923 3232 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 00:55:29.982994 kubelet[3232]: I0307 00:55:29.982969 3232 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 00:55:29.983202 kubelet[3232]: I0307 00:55:29.983018 3232 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 00:55:29.983202 kubelet[3232]: I0307 00:55:29.983034 3232 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 00:55:29.983497 kubelet[3232]: I0307 00:55:29.983466 3232 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 00:55:29.988425 kubelet[3232]: I0307 00:55:29.988358 3232 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 00:55:29.998411 kubelet[3232]: I0307 00:55:29.998196 3232 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 00:55:30.008445 kubelet[3232]: E0307 00:55:30.008172 3232 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 00:55:30.008445 kubelet[3232]: I0307 00:55:30.008261 3232 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 00:55:30.016436 kubelet[3232]: I0307 00:55:30.016326 3232 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 00:55:30.016777 kubelet[3232]: I0307 00:55:30.016718 3232 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 00:55:30.017039 kubelet[3232]: I0307 00:55:30.016769 3232 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-228","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 00:55:30.017039 kubelet[3232]: I0307 00:55:30.017031 3232 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 
00:55:30.017244 kubelet[3232]: I0307 00:55:30.017049 3232 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 00:55:30.017244 kubelet[3232]: I0307 00:55:30.017087 3232 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 00:55:30.017492 kubelet[3232]: I0307 00:55:30.017454 3232 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:55:30.020110 kubelet[3232]: I0307 00:55:30.020058 3232 kubelet.go:475] "Attempting to sync node with API server" Mar 7 00:55:30.020110 kubelet[3232]: I0307 00:55:30.020112 3232 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 00:55:30.020489 kubelet[3232]: I0307 00:55:30.020163 3232 kubelet.go:387] "Adding apiserver pod source" Mar 7 00:55:30.020489 kubelet[3232]: I0307 00:55:30.020183 3232 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 00:55:30.039432 kubelet[3232]: I0307 00:55:30.037677 3232 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 00:55:30.039432 kubelet[3232]: I0307 00:55:30.038714 3232 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 00:55:30.039432 kubelet[3232]: I0307 00:55:30.038764 3232 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 00:55:30.046428 kubelet[3232]: I0307 00:55:30.046013 3232 server.go:1262] "Started kubelet" Mar 7 00:55:30.056415 kubelet[3232]: I0307 00:55:30.054992 3232 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 00:55:30.064965 kubelet[3232]: I0307 00:55:30.064909 3232 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 00:55:30.070186 kubelet[3232]: I0307 00:55:30.070148 3232 server.go:310] "Adding debug handlers to kubelet server" 
Mar 7 00:55:30.080987 kubelet[3232]: I0307 00:55:30.080891 3232 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 00:55:30.081217 kubelet[3232]: I0307 00:55:30.081191 3232 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 00:55:30.081757 kubelet[3232]: I0307 00:55:30.081731 3232 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 00:55:30.083670 kubelet[3232]: I0307 00:55:30.082495 3232 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 00:55:30.103108 kubelet[3232]: I0307 00:55:30.101478 3232 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 00:55:30.103807 kubelet[3232]: E0307 00:55:30.103758 3232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-17-228\" not found" Mar 7 00:55:30.107423 kubelet[3232]: I0307 00:55:30.106262 3232 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 00:55:30.108422 kubelet[3232]: I0307 00:55:30.107931 3232 reconciler.go:29] "Reconciler: start to sync state" Mar 7 00:55:30.126846 kubelet[3232]: I0307 00:55:30.126583 3232 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 00:55:30.135223 kubelet[3232]: I0307 00:55:30.135168 3232 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 00:55:30.135223 kubelet[3232]: I0307 00:55:30.135216 3232 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 00:55:30.135469 kubelet[3232]: I0307 00:55:30.135261 3232 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 00:55:30.135469 kubelet[3232]: E0307 00:55:30.135340 3232 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 00:55:30.179412 kubelet[3232]: I0307 00:55:30.179332 3232 factory.go:223] Registration of the containerd container factory successfully Mar 7 00:55:30.180502 kubelet[3232]: I0307 00:55:30.179437 3232 factory.go:223] Registration of the systemd container factory successfully Mar 7 00:55:30.180502 kubelet[3232]: I0307 00:55:30.179567 3232 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 00:55:30.236278 kubelet[3232]: E0307 00:55:30.235465 3232 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 00:55:30.321658 kubelet[3232]: I0307 00:55:30.320234 3232 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 00:55:30.321658 kubelet[3232]: I0307 00:55:30.320269 3232 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 00:55:30.321658 kubelet[3232]: I0307 00:55:30.320306 3232 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:55:30.321658 kubelet[3232]: I0307 00:55:30.320618 3232 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 00:55:30.321658 kubelet[3232]: I0307 00:55:30.320639 3232 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 00:55:30.321658 kubelet[3232]: I0307 00:55:30.320668 3232 policy_none.go:49] "None policy: Start" Mar 7 00:55:30.321658 kubelet[3232]: I0307 00:55:30.320686 3232 memory_manager.go:187] 
"Starting memorymanager" policy="None" Mar 7 00:55:30.321658 kubelet[3232]: I0307 00:55:30.320706 3232 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 00:55:30.321658 kubelet[3232]: I0307 00:55:30.320873 3232 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 00:55:30.321658 kubelet[3232]: I0307 00:55:30.320890 3232 policy_none.go:47] "Start" Mar 7 00:55:30.338136 kubelet[3232]: E0307 00:55:30.336584 3232 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 00:55:30.338136 kubelet[3232]: I0307 00:55:30.336868 3232 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 00:55:30.338136 kubelet[3232]: I0307 00:55:30.336887 3232 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 00:55:30.338136 kubelet[3232]: I0307 00:55:30.337290 3232 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 00:55:30.341023 kubelet[3232]: E0307 00:55:30.340963 3232 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 00:55:30.439462 kubelet[3232]: I0307 00:55:30.437616 3232 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-228" Mar 7 00:55:30.440973 kubelet[3232]: I0307 00:55:30.439068 3232 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-228" Mar 7 00:55:30.441144 kubelet[3232]: I0307 00:55:30.439282 3232 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-228" Mar 7 00:55:30.459960 kubelet[3232]: E0307 00:55:30.459917 3232 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-228\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-228" Mar 7 00:55:30.463773 kubelet[3232]: I0307 00:55:30.463624 3232 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-228" Mar 7 00:55:30.478628 kubelet[3232]: I0307 00:55:30.476700 3232 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-228" Mar 7 00:55:30.478628 kubelet[3232]: I0307 00:55:30.476815 3232 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-228" Mar 7 00:55:30.511992 kubelet[3232]: I0307 00:55:30.511152 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9335f14a628062281164641626ad4a0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"a9335f14a628062281164641626ad4a0\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Mar 7 00:55:30.511992 kubelet[3232]: I0307 00:55:30.511420 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9335f14a628062281164641626ad4a0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: 
\"a9335f14a628062281164641626ad4a0\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Mar 7 00:55:30.511992 kubelet[3232]: I0307 00:55:30.511484 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c69048d6d7d902d081f32290597248ea-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-228\" (UID: \"c69048d6d7d902d081f32290597248ea\") " pod="kube-system/kube-scheduler-ip-172-31-17-228" Mar 7 00:55:30.511992 kubelet[3232]: I0307 00:55:30.511957 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6eda7cda7d5c59363b1b6408a76a9ef8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-228\" (UID: \"6eda7cda7d5c59363b1b6408a76a9ef8\") " pod="kube-system/kube-apiserver-ip-172-31-17-228" Mar 7 00:55:30.512303 kubelet[3232]: I0307 00:55:30.512074 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9335f14a628062281164641626ad4a0-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"a9335f14a628062281164641626ad4a0\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Mar 7 00:55:30.512303 kubelet[3232]: I0307 00:55:30.512112 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a9335f14a628062281164641626ad4a0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"a9335f14a628062281164641626ad4a0\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Mar 7 00:55:30.512303 kubelet[3232]: I0307 00:55:30.512147 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/a9335f14a628062281164641626ad4a0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"a9335f14a628062281164641626ad4a0\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Mar 7 00:55:30.512303 kubelet[3232]: I0307 00:55:30.512182 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6eda7cda7d5c59363b1b6408a76a9ef8-ca-certs\") pod \"kube-apiserver-ip-172-31-17-228\" (UID: \"6eda7cda7d5c59363b1b6408a76a9ef8\") " pod="kube-system/kube-apiserver-ip-172-31-17-228" Mar 7 00:55:30.512303 kubelet[3232]: I0307 00:55:30.512221 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6eda7cda7d5c59363b1b6408a76a9ef8-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-228\" (UID: \"6eda7cda7d5c59363b1b6408a76a9ef8\") " pod="kube-system/kube-apiserver-ip-172-31-17-228" Mar 7 00:55:31.024988 kubelet[3232]: I0307 00:55:31.024650 3232 apiserver.go:52] "Watching apiserver" Mar 7 00:55:31.107782 kubelet[3232]: I0307 00:55:31.107716 3232 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 00:55:31.202672 kubelet[3232]: I0307 00:55:31.201045 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-228" podStartSLOduration=1.201022387 podStartE2EDuration="1.201022387s" podCreationTimestamp="2026-03-07 00:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:31.185753395 +0000 UTC m=+1.330617247" watchObservedRunningTime="2026-03-07 00:55:31.201022387 +0000 UTC m=+1.345886239" Mar 7 00:55:31.222576 kubelet[3232]: I0307 00:55:31.222450 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ip-172-31-17-228" podStartSLOduration=4.222429355 podStartE2EDuration="4.222429355s" podCreationTimestamp="2026-03-07 00:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:31.201629395 +0000 UTC m=+1.346493271" watchObservedRunningTime="2026-03-07 00:55:31.222429355 +0000 UTC m=+1.367293207" Mar 7 00:55:31.252426 kubelet[3232]: I0307 00:55:31.250416 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-228" podStartSLOduration=1.250368836 podStartE2EDuration="1.250368836s" podCreationTimestamp="2026-03-07 00:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:31.223133959 +0000 UTC m=+1.367997823" watchObservedRunningTime="2026-03-07 00:55:31.250368836 +0000 UTC m=+1.395232676" Mar 7 00:55:31.378695 update_engine[1993]: I20260307 00:55:31.376441 1993 update_attempter.cc:509] Updating boot flags... Mar 7 00:55:31.541566 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3291) Mar 7 00:55:31.987617 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3291) Mar 7 00:55:32.308475 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3291) Mar 7 00:55:34.869082 kubelet[3232]: I0307 00:55:34.869023 3232 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 00:55:34.871113 containerd[2016]: time="2026-03-07T00:55:34.871037882Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 7 00:55:34.872282 kubelet[3232]: I0307 00:55:34.871635 3232 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 00:55:35.676370 systemd[1]: Created slice kubepods-besteffort-pod63d26128_010c_4bae_9007_507e194a77a6.slice - libcontainer container kubepods-besteffort-pod63d26128_010c_4bae_9007_507e194a77a6.slice. Mar 7 00:55:35.851808 kubelet[3232]: I0307 00:55:35.851347 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/63d26128-010c-4bae-9007-507e194a77a6-kube-proxy\") pod \"kube-proxy-sjj7p\" (UID: \"63d26128-010c-4bae-9007-507e194a77a6\") " pod="kube-system/kube-proxy-sjj7p" Mar 7 00:55:35.851808 kubelet[3232]: I0307 00:55:35.851427 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63d26128-010c-4bae-9007-507e194a77a6-lib-modules\") pod \"kube-proxy-sjj7p\" (UID: \"63d26128-010c-4bae-9007-507e194a77a6\") " pod="kube-system/kube-proxy-sjj7p" Mar 7 00:55:35.851808 kubelet[3232]: I0307 00:55:35.851471 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63d26128-010c-4bae-9007-507e194a77a6-xtables-lock\") pod \"kube-proxy-sjj7p\" (UID: \"63d26128-010c-4bae-9007-507e194a77a6\") " pod="kube-system/kube-proxy-sjj7p" Mar 7 00:55:35.851808 kubelet[3232]: I0307 00:55:35.851519 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvhmq\" (UniqueName: \"kubernetes.io/projected/63d26128-010c-4bae-9007-507e194a77a6-kube-api-access-tvhmq\") pod \"kube-proxy-sjj7p\" (UID: \"63d26128-010c-4bae-9007-507e194a77a6\") " pod="kube-system/kube-proxy-sjj7p" Mar 7 00:55:35.872735 systemd[1]: Created slice kubepods-besteffort-pod7191b3b9_6a86_4e87_b195_8014688e3bfd.slice - 
libcontainer container kubepods-besteffort-pod7191b3b9_6a86_4e87_b195_8014688e3bfd.slice. Mar 7 00:55:35.953191 kubelet[3232]: I0307 00:55:35.951826 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7191b3b9-6a86-4e87-b195-8014688e3bfd-var-lib-calico\") pod \"tigera-operator-5588576f44-smrkb\" (UID: \"7191b3b9-6a86-4e87-b195-8014688e3bfd\") " pod="tigera-operator/tigera-operator-5588576f44-smrkb" Mar 7 00:55:35.953191 kubelet[3232]: I0307 00:55:35.951888 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkglh\" (UniqueName: \"kubernetes.io/projected/7191b3b9-6a86-4e87-b195-8014688e3bfd-kube-api-access-gkglh\") pod \"tigera-operator-5588576f44-smrkb\" (UID: \"7191b3b9-6a86-4e87-b195-8014688e3bfd\") " pod="tigera-operator/tigera-operator-5588576f44-smrkb" Mar 7 00:55:35.964890 kubelet[3232]: E0307 00:55:35.964810 3232 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 7 00:55:35.965092 kubelet[3232]: E0307 00:55:35.965072 3232 projected.go:196] Error preparing data for projected volume kube-api-access-tvhmq for pod kube-system/kube-proxy-sjj7p: configmap "kube-root-ca.crt" not found Mar 7 00:55:35.965316 kubelet[3232]: E0307 00:55:35.965293 3232 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63d26128-010c-4bae-9007-507e194a77a6-kube-api-access-tvhmq podName:63d26128-010c-4bae-9007-507e194a77a6 nodeName:}" failed. No retries permitted until 2026-03-07 00:55:36.465257623 +0000 UTC m=+6.610121463 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvhmq" (UniqueName: "kubernetes.io/projected/63d26128-010c-4bae-9007-507e194a77a6-kube-api-access-tvhmq") pod "kube-proxy-sjj7p" (UID: "63d26128-010c-4bae-9007-507e194a77a6") : configmap "kube-root-ca.crt" not found Mar 7 00:55:36.184907 containerd[2016]: time="2026-03-07T00:55:36.184851396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-smrkb,Uid:7191b3b9-6a86-4e87-b195-8014688e3bfd,Namespace:tigera-operator,Attempt:0,}" Mar 7 00:55:36.242982 containerd[2016]: time="2026-03-07T00:55:36.242568780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:55:36.242982 containerd[2016]: time="2026-03-07T00:55:36.242677224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:55:36.242982 containerd[2016]: time="2026-03-07T00:55:36.242715876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:36.242982 containerd[2016]: time="2026-03-07T00:55:36.242875176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:36.283697 systemd[1]: Started cri-containerd-88ce35478c187e993b2c05604e191c5c8c26bdd1c2aabf1c78802d9271f67553.scope - libcontainer container 88ce35478c187e993b2c05604e191c5c8c26bdd1c2aabf1c78802d9271f67553. 
Mar 7 00:55:36.348433 containerd[2016]: time="2026-03-07T00:55:36.348305869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-smrkb,Uid:7191b3b9-6a86-4e87-b195-8014688e3bfd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"88ce35478c187e993b2c05604e191c5c8c26bdd1c2aabf1c78802d9271f67553\"" Mar 7 00:55:36.352101 containerd[2016]: time="2026-03-07T00:55:36.351790921Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 00:55:36.595441 containerd[2016]: time="2026-03-07T00:55:36.594904790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sjj7p,Uid:63d26128-010c-4bae-9007-507e194a77a6,Namespace:kube-system,Attempt:0,}" Mar 7 00:55:36.634642 containerd[2016]: time="2026-03-07T00:55:36.634507706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:55:36.635002 containerd[2016]: time="2026-03-07T00:55:36.634939106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:55:36.635828 containerd[2016]: time="2026-03-07T00:55:36.635756630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:36.636209 containerd[2016]: time="2026-03-07T00:55:36.636153266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:36.667756 systemd[1]: Started cri-containerd-59252e611f945bd42dc44a8e19601c0390281bb07b0ac001cab10b312c932ee6.scope - libcontainer container 59252e611f945bd42dc44a8e19601c0390281bb07b0ac001cab10b312c932ee6. 
Mar 7 00:55:36.710823 containerd[2016]: time="2026-03-07T00:55:36.710593863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sjj7p,Uid:63d26128-010c-4bae-9007-507e194a77a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"59252e611f945bd42dc44a8e19601c0390281bb07b0ac001cab10b312c932ee6\"" Mar 7 00:55:36.722068 containerd[2016]: time="2026-03-07T00:55:36.721874511Z" level=info msg="CreateContainer within sandbox \"59252e611f945bd42dc44a8e19601c0390281bb07b0ac001cab10b312c932ee6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 00:55:36.753320 containerd[2016]: time="2026-03-07T00:55:36.753138111Z" level=info msg="CreateContainer within sandbox \"59252e611f945bd42dc44a8e19601c0390281bb07b0ac001cab10b312c932ee6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d01e76c127cb8ef7c9129547bfc319989bc8a608a45ac9c3c47c4636e0f5ec77\"" Mar 7 00:55:36.755096 containerd[2016]: time="2026-03-07T00:55:36.754968099Z" level=info msg="StartContainer for \"d01e76c127cb8ef7c9129547bfc319989bc8a608a45ac9c3c47c4636e0f5ec77\"" Mar 7 00:55:36.807962 systemd[1]: Started cri-containerd-d01e76c127cb8ef7c9129547bfc319989bc8a608a45ac9c3c47c4636e0f5ec77.scope - libcontainer container d01e76c127cb8ef7c9129547bfc319989bc8a608a45ac9c3c47c4636e0f5ec77. Mar 7 00:55:36.864517 containerd[2016]: time="2026-03-07T00:55:36.864216675Z" level=info msg="StartContainer for \"d01e76c127cb8ef7c9129547bfc319989bc8a608a45ac9c3c47c4636e0f5ec77\" returns successfully" Mar 7 00:55:37.564200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380247430.mount: Deactivated successfully. 
Mar 7 00:55:38.716398 kubelet[3232]: I0307 00:55:38.715261 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sjj7p" podStartSLOduration=3.715241945 podStartE2EDuration="3.715241945s" podCreationTimestamp="2026-03-07 00:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:37.287535002 +0000 UTC m=+7.432398854" watchObservedRunningTime="2026-03-07 00:55:38.715241945 +0000 UTC m=+8.860105773" Mar 7 00:55:38.768598 containerd[2016]: time="2026-03-07T00:55:38.768485273Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:38.771468 containerd[2016]: time="2026-03-07T00:55:38.770904161Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 7 00:55:38.774195 containerd[2016]: time="2026-03-07T00:55:38.774120941Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:38.780646 containerd[2016]: time="2026-03-07T00:55:38.780592913Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:38.782752 containerd[2016]: time="2026-03-07T00:55:38.782293445Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.430444696s" Mar 7 00:55:38.782752 containerd[2016]: time="2026-03-07T00:55:38.782355017Z" level=info 
msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 7 00:55:38.791698 containerd[2016]: time="2026-03-07T00:55:38.791612813Z" level=info msg="CreateContainer within sandbox \"88ce35478c187e993b2c05604e191c5c8c26bdd1c2aabf1c78802d9271f67553\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 00:55:38.824503 containerd[2016]: time="2026-03-07T00:55:38.824430245Z" level=info msg="CreateContainer within sandbox \"88ce35478c187e993b2c05604e191c5c8c26bdd1c2aabf1c78802d9271f67553\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c\"" Mar 7 00:55:38.828210 containerd[2016]: time="2026-03-07T00:55:38.827351513Z" level=info msg="StartContainer for \"6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c\"" Mar 7 00:55:38.879694 systemd[1]: Started cri-containerd-6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c.scope - libcontainer container 6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c. 
Mar 7 00:55:38.924865 containerd[2016]: time="2026-03-07T00:55:38.924787482Z" level=info msg="StartContainer for \"6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c\" returns successfully" Mar 7 00:55:39.299640 kubelet[3232]: I0307 00:55:39.299533 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-smrkb" podStartSLOduration=1.866115532 podStartE2EDuration="4.299481544s" podCreationTimestamp="2026-03-07 00:55:35 +0000 UTC" firstStartedPulling="2026-03-07 00:55:36.351001045 +0000 UTC m=+6.495864885" lastFinishedPulling="2026-03-07 00:55:38.784367069 +0000 UTC m=+8.929230897" observedRunningTime="2026-03-07 00:55:39.297659344 +0000 UTC m=+9.442523268" watchObservedRunningTime="2026-03-07 00:55:39.299481544 +0000 UTC m=+9.444345480" Mar 7 00:55:45.899277 sudo[2333]: pam_unix(sudo:session): session closed for user root Mar 7 00:55:45.979800 sshd[2330]: pam_unix(sshd:session): session closed for user core Mar 7 00:55:45.989912 systemd-logind[1991]: Session 7 logged out. Waiting for processes to exit. Mar 7 00:55:45.991661 systemd[1]: sshd@6-172.31.17.228:22-20.161.92.111:42640.service: Deactivated successfully. Mar 7 00:55:45.997964 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 00:55:46.003555 systemd[1]: session-7.scope: Consumed 10.785s CPU time, 156.1M memory peak, 0B memory swap peak. Mar 7 00:55:46.006803 systemd-logind[1991]: Removed session 7. Mar 7 00:55:56.437624 systemd[1]: Created slice kubepods-besteffort-pod4a6e8e42_5b05_4de6_82e8_b146dbd9f060.slice - libcontainer container kubepods-besteffort-pod4a6e8e42_5b05_4de6_82e8_b146dbd9f060.slice. 
Mar 7 00:55:56.594792 kubelet[3232]: I0307 00:55:56.594573 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a6e8e42-5b05-4de6-82e8-b146dbd9f060-tigera-ca-bundle\") pod \"calico-typha-66688cbc4b-vs7lf\" (UID: \"4a6e8e42-5b05-4de6-82e8-b146dbd9f060\") " pod="calico-system/calico-typha-66688cbc4b-vs7lf" Mar 7 00:55:56.594792 kubelet[3232]: I0307 00:55:56.594647 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4a6e8e42-5b05-4de6-82e8-b146dbd9f060-typha-certs\") pod \"calico-typha-66688cbc4b-vs7lf\" (UID: \"4a6e8e42-5b05-4de6-82e8-b146dbd9f060\") " pod="calico-system/calico-typha-66688cbc4b-vs7lf" Mar 7 00:55:56.594792 kubelet[3232]: I0307 00:55:56.594690 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzntj\" (UniqueName: \"kubernetes.io/projected/4a6e8e42-5b05-4de6-82e8-b146dbd9f060-kube-api-access-fzntj\") pod \"calico-typha-66688cbc4b-vs7lf\" (UID: \"4a6e8e42-5b05-4de6-82e8-b146dbd9f060\") " pod="calico-system/calico-typha-66688cbc4b-vs7lf" Mar 7 00:55:56.602514 systemd[1]: Created slice kubepods-besteffort-pod5db31521_9067_4050_835d_213e394ea42b.slice - libcontainer container kubepods-besteffort-pod5db31521_9067_4050_835d_213e394ea42b.slice. 
Mar 7 00:55:56.697509 kubelet[3232]: I0307 00:55:56.695608 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5db31521-9067-4050-835d-213e394ea42b-policysync\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.697793 kubelet[3232]: I0307 00:55:56.697753 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/5db31521-9067-4050-835d-213e394ea42b-sys-fs\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.697965 kubelet[3232]: I0307 00:55:56.697925 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5db31521-9067-4050-835d-213e394ea42b-var-run-calico\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.698132 kubelet[3232]: I0307 00:55:56.698083 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5db31521-9067-4050-835d-213e394ea42b-xtables-lock\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.698943 kubelet[3232]: I0307 00:55:56.698178 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5db31521-9067-4050-835d-213e394ea42b-node-certs\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.698943 kubelet[3232]: I0307 00:55:56.698694 3232 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5db31521-9067-4050-835d-213e394ea42b-cni-net-dir\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.701657 kubelet[3232]: I0307 00:55:56.700443 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5db31521-9067-4050-835d-213e394ea42b-cni-log-dir\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.701826 kubelet[3232]: I0307 00:55:56.701699 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5db31521-9067-4050-835d-213e394ea42b-flexvol-driver-host\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.701826 kubelet[3232]: I0307 00:55:56.701755 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5db31521-9067-4050-835d-213e394ea42b-tigera-ca-bundle\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.701963 kubelet[3232]: I0307 00:55:56.701822 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5db31521-9067-4050-835d-213e394ea42b-lib-modules\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.701963 kubelet[3232]: I0307 00:55:56.701913 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpffs\" (UniqueName: \"kubernetes.io/host-path/5db31521-9067-4050-835d-213e394ea42b-bpffs\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.703332 kubelet[3232]: I0307 00:55:56.702176 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5db31521-9067-4050-835d-213e394ea42b-cni-bin-dir\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.703332 kubelet[3232]: I0307 00:55:56.702277 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5db31521-9067-4050-835d-213e394ea42b-var-lib-calico\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.703577 kubelet[3232]: I0307 00:55:56.703440 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snqnb\" (UniqueName: \"kubernetes.io/projected/5db31521-9067-4050-835d-213e394ea42b-kube-api-access-snqnb\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.703577 kubelet[3232]: I0307 00:55:56.703529 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/5db31521-9067-4050-835d-213e394ea42b-nodeproc\") pod \"calico-node-g8cdd\" (UID: \"5db31521-9067-4050-835d-213e394ea42b\") " pod="calico-system/calico-node-g8cdd" Mar 7 00:55:56.725921 kubelet[3232]: E0307 00:55:56.725039 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbhbv" podUID="0b3c7869-094b-4d01-b5aa-efdf0e733f4f" Mar 7 00:55:56.751912 containerd[2016]: time="2026-03-07T00:55:56.750902734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66688cbc4b-vs7lf,Uid:4a6e8e42-5b05-4de6-82e8-b146dbd9f060,Namespace:calico-system,Attempt:0,}" Mar 7 00:55:56.805156 kubelet[3232]: I0307 00:55:56.804780 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b3c7869-094b-4d01-b5aa-efdf0e733f4f-registration-dir\") pod \"csi-node-driver-fbhbv\" (UID: \"0b3c7869-094b-4d01-b5aa-efdf0e733f4f\") " pod="calico-system/csi-node-driver-fbhbv" Mar 7 00:55:56.805156 kubelet[3232]: I0307 00:55:56.804898 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0b3c7869-094b-4d01-b5aa-efdf0e733f4f-varrun\") pod \"csi-node-driver-fbhbv\" (UID: \"0b3c7869-094b-4d01-b5aa-efdf0e733f4f\") " pod="calico-system/csi-node-driver-fbhbv" Mar 7 00:55:56.805156 kubelet[3232]: I0307 00:55:56.805039 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5fck\" (UniqueName: \"kubernetes.io/projected/0b3c7869-094b-4d01-b5aa-efdf0e733f4f-kube-api-access-d5fck\") pod \"csi-node-driver-fbhbv\" (UID: \"0b3c7869-094b-4d01-b5aa-efdf0e733f4f\") " pod="calico-system/csi-node-driver-fbhbv" Mar 7 00:55:56.805580 kubelet[3232]: I0307 00:55:56.805210 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b3c7869-094b-4d01-b5aa-efdf0e733f4f-kubelet-dir\") pod \"csi-node-driver-fbhbv\" (UID: \"0b3c7869-094b-4d01-b5aa-efdf0e733f4f\") " pod="calico-system/csi-node-driver-fbhbv" Mar 7 00:55:56.805580 kubelet[3232]: I0307 
00:55:56.805254 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0b3c7869-094b-4d01-b5aa-efdf0e733f4f-socket-dir\") pod \"csi-node-driver-fbhbv\" (UID: \"0b3c7869-094b-4d01-b5aa-efdf0e733f4f\") " pod="calico-system/csi-node-driver-fbhbv" Mar 7 00:55:56.820841 kubelet[3232]: E0307 00:55:56.818816 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:56.820841 kubelet[3232]: W0307 00:55:56.819169 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:56.820841 kubelet[3232]: E0307 00:55:56.819212 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:56.824655 kubelet[3232]: E0307 00:55:56.824560 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:56.824812 kubelet[3232]: W0307 00:55:56.824641 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:56.824812 kubelet[3232]: E0307 00:55:56.824801 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:56.826781 kubelet[3232]: E0307 00:55:56.826728 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:56.826781 kubelet[3232]: W0307 00:55:56.826768 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:56.828562 kubelet[3232]: E0307 00:55:56.826802 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:56.830050 kubelet[3232]: E0307 00:55:56.829987 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:56.830198 kubelet[3232]: W0307 00:55:56.830067 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:56.830272 kubelet[3232]: E0307 00:55:56.830103 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:56.834403 kubelet[3232]: E0307 00:55:56.832807 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:56.834403 kubelet[3232]: W0307 00:55:56.832849 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:56.834403 kubelet[3232]: E0307 00:55:56.832882 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:56.847816 kubelet[3232]: E0307 00:55:56.845926 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:56.847816 kubelet[3232]: W0307 00:55:56.845980 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:56.847816 kubelet[3232]: E0307 00:55:56.846022 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:56.848101 kubelet[3232]: E0307 00:55:56.847877 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:56.848101 kubelet[3232]: W0307 00:55:56.847904 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:56.848101 kubelet[3232]: E0307 00:55:56.847936 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:56.849617 kubelet[3232]: E0307 00:55:56.849563 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:56.849617 kubelet[3232]: W0307 00:55:56.849605 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:56.849833 kubelet[3232]: E0307 00:55:56.849640 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:56.852176 kubelet[3232]: E0307 00:55:56.852123 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:56.852176 kubelet[3232]: W0307 00:55:56.852163 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:56.852401 kubelet[3232]: E0307 00:55:56.852197 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:56.853197 kubelet[3232]: E0307 00:55:56.853149 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:56.853197 kubelet[3232]: W0307 00:55:56.853187 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:56.853406 kubelet[3232]: E0307 00:55:56.853219 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:56.855850 kubelet[3232]: E0307 00:55:56.855796 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:56.855850 kubelet[3232]: W0307 00:55:56.855837 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:56.856060 kubelet[3232]: E0307 00:55:56.855870 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:56.856878 kubelet[3232]: E0307 00:55:56.856760 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:56.856878 kubelet[3232]: W0307 00:55:56.856800 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:56.856878 kubelet[3232]: E0307 00:55:56.856830 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:56.860455 kubelet[3232]: E0307 00:55:56.858637 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:56.860455 kubelet[3232]: W0307 00:55:56.858688 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:56.860455 kubelet[3232]: E0307 00:55:56.858722 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:55:56.865356 containerd[2016]: time="2026-03-07T00:55:56.862335083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 00:55:56.865356 containerd[2016]: time="2026-03-07T00:55:56.864230363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 00:55:56.865356 containerd[2016]: time="2026-03-07T00:55:56.864307559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:55:56.868405 containerd[2016]: time="2026-03-07T00:55:56.866725079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:55:56.941423 systemd[1]: Started cri-containerd-02fee6069dbfc03ea4712794270562a7efd510b7491b0a67c08ace4b2b06b4ff.scope - libcontainer container 02fee6069dbfc03ea4712794270562a7efd510b7491b0a67c08ace4b2b06b4ff.
Mar 7 00:55:57.042751 containerd[2016]: time="2026-03-07T00:55:57.042530072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66688cbc4b-vs7lf,Uid:4a6e8e42-5b05-4de6-82e8-b146dbd9f060,Namespace:calico-system,Attempt:0,} returns sandbox id \"02fee6069dbfc03ea4712794270562a7efd510b7491b0a67c08ace4b2b06b4ff\""
Mar 7 00:55:57.047324 containerd[2016]: time="2026-03-07T00:55:57.046796504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 7 00:55:57.216309 containerd[2016]: time="2026-03-07T00:55:57.215738997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g8cdd,Uid:5db31521-9067-4050-835d-213e394ea42b,Namespace:calico-system,Attempt:0,}"
Mar 7 00:55:57.259827 containerd[2016]: time="2026-03-07T00:55:57.259489653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 00:55:57.259827 containerd[2016]: time="2026-03-07T00:55:57.259595229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 00:55:57.259981 containerd[2016]: time="2026-03-07T00:55:57.259621569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:57.260658 containerd[2016]: time="2026-03-07T00:55:57.260005929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:57.292723 systemd[1]: Started cri-containerd-117d620ac3cf7c715fde839666f347358246a9363830b22610a4c7e07f6fa1e0.scope - libcontainer container 117d620ac3cf7c715fde839666f347358246a9363830b22610a4c7e07f6fa1e0. Mar 7 00:55:57.344602 containerd[2016]: time="2026-03-07T00:55:57.344534229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g8cdd,Uid:5db31521-9067-4050-835d-213e394ea42b,Namespace:calico-system,Attempt:0,} returns sandbox id \"117d620ac3cf7c715fde839666f347358246a9363830b22610a4c7e07f6fa1e0\"" Mar 7 00:55:58.137879 kubelet[3232]: E0307 00:55:58.137764 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbhbv" podUID="0b3c7869-094b-4d01-b5aa-efdf0e733f4f" Mar 7 00:55:58.312289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149038978.mount: Deactivated successfully. 
Mar 7 00:55:59.078880 containerd[2016]: time="2026-03-07T00:55:59.077316466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:59.081670 containerd[2016]: time="2026-03-07T00:55:59.081614434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 7 00:55:59.084094 containerd[2016]: time="2026-03-07T00:55:59.084023122Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:59.090618 containerd[2016]: time="2026-03-07T00:55:59.088946170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:59.091973 containerd[2016]: time="2026-03-07T00:55:59.091898806Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.044739458s" Mar 7 00:55:59.091973 containerd[2016]: time="2026-03-07T00:55:59.091964446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 7 00:55:59.095261 containerd[2016]: time="2026-03-07T00:55:59.094321354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 00:55:59.126856 containerd[2016]: time="2026-03-07T00:55:59.126765394Z" level=info msg="CreateContainer within sandbox \"02fee6069dbfc03ea4712794270562a7efd510b7491b0a67c08ace4b2b06b4ff\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 00:55:59.157031 containerd[2016]: time="2026-03-07T00:55:59.156954790Z" level=info msg="CreateContainer within sandbox \"02fee6069dbfc03ea4712794270562a7efd510b7491b0a67c08ace4b2b06b4ff\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"17c62bc855c3c0690d1223c227cd7ed918f4e0b3771fdf918fb42373bfd992a5\"" Mar 7 00:55:59.158433 containerd[2016]: time="2026-03-07T00:55:59.158356570Z" level=info msg="StartContainer for \"17c62bc855c3c0690d1223c227cd7ed918f4e0b3771fdf918fb42373bfd992a5\"" Mar 7 00:55:59.220800 systemd[1]: Started cri-containerd-17c62bc855c3c0690d1223c227cd7ed918f4e0b3771fdf918fb42373bfd992a5.scope - libcontainer container 17c62bc855c3c0690d1223c227cd7ed918f4e0b3771fdf918fb42373bfd992a5. Mar 7 00:55:59.287976 containerd[2016]: time="2026-03-07T00:55:59.287765123Z" level=info msg="StartContainer for \"17c62bc855c3c0690d1223c227cd7ed918f4e0b3771fdf918fb42373bfd992a5\" returns successfully" Mar 7 00:55:59.417912 kubelet[3232]: E0307 00:55:59.417089 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.417912 kubelet[3232]: W0307 00:55:59.417128 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.417912 kubelet[3232]: E0307 00:55:59.417161 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:59.425775 kubelet[3232]: E0307 00:55:59.424800 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.425775 kubelet[3232]: W0307 00:55:59.424820 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.425775 kubelet[3232]: E0307 00:55:59.424842 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:59.425775 kubelet[3232]: E0307 00:55:59.425202 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.425775 kubelet[3232]: W0307 00:55:59.425220 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.425775 kubelet[3232]: E0307 00:55:59.425240 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:59.425775 kubelet[3232]: E0307 00:55:59.425652 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.426308 kubelet[3232]: W0307 00:55:59.425672 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.426308 kubelet[3232]: E0307 00:55:59.425694 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:59.427957 kubelet[3232]: E0307 00:55:59.427905 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.427957 kubelet[3232]: W0307 00:55:59.427946 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.428124 kubelet[3232]: E0307 00:55:59.427980 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:59.428656 kubelet[3232]: E0307 00:55:59.428434 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.428656 kubelet[3232]: W0307 00:55:59.428466 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.428656 kubelet[3232]: E0307 00:55:59.428494 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:59.456317 kubelet[3232]: E0307 00:55:59.456238 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.456317 kubelet[3232]: W0307 00:55:59.456304 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.456580 kubelet[3232]: E0307 00:55:59.456337 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:59.457158 kubelet[3232]: E0307 00:55:59.457116 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.457158 kubelet[3232]: W0307 00:55:59.457151 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.457646 kubelet[3232]: E0307 00:55:59.457181 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:59.459438 kubelet[3232]: E0307 00:55:59.459250 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.459438 kubelet[3232]: W0307 00:55:59.459290 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.459438 kubelet[3232]: E0307 00:55:59.459321 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:59.460548 kubelet[3232]: E0307 00:55:59.460294 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.460548 kubelet[3232]: W0307 00:55:59.460317 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.460548 kubelet[3232]: E0307 00:55:59.460342 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:59.461773 kubelet[3232]: E0307 00:55:59.461650 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.461773 kubelet[3232]: W0307 00:55:59.461706 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.463005 kubelet[3232]: E0307 00:55:59.461737 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:59.463871 kubelet[3232]: E0307 00:55:59.463821 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.464766 kubelet[3232]: W0307 00:55:59.464029 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.464766 kubelet[3232]: E0307 00:55:59.464069 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:59.465615 kubelet[3232]: E0307 00:55:59.465276 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.465615 kubelet[3232]: W0307 00:55:59.465319 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.465615 kubelet[3232]: E0307 00:55:59.465350 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:59.467491 kubelet[3232]: E0307 00:55:59.466829 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.467491 kubelet[3232]: W0307 00:55:59.466862 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.467491 kubelet[3232]: E0307 00:55:59.466893 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:59.469415 kubelet[3232]: E0307 00:55:59.468511 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.469865 kubelet[3232]: W0307 00:55:59.469549 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.469865 kubelet[3232]: E0307 00:55:59.469596 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:59.471615 kubelet[3232]: E0307 00:55:59.470846 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.471615 kubelet[3232]: W0307 00:55:59.470879 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.471615 kubelet[3232]: E0307 00:55:59.470910 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:59.475429 kubelet[3232]: E0307 00:55:59.474740 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.475429 kubelet[3232]: W0307 00:55:59.474775 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.475429 kubelet[3232]: E0307 00:55:59.474808 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:59.476466 kubelet[3232]: E0307 00:55:59.476433 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.476718 kubelet[3232]: W0307 00:55:59.476690 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.477553 kubelet[3232]: E0307 00:55:59.477253 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:59.478573 kubelet[3232]: E0307 00:55:59.478536 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.478729 kubelet[3232]: W0307 00:55:59.478704 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.478840 kubelet[3232]: E0307 00:55:59.478817 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:59.480216 kubelet[3232]: E0307 00:55:59.480164 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.480216 kubelet[3232]: W0307 00:55:59.480209 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.480460 kubelet[3232]: E0307 00:55:59.480242 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:59.481500 kubelet[3232]: E0307 00:55:59.481448 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.481500 kubelet[3232]: W0307 00:55:59.481487 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.481695 kubelet[3232]: E0307 00:55:59.481520 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:59.483453 kubelet[3232]: E0307 00:55:59.483285 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.483453 kubelet[3232]: W0307 00:55:59.483349 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.483453 kubelet[3232]: E0307 00:55:59.483456 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:59.484302 kubelet[3232]: E0307 00:55:59.484229 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.484302 kubelet[3232]: W0307 00:55:59.484292 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.485528 kubelet[3232]: E0307 00:55:59.484323 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:59.486461 kubelet[3232]: E0307 00:55:59.485864 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:59.486461 kubelet[3232]: W0307 00:55:59.485913 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:59.486461 kubelet[3232]: E0307 00:55:59.485963 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:56:00.138492 kubelet[3232]: E0307 00:56:00.138416 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbhbv" podUID="0b3c7869-094b-4d01-b5aa-efdf0e733f4f" Mar 7 00:56:00.351353 containerd[2016]: time="2026-03-07T00:56:00.351181896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:00.353218 containerd[2016]: time="2026-03-07T00:56:00.353159664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 7 00:56:00.357415 containerd[2016]: time="2026-03-07T00:56:00.355975836Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:00.363917 containerd[2016]: time="2026-03-07T00:56:00.363839952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:00.366474 containerd[2016]: time="2026-03-07T00:56:00.366258216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.271872194s" Mar 7 00:56:00.366474 containerd[2016]: time="2026-03-07T00:56:00.366314796Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 7 00:56:00.370762 kubelet[3232]: I0307 00:56:00.370677 3232 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:56:00.376534 containerd[2016]: time="2026-03-07T00:56:00.376300584Z" level=info msg="CreateContainer within sandbox \"117d620ac3cf7c715fde839666f347358246a9363830b22610a4c7e07f6fa1e0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 00:56:00.405062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854389654.mount: Deactivated successfully. Mar 7 00:56:00.412994 containerd[2016]: time="2026-03-07T00:56:00.412822308Z" level=info msg="CreateContainer within sandbox \"117d620ac3cf7c715fde839666f347358246a9363830b22610a4c7e07f6fa1e0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9af04cb293a55112b078764d4ac527624a72505a1104caaa791c6650470214cc\"" Mar 7 00:56:00.413796 containerd[2016]: time="2026-03-07T00:56:00.413637144Z" level=info msg="StartContainer for \"9af04cb293a55112b078764d4ac527624a72505a1104caaa791c6650470214cc\"" Mar 7 00:56:00.439490 kubelet[3232]: E0307 00:56:00.435533 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:56:00.439490 kubelet[3232]: W0307 00:56:00.435569 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:56:00.439490 kubelet[3232]: E0307 00:56:00.435601 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:56:00.441488 kubelet[3232]: E0307 00:56:00.441145 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:56:00.441488 kubelet[3232]: W0307 00:56:00.441178 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:56:00.441488 kubelet[3232]: E0307 00:56:00.441327 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:56:00.447550 kubelet[3232]: E0307 00:56:00.444459 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:56:00.447550 kubelet[3232]: W0307 00:56:00.444490 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:56:00.447550 kubelet[3232]: E0307 00:56:00.444522 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:56:00.447550 kubelet[3232]: E0307 00:56:00.447353 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:56:00.447550 kubelet[3232]: W0307 00:56:00.447398 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:56:00.447550 kubelet[3232]: E0307 00:56:00.447437 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:56:00.447913 kubelet[3232]: E0307 00:56:00.447839 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:56:00.447913 kubelet[3232]: W0307 00:56:00.447857 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:56:00.447913 kubelet[3232]: E0307 00:56:00.447877 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:56:00.448480 kubelet[3232]: E0307 00:56:00.448428 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:56:00.448480 kubelet[3232]: W0307 00:56:00.448461 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:56:00.448626 kubelet[3232]: E0307 00:56:00.448489 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:56:00.449330 kubelet[3232]: E0307 00:56:00.449228 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:56:00.449330 kubelet[3232]: W0307 00:56:00.449262 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:56:00.449330 kubelet[3232]: E0307 00:56:00.449291 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:56:00.451708 kubelet[3232]: E0307 00:56:00.451554 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:56:00.451708 kubelet[3232]: W0307 00:56:00.451595 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:56:00.451708 kubelet[3232]: E0307 00:56:00.451634 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:56:00.453120 kubelet[3232]: E0307 00:56:00.452258 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:56:00.453120 kubelet[3232]: W0307 00:56:00.452283 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:56:00.453120 kubelet[3232]: E0307 00:56:00.452308 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:56:00.453120 kubelet[3232]: E0307 00:56:00.452804 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:56:00.453120 kubelet[3232]: W0307 00:56:00.452824 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:56:00.454006 kubelet[3232]: E0307 00:56:00.453570 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:56:00.455162 kubelet[3232]: E0307 00:56:00.455092 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:56:00.455557 kubelet[3232]: W0307 00:56:00.455127 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:56:00.455557 kubelet[3232]: E0307 00:56:00.455448 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:56:00.456915 kubelet[3232]: E0307 00:56:00.456510 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:56:00.456915 kubelet[3232]: W0307 00:56:00.456541 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:56:00.456915 kubelet[3232]: E0307 00:56:00.456571 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:56:00.458140 kubelet[3232]: E0307 00:56:00.457871 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:56:00.458140 kubelet[3232]: W0307 00:56:00.457902 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:56:00.458140 kubelet[3232]: E0307 00:56:00.457952 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 7 00:56:00.459607 kubelet[3232]: E0307 00:56:00.459423 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.459607 kubelet[3232]: W0307 00:56:00.459456 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.459607 kubelet[3232]: E0307 00:56:00.459487 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.460991 kubelet[3232]: E0307 00:56:00.460963 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.461108 kubelet[3232]: W0307 00:56:00.461083 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.461218 kubelet[3232]: E0307 00:56:00.461194 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.470692 kubelet[3232]: E0307 00:56:00.470636 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.471523 kubelet[3232]: W0307 00:56:00.471085 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.471523 kubelet[3232]: E0307 00:56:00.471170 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.476806 kubelet[3232]: E0307 00:56:00.475285 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.476806 kubelet[3232]: W0307 00:56:00.475317 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.476806 kubelet[3232]: E0307 00:56:00.475352 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.479632 kubelet[3232]: E0307 00:56:00.479346 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.479632 kubelet[3232]: W0307 00:56:00.479415 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.479632 kubelet[3232]: E0307 00:56:00.479451 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.479994 kubelet[3232]: E0307 00:56:00.479950 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.479994 kubelet[3232]: W0307 00:56:00.479985 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.480124 kubelet[3232]: E0307 00:56:00.480019 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.480595 kubelet[3232]: E0307 00:56:00.480489 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.480595 kubelet[3232]: W0307 00:56:00.480519 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.480595 kubelet[3232]: E0307 00:56:00.480544 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.480932 kubelet[3232]: E0307 00:56:00.480877 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.480932 kubelet[3232]: W0307 00:56:00.480904 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.481248 kubelet[3232]: E0307 00:56:00.480933 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.481856 kubelet[3232]: E0307 00:56:00.481781 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.481856 kubelet[3232]: W0307 00:56:00.481816 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.481856 kubelet[3232]: E0307 00:56:00.481848 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.485083 kubelet[3232]: E0307 00:56:00.484070 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.485083 kubelet[3232]: W0307 00:56:00.484110 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.485083 kubelet[3232]: E0307 00:56:00.484145 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.485083 kubelet[3232]: E0307 00:56:00.484658 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.485083 kubelet[3232]: W0307 00:56:00.484683 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.485083 kubelet[3232]: E0307 00:56:00.484716 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.487727 kubelet[3232]: E0307 00:56:00.487604 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.487727 kubelet[3232]: W0307 00:56:00.487645 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.487727 kubelet[3232]: E0307 00:56:00.487679 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.489412 kubelet[3232]: E0307 00:56:00.489030 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.489412 kubelet[3232]: W0307 00:56:00.489068 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.489412 kubelet[3232]: E0307 00:56:00.489101 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.489634 kubelet[3232]: E0307 00:56:00.489464 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.489634 kubelet[3232]: W0307 00:56:00.489483 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.489634 kubelet[3232]: E0307 00:56:00.489504 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.490726 kubelet[3232]: E0307 00:56:00.490682 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.490726 kubelet[3232]: W0307 00:56:00.490720 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.490957 kubelet[3232]: E0307 00:56:00.490755 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.491970 kubelet[3232]: E0307 00:56:00.491836 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.491970 kubelet[3232]: W0307 00:56:00.491874 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.491970 kubelet[3232]: E0307 00:56:00.491905 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.493804 kubelet[3232]: E0307 00:56:00.492915 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.493804 kubelet[3232]: W0307 00:56:00.492953 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.493804 kubelet[3232]: E0307 00:56:00.493009 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.493804 kubelet[3232]: E0307 00:56:00.493671 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.493804 kubelet[3232]: W0307 00:56:00.493721 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.493804 kubelet[3232]: E0307 00:56:00.493807 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.497515 kubelet[3232]: E0307 00:56:00.494532 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.497515 kubelet[3232]: W0307 00:56:00.494687 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.497515 kubelet[3232]: E0307 00:56:00.494720 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.497515 kubelet[3232]: E0307 00:56:00.495723 3232 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 00:56:00.497515 kubelet[3232]: W0307 00:56:00.495772 3232 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 00:56:00.497515 kubelet[3232]: E0307 00:56:00.495803 3232 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 00:56:00.495578 systemd[1]: Started cri-containerd-9af04cb293a55112b078764d4ac527624a72505a1104caaa791c6650470214cc.scope - libcontainer container 9af04cb293a55112b078764d4ac527624a72505a1104caaa791c6650470214cc.
Mar 7 00:56:00.565230 containerd[2016]: time="2026-03-07T00:56:00.564204901Z" level=info msg="StartContainer for \"9af04cb293a55112b078764d4ac527624a72505a1104caaa791c6650470214cc\" returns successfully"
Mar 7 00:56:00.599717 systemd[1]: cri-containerd-9af04cb293a55112b078764d4ac527624a72505a1104caaa791c6650470214cc.scope: Deactivated successfully.
Mar 7 00:56:01.084799 containerd[2016]: time="2026-03-07T00:56:01.084715668Z" level=info msg="shim disconnected" id=9af04cb293a55112b078764d4ac527624a72505a1104caaa791c6650470214cc namespace=k8s.io
Mar 7 00:56:01.085461 containerd[2016]: time="2026-03-07T00:56:01.085166568Z" level=warning msg="cleaning up after shim disconnected" id=9af04cb293a55112b078764d4ac527624a72505a1104caaa791c6650470214cc namespace=k8s.io
Mar 7 00:56:01.085461 containerd[2016]: time="2026-03-07T00:56:01.085195896Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:56:01.104215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9af04cb293a55112b078764d4ac527624a72505a1104caaa791c6650470214cc-rootfs.mount: Deactivated successfully.
Mar 7 00:56:01.111440 containerd[2016]: time="2026-03-07T00:56:01.111105060Z" level=warning msg="cleanup warnings time=\"2026-03-07T00:56:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 00:56:01.379828 containerd[2016]: time="2026-03-07T00:56:01.378482977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 7 00:56:01.419013 kubelet[3232]: I0307 00:56:01.416423 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66688cbc4b-vs7lf" podStartSLOduration=3.367282955 podStartE2EDuration="5.416357497s" podCreationTimestamp="2026-03-07 00:55:56 +0000 UTC" firstStartedPulling="2026-03-07 00:55:57.044959736 +0000 UTC m=+27.189823576" lastFinishedPulling="2026-03-07 00:55:59.094034206 +0000 UTC m=+29.238898118" observedRunningTime="2026-03-07 00:55:59.394305503 +0000 UTC m=+29.539169367" watchObservedRunningTime="2026-03-07 00:56:01.416357497 +0000 UTC m=+31.561221349"
Mar 7 00:56:02.139010 kubelet[3232]: E0307 00:56:02.138803 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbhbv" podUID="0b3c7869-094b-4d01-b5aa-efdf0e733f4f"
Mar 7 00:56:03.848747 kubelet[3232]: I0307 00:56:03.848694 3232 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 00:56:04.138538 kubelet[3232]: E0307 00:56:04.136984 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbhbv" podUID="0b3c7869-094b-4d01-b5aa-efdf0e733f4f"
Mar 7 00:56:06.138318 kubelet[3232]: E0307 00:56:06.138256 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbhbv" podUID="0b3c7869-094b-4d01-b5aa-efdf0e733f4f"
Mar 7 00:56:07.740919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2410051028.mount: Deactivated successfully.
Mar 7 00:56:07.803448 containerd[2016]: time="2026-03-07T00:56:07.802928145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:56:07.805554 containerd[2016]: time="2026-03-07T00:56:07.805276485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674"
Mar 7 00:56:07.807899 containerd[2016]: time="2026-03-07T00:56:07.807811761Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:56:07.814436 containerd[2016]: time="2026-03-07T00:56:07.814221009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:56:07.816456 containerd[2016]: time="2026-03-07T00:56:07.815732985Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.437145416s"
Mar 7 00:56:07.816456 containerd[2016]: time="2026-03-07T00:56:07.815798613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\""
Mar 7 00:56:07.825821 containerd[2016]: time="2026-03-07T00:56:07.825746577Z" level=info msg="CreateContainer within sandbox \"117d620ac3cf7c715fde839666f347358246a9363830b22610a4c7e07f6fa1e0\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 7 00:56:07.863136 containerd[2016]: time="2026-03-07T00:56:07.863056737Z" level=info msg="CreateContainer within sandbox \"117d620ac3cf7c715fde839666f347358246a9363830b22610a4c7e07f6fa1e0\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"66521365c43959e645575db0627a925a04b1b7fffe52942611e21f50fad8b93d\""
Mar 7 00:56:07.865370 containerd[2016]: time="2026-03-07T00:56:07.865292013Z" level=info msg="StartContainer for \"66521365c43959e645575db0627a925a04b1b7fffe52942611e21f50fad8b93d\""
Mar 7 00:56:07.925691 systemd[1]: Started cri-containerd-66521365c43959e645575db0627a925a04b1b7fffe52942611e21f50fad8b93d.scope - libcontainer container 66521365c43959e645575db0627a925a04b1b7fffe52942611e21f50fad8b93d.
Mar 7 00:56:07.985629 containerd[2016]: time="2026-03-07T00:56:07.985574074Z" level=info msg="StartContainer for \"66521365c43959e645575db0627a925a04b1b7fffe52942611e21f50fad8b93d\" returns successfully"
Mar 7 00:56:08.136684 kubelet[3232]: E0307 00:56:08.136420 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbhbv" podUID="0b3c7869-094b-4d01-b5aa-efdf0e733f4f"
Mar 7 00:56:08.183870 systemd[1]: cri-containerd-66521365c43959e645575db0627a925a04b1b7fffe52942611e21f50fad8b93d.scope: Deactivated successfully.
Mar 7 00:56:08.700790 containerd[2016]: time="2026-03-07T00:56:08.700705114Z" level=info msg="shim disconnected" id=66521365c43959e645575db0627a925a04b1b7fffe52942611e21f50fad8b93d namespace=k8s.io
Mar 7 00:56:08.700790 containerd[2016]: time="2026-03-07T00:56:08.700780990Z" level=warning msg="cleaning up after shim disconnected" id=66521365c43959e645575db0627a925a04b1b7fffe52942611e21f50fad8b93d namespace=k8s.io
Mar 7 00:56:08.701075 containerd[2016]: time="2026-03-07T00:56:08.700802818Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:56:08.740723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66521365c43959e645575db0627a925a04b1b7fffe52942611e21f50fad8b93d-rootfs.mount: Deactivated successfully.
Mar 7 00:56:09.409115 containerd[2016]: time="2026-03-07T00:56:09.408987921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 7 00:56:10.142143 kubelet[3232]: E0307 00:56:10.141634 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbhbv" podUID="0b3c7869-094b-4d01-b5aa-efdf0e733f4f"
Mar 7 00:56:12.138427 kubelet[3232]: E0307 00:56:12.138231 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbhbv" podUID="0b3c7869-094b-4d01-b5aa-efdf0e733f4f"
Mar 7 00:56:12.425456 containerd[2016]: time="2026-03-07T00:56:12.425286432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:56:12.427933 containerd[2016]: time="2026-03-07T00:56:12.427884816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216"
Mar 7 00:56:12.429747 containerd[2016]: time="2026-03-07T00:56:12.429702528Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:56:12.436515 containerd[2016]: time="2026-03-07T00:56:12.434944584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:56:12.436865 containerd[2016]: time="2026-03-07T00:56:12.436818540Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.027750003s"
Mar 7 00:56:12.436994 containerd[2016]: time="2026-03-07T00:56:12.436965300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\""
Mar 7 00:56:12.445967 containerd[2016]: time="2026-03-07T00:56:12.445892664Z" level=info msg="CreateContainer within sandbox \"117d620ac3cf7c715fde839666f347358246a9363830b22610a4c7e07f6fa1e0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 7 00:56:12.475945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011874277.mount: Deactivated successfully.
Mar 7 00:56:12.479634 containerd[2016]: time="2026-03-07T00:56:12.479579268Z" level=info msg="CreateContainer within sandbox \"117d620ac3cf7c715fde839666f347358246a9363830b22610a4c7e07f6fa1e0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"77391df3b7ee871ed67a00f7b01f59f2166e4c0d7b3fd5d5fe14e9fb7e53d3f3\""
Mar 7 00:56:12.481178 containerd[2016]: time="2026-03-07T00:56:12.481107828Z" level=info msg="StartContainer for \"77391df3b7ee871ed67a00f7b01f59f2166e4c0d7b3fd5d5fe14e9fb7e53d3f3\""
Mar 7 00:56:12.547006 systemd[1]: Started cri-containerd-77391df3b7ee871ed67a00f7b01f59f2166e4c0d7b3fd5d5fe14e9fb7e53d3f3.scope - libcontainer container 77391df3b7ee871ed67a00f7b01f59f2166e4c0d7b3fd5d5fe14e9fb7e53d3f3.
Mar 7 00:56:12.609912 containerd[2016]: time="2026-03-07T00:56:12.609444409Z" level=info msg="StartContainer for \"77391df3b7ee871ed67a00f7b01f59f2166e4c0d7b3fd5d5fe14e9fb7e53d3f3\" returns successfully"
Mar 7 00:56:14.139880 kubelet[3232]: E0307 00:56:14.139596 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbhbv" podUID="0b3c7869-094b-4d01-b5aa-efdf0e733f4f"
Mar 7 00:56:14.454969 containerd[2016]: time="2026-03-07T00:56:14.454584242Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 00:56:14.461424 kubelet[3232]: I0307 00:56:14.461344 3232 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 7 00:56:14.464669 systemd[1]: cri-containerd-77391df3b7ee871ed67a00f7b01f59f2166e4c0d7b3fd5d5fe14e9fb7e53d3f3.scope: Deactivated successfully.
Mar 7 00:56:14.465097 systemd[1]: cri-containerd-77391df3b7ee871ed67a00f7b01f59f2166e4c0d7b3fd5d5fe14e9fb7e53d3f3.scope: Consumed 1.016s CPU time.
Mar 7 00:56:14.535944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77391df3b7ee871ed67a00f7b01f59f2166e4c0d7b3fd5d5fe14e9fb7e53d3f3-rootfs.mount: Deactivated successfully.
Mar 7 00:56:14.549150 containerd[2016]: time="2026-03-07T00:56:14.548572839Z" level=info msg="shim disconnected" id=77391df3b7ee871ed67a00f7b01f59f2166e4c0d7b3fd5d5fe14e9fb7e53d3f3 namespace=k8s.io
Mar 7 00:56:14.549150 containerd[2016]: time="2026-03-07T00:56:14.548672247Z" level=warning msg="cleaning up after shim disconnected" id=77391df3b7ee871ed67a00f7b01f59f2166e4c0d7b3fd5d5fe14e9fb7e53d3f3 namespace=k8s.io
Mar 7 00:56:14.549150 containerd[2016]: time="2026-03-07T00:56:14.548692671Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:56:14.598813 systemd[1]: Created slice kubepods-burstable-poddb5a2b01_b80e_4ffd_95f8_409702863707.slice - libcontainer container kubepods-burstable-poddb5a2b01_b80e_4ffd_95f8_409702863707.slice.
Mar 7 00:56:14.602563 kubelet[3232]: I0307 00:56:14.599366 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7j9h\" (UniqueName: \"kubernetes.io/projected/db5a2b01-b80e-4ffd-95f8-409702863707-kube-api-access-l7j9h\") pod \"coredns-66bc5c9577-hlxj9\" (UID: \"db5a2b01-b80e-4ffd-95f8-409702863707\") " pod="kube-system/coredns-66bc5c9577-hlxj9"
Mar 7 00:56:14.604662 kubelet[3232]: I0307 00:56:14.604331 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db5a2b01-b80e-4ffd-95f8-409702863707-config-volume\") pod \"coredns-66bc5c9577-hlxj9\" (UID: \"db5a2b01-b80e-4ffd-95f8-409702863707\") " pod="kube-system/coredns-66bc5c9577-hlxj9"
Mar 7 00:56:14.650482 systemd[1]: Created slice kubepods-burstable-podead8cb66_a254_46cd_b1ed_0e08793150d4.slice - libcontainer container kubepods-burstable-podead8cb66_a254_46cd_b1ed_0e08793150d4.slice.
Mar 7 00:56:14.675331 systemd[1]: Created slice kubepods-besteffort-podc91a82c4_5f97_4c22_95be_166930ad0926.slice - libcontainer container kubepods-besteffort-podc91a82c4_5f97_4c22_95be_166930ad0926.slice.
Mar 7 00:56:14.698478 systemd[1]: Created slice kubepods-besteffort-podb162dfce_342e_4a97_8794_bfbba685c555.slice - libcontainer container kubepods-besteffort-podb162dfce_342e_4a97_8794_bfbba685c555.slice.
Mar 7 00:56:14.707465 kubelet[3232]: I0307 00:56:14.705683 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwsj6\" (UniqueName: \"kubernetes.io/projected/b162dfce-342e-4a97-8794-bfbba685c555-kube-api-access-nwsj6\") pod \"calico-apiserver-54c564cdd4-frsdt\" (UID: \"b162dfce-342e-4a97-8794-bfbba685c555\") " pod="calico-system/calico-apiserver-54c564cdd4-frsdt"
Mar 7 00:56:14.707718 kubelet[3232]: I0307 00:56:14.707683 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07a42616-368f-4a16-92ee-c16f0eba21ab-config\") pod \"goldmane-cccfbd5cf-qnkgm\" (UID: \"07a42616-368f-4a16-92ee-c16f0eba21ab\") " pod="calico-system/goldmane-cccfbd5cf-qnkgm"
Mar 7 00:56:14.707851 kubelet[3232]: I0307 00:56:14.707827 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/07a42616-368f-4a16-92ee-c16f0eba21ab-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-qnkgm\" (UID: \"07a42616-368f-4a16-92ee-c16f0eba21ab\") " pod="calico-system/goldmane-cccfbd5cf-qnkgm"
Mar 7 00:56:14.707981 kubelet[3232]: I0307 00:56:14.707957 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfl4h\" (UniqueName: \"kubernetes.io/projected/c91a82c4-5f97-4c22-95be-166930ad0926-kube-api-access-gfl4h\") pod \"calico-kube-controllers-5d8c4bf8bd-drgff\" (UID: \"c91a82c4-5f97-4c22-95be-166930ad0926\") " pod="calico-system/calico-kube-controllers-5d8c4bf8bd-drgff"
Mar 7 00:56:14.708175 kubelet[3232]: I0307 00:56:14.708091 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b162dfce-342e-4a97-8794-bfbba685c555-calico-apiserver-certs\") pod \"calico-apiserver-54c564cdd4-frsdt\" (UID: \"b162dfce-342e-4a97-8794-bfbba685c555\") " pod="calico-system/calico-apiserver-54c564cdd4-frsdt"
Mar 7 00:56:14.709520 kubelet[3232]: I0307 00:56:14.709470 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15380836-54c5-44a9-8a93-212d68a67553-whisker-backend-key-pair\") pod \"whisker-555b5549d8-sl8wr\" (UID: \"15380836-54c5-44a9-8a93-212d68a67553\") " pod="calico-system/whisker-555b5549d8-sl8wr"
Mar 7 00:56:14.711360 kubelet[3232]: I0307 00:56:14.709812 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15380836-54c5-44a9-8a93-212d68a67553-whisker-ca-bundle\") pod \"whisker-555b5549d8-sl8wr\" (UID: \"15380836-54c5-44a9-8a93-212d68a67553\") " pod="calico-system/whisker-555b5549d8-sl8wr"
Mar 7 00:56:14.711360 kubelet[3232]: I0307 00:56:14.709893 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9nl9\" (UniqueName: \"kubernetes.io/projected/ead8cb66-a254-46cd-b1ed-0e08793150d4-kube-api-access-w9nl9\") pod \"coredns-66bc5c9577-b8g2z\" (UID: \"ead8cb66-a254-46cd-b1ed-0e08793150d4\") " pod="kube-system/coredns-66bc5c9577-b8g2z"
Mar 7 00:56:14.711360 kubelet[3232]: I0307 00:56:14.710000 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds2ls\" (UniqueName: \"kubernetes.io/projected/07a42616-368f-4a16-92ee-c16f0eba21ab-kube-api-access-ds2ls\") pod \"goldmane-cccfbd5cf-qnkgm\" (UID: \"07a42616-368f-4a16-92ee-c16f0eba21ab\") " pod="calico-system/goldmane-cccfbd5cf-qnkgm"
Mar 7 00:56:14.711360 kubelet[3232]: I0307 00:56:14.710041 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c91a82c4-5f97-4c22-95be-166930ad0926-tigera-ca-bundle\") pod \"calico-kube-controllers-5d8c4bf8bd-drgff\" (UID: \"c91a82c4-5f97-4c22-95be-166930ad0926\") " pod="calico-system/calico-kube-controllers-5d8c4bf8bd-drgff"
Mar 7 00:56:14.711360 kubelet[3232]: I0307 00:56:14.710109 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ead8cb66-a254-46cd-b1ed-0e08793150d4-config-volume\") pod \"coredns-66bc5c9577-b8g2z\" (UID: \"ead8cb66-a254-46cd-b1ed-0e08793150d4\") " pod="kube-system/coredns-66bc5c9577-b8g2z"
Mar 7 00:56:14.711736 kubelet[3232]: I0307 00:56:14.710144 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/15380836-54c5-44a9-8a93-212d68a67553-nginx-config\") pod \"whisker-555b5549d8-sl8wr\" (UID: \"15380836-54c5-44a9-8a93-212d68a67553\") " pod="calico-system/whisker-555b5549d8-sl8wr"
Mar 7 00:56:14.711736 kubelet[3232]: I0307 00:56:14.710180 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07a42616-368f-4a16-92ee-c16f0eba21ab-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-qnkgm\" (UID: \"07a42616-368f-4a16-92ee-c16f0eba21ab\") " pod="calico-system/goldmane-cccfbd5cf-qnkgm"
Mar 7 00:56:14.711736 kubelet[3232]: I0307 00:56:14.710234 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7prq6\" (UniqueName: \"kubernetes.io/projected/15380836-54c5-44a9-8a93-212d68a67553-kube-api-access-7prq6\") pod \"whisker-555b5549d8-sl8wr\" (UID: \"15380836-54c5-44a9-8a93-212d68a67553\") " pod="calico-system/whisker-555b5549d8-sl8wr"
Mar 7 00:56:14.711736 kubelet[3232]: I0307 00:56:14.710271 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ee2a76da-5fba-4560-8907-48edf40e4afd-calico-apiserver-certs\") pod \"calico-apiserver-54c564cdd4-5c6xm\" (UID: \"ee2a76da-5fba-4560-8907-48edf40e4afd\") " pod="calico-system/calico-apiserver-54c564cdd4-5c6xm"
Mar 7 00:56:14.711736 kubelet[3232]: I0307 00:56:14.710311 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ztb\" (UniqueName: \"kubernetes.io/projected/ee2a76da-5fba-4560-8907-48edf40e4afd-kube-api-access-82ztb\") pod \"calico-apiserver-54c564cdd4-5c6xm\" (UID: \"ee2a76da-5fba-4560-8907-48edf40e4afd\") " pod="calico-system/calico-apiserver-54c564cdd4-5c6xm"
Mar 7 00:56:14.720151 systemd[1]: Created slice kubepods-besteffort-podee2a76da_5fba_4560_8907_48edf40e4afd.slice - libcontainer container kubepods-besteffort-podee2a76da_5fba_4560_8907_48edf40e4afd.slice.
Mar 7 00:56:14.739242 systemd[1]: Created slice kubepods-besteffort-pod15380836_54c5_44a9_8a93_212d68a67553.slice - libcontainer container kubepods-besteffort-pod15380836_54c5_44a9_8a93_212d68a67553.slice.
Mar 7 00:56:14.755745 systemd[1]: Created slice kubepods-besteffort-pod07a42616_368f_4a16_92ee_c16f0eba21ab.slice - libcontainer container kubepods-besteffort-pod07a42616_368f_4a16_92ee_c16f0eba21ab.slice.
Mar 7 00:56:14.947177 containerd[2016]: time="2026-03-07T00:56:14.947111861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hlxj9,Uid:db5a2b01-b80e-4ffd-95f8-409702863707,Namespace:kube-system,Attempt:0,}" Mar 7 00:56:14.965682 containerd[2016]: time="2026-03-07T00:56:14.965518385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b8g2z,Uid:ead8cb66-a254-46cd-b1ed-0e08793150d4,Namespace:kube-system,Attempt:0,}" Mar 7 00:56:14.998334 containerd[2016]: time="2026-03-07T00:56:14.998259497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8c4bf8bd-drgff,Uid:c91a82c4-5f97-4c22-95be-166930ad0926,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:15.013636 containerd[2016]: time="2026-03-07T00:56:15.013495381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c564cdd4-frsdt,Uid:b162dfce-342e-4a97-8794-bfbba685c555,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:15.032107 containerd[2016]: time="2026-03-07T00:56:15.031935097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c564cdd4-5c6xm,Uid:ee2a76da-5fba-4560-8907-48edf40e4afd,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:15.053101 containerd[2016]: time="2026-03-07T00:56:15.052493989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-555b5549d8-sl8wr,Uid:15380836-54c5-44a9-8a93-212d68a67553,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:15.085093 containerd[2016]: time="2026-03-07T00:56:15.084936877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-qnkgm,Uid:07a42616-368f-4a16-92ee-c16f0eba21ab,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:15.372850 containerd[2016]: time="2026-03-07T00:56:15.372633039Z" level=error msg="Failed to destroy network for sandbox \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.374141 containerd[2016]: time="2026-03-07T00:56:15.373335699Z" level=error msg="encountered an error cleaning up failed sandbox \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.374141 containerd[2016]: time="2026-03-07T00:56:15.373454127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b8g2z,Uid:ead8cb66-a254-46cd-b1ed-0e08793150d4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.374368 kubelet[3232]: E0307 00:56:15.373767 3232 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.374368 kubelet[3232]: E0307 00:56:15.373873 3232 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-b8g2z" Mar 7 00:56:15.374368 kubelet[3232]: E0307 00:56:15.373911 3232 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-b8g2z" Mar 7 00:56:15.377420 kubelet[3232]: E0307 00:56:15.373999 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-b8g2z_kube-system(ead8cb66-a254-46cd-b1ed-0e08793150d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-b8g2z_kube-system(ead8cb66-a254-46cd-b1ed-0e08793150d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-b8g2z" podUID="ead8cb66-a254-46cd-b1ed-0e08793150d4" Mar 7 00:56:15.434775 containerd[2016]: time="2026-03-07T00:56:15.434598363Z" level=error msg="Failed to destroy network for sandbox \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.441138 containerd[2016]: time="2026-03-07T00:56:15.440877015Z" level=error msg="encountered an error cleaning up failed sandbox \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.441138 containerd[2016]: time="2026-03-07T00:56:15.440997231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c564cdd4-5c6xm,Uid:ee2a76da-5fba-4560-8907-48edf40e4afd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.442183 kubelet[3232]: E0307 00:56:15.441620 3232 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.442183 kubelet[3232]: E0307 00:56:15.441687 3232 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-54c564cdd4-5c6xm" Mar 7 00:56:15.442183 kubelet[3232]: E0307 00:56:15.441719 3232 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-54c564cdd4-5c6xm" Mar 7 00:56:15.442438 kubelet[3232]: E0307 00:56:15.441810 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54c564cdd4-5c6xm_calico-system(ee2a76da-5fba-4560-8907-48edf40e4afd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54c564cdd4-5c6xm_calico-system(ee2a76da-5fba-4560-8907-48edf40e4afd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-54c564cdd4-5c6xm" podUID="ee2a76da-5fba-4560-8907-48edf40e4afd" Mar 7 00:56:15.483651 kubelet[3232]: I0307 00:56:15.481739 3232 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:15.489979 containerd[2016]: time="2026-03-07T00:56:15.489912639Z" level=info msg="StopPodSandbox for \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\"" Mar 7 00:56:15.493939 containerd[2016]: time="2026-03-07T00:56:15.490241031Z" level=info msg="Ensure that sandbox bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e in task-service has been cleanup successfully" Mar 7 00:56:15.503723 containerd[2016]: time="2026-03-07T00:56:15.503645595Z" level=error msg="Failed to destroy network for sandbox \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Mar 7 00:56:15.513040 containerd[2016]: time="2026-03-07T00:56:15.512955387Z" level=error msg="encountered an error cleaning up failed sandbox \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.513186 containerd[2016]: time="2026-03-07T00:56:15.513068487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8c4bf8bd-drgff,Uid:c91a82c4-5f97-4c22-95be-166930ad0926,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.515097 kubelet[3232]: E0307 00:56:15.513544 3232 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.515097 kubelet[3232]: E0307 00:56:15.513625 3232 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d8c4bf8bd-drgff" Mar 7 
00:56:15.515097 kubelet[3232]: E0307 00:56:15.513658 3232 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d8c4bf8bd-drgff" Mar 7 00:56:15.515494 kubelet[3232]: E0307 00:56:15.513738 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d8c4bf8bd-drgff_calico-system(c91a82c4-5f97-4c22-95be-166930ad0926)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d8c4bf8bd-drgff_calico-system(c91a82c4-5f97-4c22-95be-166930ad0926)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d8c4bf8bd-drgff" podUID="c91a82c4-5f97-4c22-95be-166930ad0926" Mar 7 00:56:15.520864 containerd[2016]: time="2026-03-07T00:56:15.519025503Z" level=info msg="CreateContainer within sandbox \"117d620ac3cf7c715fde839666f347358246a9363830b22610a4c7e07f6fa1e0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 00:56:15.524795 containerd[2016]: time="2026-03-07T00:56:15.524281347Z" level=error msg="Failed to destroy network for sandbox \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Mar 7 00:56:15.529233 containerd[2016]: time="2026-03-07T00:56:15.526834491Z" level=error msg="encountered an error cleaning up failed sandbox \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.529233 containerd[2016]: time="2026-03-07T00:56:15.526939683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hlxj9,Uid:db5a2b01-b80e-4ffd-95f8-409702863707,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.532461 kubelet[3232]: E0307 00:56:15.529976 3232 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.532461 kubelet[3232]: E0307 00:56:15.530071 3232 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hlxj9" Mar 7 00:56:15.532461 kubelet[3232]: E0307 00:56:15.530104 3232 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hlxj9" Mar 7 00:56:15.532730 kubelet[3232]: E0307 00:56:15.530181 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-hlxj9_kube-system(db5a2b01-b80e-4ffd-95f8-409702863707)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-hlxj9_kube-system(db5a2b01-b80e-4ffd-95f8-409702863707)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-hlxj9" podUID="db5a2b01-b80e-4ffd-95f8-409702863707" Mar 7 00:56:15.634370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2350796684.mount: Deactivated successfully. 
Mar 7 00:56:15.649315 containerd[2016]: time="2026-03-07T00:56:15.649252072Z" level=info msg="CreateContainer within sandbox \"117d620ac3cf7c715fde839666f347358246a9363830b22610a4c7e07f6fa1e0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"36b395a0335cd8963638ad4a50605d0f04bc57c77935749772c9bf6c35c8aeda\"" Mar 7 00:56:15.652631 containerd[2016]: time="2026-03-07T00:56:15.651995308Z" level=info msg="StartContainer for \"36b395a0335cd8963638ad4a50605d0f04bc57c77935749772c9bf6c35c8aeda\"" Mar 7 00:56:15.679646 containerd[2016]: time="2026-03-07T00:56:15.678534352Z" level=error msg="Failed to destroy network for sandbox \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.690578 containerd[2016]: time="2026-03-07T00:56:15.687416956Z" level=error msg="encountered an error cleaning up failed sandbox \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.690578 containerd[2016]: time="2026-03-07T00:56:15.687529456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-qnkgm,Uid:07a42616-368f-4a16-92ee-c16f0eba21ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.690830 kubelet[3232]: E0307 00:56:15.690292 3232 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.690830 kubelet[3232]: E0307 00:56:15.690434 3232 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-qnkgm" Mar 7 00:56:15.690830 kubelet[3232]: E0307 00:56:15.690472 3232 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-qnkgm" Mar 7 00:56:15.688463 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499-shm.mount: Deactivated successfully. 
Mar 7 00:56:15.697105 kubelet[3232]: E0307 00:56:15.693089 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-qnkgm_calico-system(07a42616-368f-4a16-92ee-c16f0eba21ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-qnkgm_calico-system(07a42616-368f-4a16-92ee-c16f0eba21ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-qnkgm" podUID="07a42616-368f-4a16-92ee-c16f0eba21ab" Mar 7 00:56:15.702692 containerd[2016]: time="2026-03-07T00:56:15.702613084Z" level=error msg="Failed to destroy network for sandbox \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.709149 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458-shm.mount: Deactivated successfully. 
Mar 7 00:56:15.712659 containerd[2016]: time="2026-03-07T00:56:15.712575964Z" level=error msg="encountered an error cleaning up failed sandbox \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.716193 containerd[2016]: time="2026-03-07T00:56:15.712689628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-555b5549d8-sl8wr,Uid:15380836-54c5-44a9-8a93-212d68a67553,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.716450 kubelet[3232]: E0307 00:56:15.713193 3232 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.716450 kubelet[3232]: E0307 00:56:15.713281 3232 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-555b5549d8-sl8wr" Mar 7 00:56:15.716450 kubelet[3232]: E0307 00:56:15.713315 3232 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-555b5549d8-sl8wr" Mar 7 00:56:15.716659 kubelet[3232]: E0307 00:56:15.713415 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-555b5549d8-sl8wr_calico-system(15380836-54c5-44a9-8a93-212d68a67553)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-555b5549d8-sl8wr_calico-system(15380836-54c5-44a9-8a93-212d68a67553)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-555b5549d8-sl8wr" podUID="15380836-54c5-44a9-8a93-212d68a67553" Mar 7 00:56:15.739774 containerd[2016]: time="2026-03-07T00:56:15.739708001Z" level=error msg="StopPodSandbox for \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\" failed" error="failed to destroy network for sandbox \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.740523 kubelet[3232]: E0307 00:56:15.740231 3232 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:15.740523 kubelet[3232]: E0307 00:56:15.740321 3232 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e"} Mar 7 00:56:15.740523 kubelet[3232]: E0307 00:56:15.740421 3232 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ead8cb66-a254-46cd-b1ed-0e08793150d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 00:56:15.740523 kubelet[3232]: E0307 00:56:15.740468 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ead8cb66-a254-46cd-b1ed-0e08793150d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-b8g2z" podUID="ead8cb66-a254-46cd-b1ed-0e08793150d4" Mar 7 00:56:15.746998 containerd[2016]: time="2026-03-07T00:56:15.746850401Z" level=error msg="Failed to destroy network for sandbox \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.747606 containerd[2016]: time="2026-03-07T00:56:15.747541901Z" level=error msg="encountered an error cleaning up failed sandbox \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.747703 containerd[2016]: time="2026-03-07T00:56:15.747650249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c564cdd4-frsdt,Uid:b162dfce-342e-4a97-8794-bfbba685c555,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.749086 kubelet[3232]: E0307 00:56:15.748025 3232 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:15.749086 kubelet[3232]: E0307 00:56:15.748459 3232 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-54c564cdd4-frsdt" Mar 7 00:56:15.749086 kubelet[3232]: E0307 00:56:15.748881 3232 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-54c564cdd4-frsdt" Mar 7 00:56:15.750351 kubelet[3232]: E0307 00:56:15.749041 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54c564cdd4-frsdt_calico-system(b162dfce-342e-4a97-8794-bfbba685c555)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54c564cdd4-frsdt_calico-system(b162dfce-342e-4a97-8794-bfbba685c555)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-54c564cdd4-frsdt" podUID="b162dfce-342e-4a97-8794-bfbba685c555" Mar 7 00:56:15.771239 systemd[1]: Started cri-containerd-36b395a0335cd8963638ad4a50605d0f04bc57c77935749772c9bf6c35c8aeda.scope - libcontainer container 36b395a0335cd8963638ad4a50605d0f04bc57c77935749772c9bf6c35c8aeda. 
Mar 7 00:56:15.833723 containerd[2016]: time="2026-03-07T00:56:15.833562809Z" level=info msg="StartContainer for \"36b395a0335cd8963638ad4a50605d0f04bc57c77935749772c9bf6c35c8aeda\" returns successfully" Mar 7 00:56:16.168050 systemd[1]: Created slice kubepods-besteffort-pod0b3c7869_094b_4d01_b5aa_efdf0e733f4f.slice - libcontainer container kubepods-besteffort-pod0b3c7869_094b_4d01_b5aa_efdf0e733f4f.slice. Mar 7 00:56:16.189239 containerd[2016]: time="2026-03-07T00:56:16.189184503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fbhbv,Uid:0b3c7869-094b-4d01-b5aa-efdf0e733f4f,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:16.491887 kubelet[3232]: I0307 00:56:16.491617 3232 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Mar 7 00:56:16.495983 containerd[2016]: time="2026-03-07T00:56:16.495137452Z" level=info msg="StopPodSandbox for \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\"" Mar 7 00:56:16.499293 containerd[2016]: time="2026-03-07T00:56:16.498879100Z" level=info msg="Ensure that sandbox b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499 in task-service has been cleanup successfully" Mar 7 00:56:16.502877 kubelet[3232]: I0307 00:56:16.500688 3232 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Mar 7 00:56:16.505116 containerd[2016]: time="2026-03-07T00:56:16.504967060Z" level=info msg="StopPodSandbox for \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\"" Mar 7 00:56:16.505522 containerd[2016]: time="2026-03-07T00:56:16.505459696Z" level=info msg="Ensure that sandbox 65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458 in task-service has been cleanup successfully" Mar 7 00:56:16.519808 kubelet[3232]: I0307 00:56:16.519708 3232 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Mar 7 00:56:16.526683 containerd[2016]: time="2026-03-07T00:56:16.526613656Z" level=info msg="StopPodSandbox for \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\"" Mar 7 00:56:16.526961 containerd[2016]: time="2026-03-07T00:56:16.526915696Z" level=info msg="Ensure that sandbox 9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76 in task-service has been cleanup successfully" Mar 7 00:56:16.529820 kubelet[3232]: I0307 00:56:16.529527 3232 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Mar 7 00:56:16.546959 containerd[2016]: time="2026-03-07T00:56:16.543317213Z" level=info msg="StopPodSandbox for \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\"" Mar 7 00:56:16.546959 containerd[2016]: time="2026-03-07T00:56:16.543634277Z" level=info msg="Ensure that sandbox b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60 in task-service has been cleanup successfully" Mar 7 00:56:16.578438 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60-shm.mount: Deactivated successfully. 
Mar 7 00:56:16.590632 kubelet[3232]: I0307 00:56:16.588189 3232 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Mar 7 00:56:16.590790 containerd[2016]: time="2026-03-07T00:56:16.590579393Z" level=info msg="StopPodSandbox for \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\"" Mar 7 00:56:16.593809 containerd[2016]: time="2026-03-07T00:56:16.590879513Z" level=info msg="Ensure that sandbox 7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48 in task-service has been cleanup successfully" Mar 7 00:56:16.608406 kubelet[3232]: I0307 00:56:16.607320 3232 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Mar 7 00:56:16.619405 containerd[2016]: time="2026-03-07T00:56:16.617498873Z" level=info msg="StopPodSandbox for \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\"" Mar 7 00:56:16.626985 containerd[2016]: time="2026-03-07T00:56:16.626796821Z" level=info msg="Ensure that sandbox 230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94 in task-service has been cleanup successfully" Mar 7 00:56:16.764508 systemd-networkd[1932]: caliac8d5653ce6: Link UP Mar 7 00:56:16.766989 systemd-networkd[1932]: caliac8d5653ce6: Gained carrier Mar 7 00:56:16.774514 (udev-worker)[4749]: Network interface NamePolicy= disabled on kernel command line. 
Mar 7 00:56:16.890507 kubelet[3232]: I0307 00:56:16.888369 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g8cdd" podStartSLOduration=5.800628783 podStartE2EDuration="20.888345702s" podCreationTimestamp="2026-03-07 00:55:56 +0000 UTC" firstStartedPulling="2026-03-07 00:55:57.350499609 +0000 UTC m=+27.495363437" lastFinishedPulling="2026-03-07 00:56:12.438216528 +0000 UTC m=+42.583080356" observedRunningTime="2026-03-07 00:56:16.718853297 +0000 UTC m=+46.863717161" watchObservedRunningTime="2026-03-07 00:56:16.888345702 +0000 UTC m=+47.033209530" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.257 [ERROR][4622] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.331 [INFO][4622] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-csi--node--driver--fbhbv-eth0 csi-node-driver- calico-system 0b3c7869-094b-4d01-b5aa-efdf0e733f4f 753 0 2026-03-07 00:55:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-17-228 csi-node-driver-fbhbv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliac8d5653ce6 [] [] }} ContainerID="d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" Namespace="calico-system" Pod="csi-node-driver-fbhbv" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--fbhbv-" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.331 [INFO][4622] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" Namespace="calico-system" Pod="csi-node-driver-fbhbv" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--fbhbv-eth0" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.447 [INFO][4634] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" HandleID="k8s-pod-network.d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" Workload="ip--172--31--17--228-k8s-csi--node--driver--fbhbv-eth0" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.467 [INFO][4634] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" HandleID="k8s-pod-network.d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" Workload="ip--172--31--17--228-k8s-csi--node--driver--fbhbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-228", "pod":"csi-node-driver-fbhbv", "timestamp":"2026-03-07 00:56:16.447498892 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003acf20)} Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.467 [INFO][4634] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.467 [INFO][4634] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.467 [INFO][4634] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.475 [INFO][4634] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" host="ip-172-31-17-228" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.486 [INFO][4634] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-228" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.512 [INFO][4634] ipam/ipam.go 526: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.532 [INFO][4634] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.561 [INFO][4634] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.561 [INFO][4634] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" host="ip-172-31-17-228" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.574 [INFO][4634] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.598 [INFO][4634] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" host="ip-172-31-17-228" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.617 [INFO][4634] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.52.129/26] block=192.168.52.128/26 
handle="k8s-pod-network.d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" host="ip-172-31-17-228" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.620 [INFO][4634] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.129/26] handle="k8s-pod-network.d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" host="ip-172-31-17-228" Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.623 [INFO][4634] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:16.943403 containerd[2016]: 2026-03-07 00:56:16.625 [INFO][4634] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.129/26] IPv6=[] ContainerID="d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" HandleID="k8s-pod-network.d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" Workload="ip--172--31--17--228-k8s-csi--node--driver--fbhbv-eth0" Mar 7 00:56:16.944716 containerd[2016]: 2026-03-07 00:56:16.740 [INFO][4622] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" Namespace="calico-system" Pod="csi-node-driver-fbhbv" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--fbhbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-csi--node--driver--fbhbv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b3c7869-094b-4d01-b5aa-efdf0e733f4f", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"csi-node-driver-fbhbv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac8d5653ce6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:16.944716 containerd[2016]: 2026-03-07 00:56:16.740 [INFO][4622] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.129/32] ContainerID="d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" Namespace="calico-system" Pod="csi-node-driver-fbhbv" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--fbhbv-eth0" Mar 7 00:56:16.944716 containerd[2016]: 2026-03-07 00:56:16.740 [INFO][4622] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac8d5653ce6 ContainerID="d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" Namespace="calico-system" Pod="csi-node-driver-fbhbv" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--fbhbv-eth0" Mar 7 00:56:16.944716 containerd[2016]: 2026-03-07 00:56:16.803 [INFO][4622] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" Namespace="calico-system" Pod="csi-node-driver-fbhbv" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--fbhbv-eth0" Mar 7 00:56:16.944716 containerd[2016]: 2026-03-07 00:56:16.851 [INFO][4622] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" Namespace="calico-system" Pod="csi-node-driver-fbhbv" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--fbhbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-csi--node--driver--fbhbv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b3c7869-094b-4d01-b5aa-efdf0e733f4f", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc", Pod:"csi-node-driver-fbhbv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac8d5653ce6", MAC:"3a:23:b8:c5:ff:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:16.944716 containerd[2016]: 2026-03-07 00:56:16.902 [INFO][4622] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc" Namespace="calico-system" Pod="csi-node-driver-fbhbv" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--fbhbv-eth0" Mar 7 00:56:17.173203 containerd[2016]: time="2026-03-07T00:56:17.171339124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:17.173203 containerd[2016]: time="2026-03-07T00:56:17.171608104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:17.173203 containerd[2016]: time="2026-03-07T00:56:17.171650668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:17.177070 containerd[2016]: time="2026-03-07T00:56:17.172547512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:17.261313 systemd[1]: Started cri-containerd-d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc.scope - libcontainer container d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc. Mar 7 00:56:17.356328 containerd[2016]: 2026-03-07 00:56:16.875 [INFO][4700] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Mar 7 00:56:17.356328 containerd[2016]: 2026-03-07 00:56:16.878 [INFO][4700] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" iface="eth0" netns="/var/run/netns/cni-90d18b93-3cce-fbdc-bff4-003f7939e521" Mar 7 00:56:17.356328 containerd[2016]: 2026-03-07 00:56:16.882 [INFO][4700] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" iface="eth0" netns="/var/run/netns/cni-90d18b93-3cce-fbdc-bff4-003f7939e521" Mar 7 00:56:17.356328 containerd[2016]: 2026-03-07 00:56:16.895 [INFO][4700] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" iface="eth0" netns="/var/run/netns/cni-90d18b93-3cce-fbdc-bff4-003f7939e521" Mar 7 00:56:17.356328 containerd[2016]: 2026-03-07 00:56:16.896 [INFO][4700] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Mar 7 00:56:17.356328 containerd[2016]: 2026-03-07 00:56:16.896 [INFO][4700] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Mar 7 00:56:17.356328 containerd[2016]: 2026-03-07 00:56:17.230 [INFO][4760] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" HandleID="k8s-pod-network.65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Workload="ip--172--31--17--228-k8s-whisker--555b5549d8--sl8wr-eth0" Mar 7 00:56:17.356328 containerd[2016]: 2026-03-07 00:56:17.231 [INFO][4760] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:17.356328 containerd[2016]: 2026-03-07 00:56:17.231 [INFO][4760] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:17.356328 containerd[2016]: 2026-03-07 00:56:17.291 [WARNING][4760] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" HandleID="k8s-pod-network.65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Workload="ip--172--31--17--228-k8s-whisker--555b5549d8--sl8wr-eth0" Mar 7 00:56:17.356328 containerd[2016]: 2026-03-07 00:56:17.291 [INFO][4760] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" HandleID="k8s-pod-network.65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Workload="ip--172--31--17--228-k8s-whisker--555b5549d8--sl8wr-eth0" Mar 7 00:56:17.356328 containerd[2016]: 2026-03-07 00:56:17.329 [INFO][4760] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:17.356328 containerd[2016]: 2026-03-07 00:56:17.343 [INFO][4700] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Mar 7 00:56:17.362129 containerd[2016]: time="2026-03-07T00:56:17.357650405Z" level=info msg="TearDown network for sandbox \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\" successfully" Mar 7 00:56:17.362129 containerd[2016]: time="2026-03-07T00:56:17.357711269Z" level=info msg="StopPodSandbox for \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\" returns successfully" Mar 7 00:56:17.366828 systemd[1]: run-netns-cni\x2d90d18b93\x2d3cce\x2dfbdc\x2dbff4\x2d003f7939e521.mount: Deactivated successfully. 
Mar 7 00:56:17.437146 containerd[2016]: time="2026-03-07T00:56:17.437001389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fbhbv,Uid:0b3c7869-094b-4d01-b5aa-efdf0e733f4f,Namespace:calico-system,Attempt:0,} returns sandbox id \"d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc\"" Mar 7 00:56:17.440292 kubelet[3232]: I0307 00:56:17.439607 3232 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15380836-54c5-44a9-8a93-212d68a67553-whisker-ca-bundle\") pod \"15380836-54c5-44a9-8a93-212d68a67553\" (UID: \"15380836-54c5-44a9-8a93-212d68a67553\") " Mar 7 00:56:17.440292 kubelet[3232]: I0307 00:56:17.439684 3232 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15380836-54c5-44a9-8a93-212d68a67553-whisker-backend-key-pair\") pod \"15380836-54c5-44a9-8a93-212d68a67553\" (UID: \"15380836-54c5-44a9-8a93-212d68a67553\") " Mar 7 00:56:17.440292 kubelet[3232]: I0307 00:56:17.439729 3232 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/15380836-54c5-44a9-8a93-212d68a67553-nginx-config\") pod \"15380836-54c5-44a9-8a93-212d68a67553\" (UID: \"15380836-54c5-44a9-8a93-212d68a67553\") " Mar 7 00:56:17.440292 kubelet[3232]: I0307 00:56:17.439768 3232 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7prq6\" (UniqueName: \"kubernetes.io/projected/15380836-54c5-44a9-8a93-212d68a67553-kube-api-access-7prq6\") pod \"15380836-54c5-44a9-8a93-212d68a67553\" (UID: \"15380836-54c5-44a9-8a93-212d68a67553\") " Mar 7 00:56:17.440666 kubelet[3232]: I0307 00:56:17.440321 3232 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15380836-54c5-44a9-8a93-212d68a67553-whisker-ca-bundle" (OuterVolumeSpecName: 
"whisker-ca-bundle") pod "15380836-54c5-44a9-8a93-212d68a67553" (UID: "15380836-54c5-44a9-8a93-212d68a67553"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 00:56:17.446750 kubelet[3232]: I0307 00:56:17.446185 3232 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15380836-54c5-44a9-8a93-212d68a67553-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "15380836-54c5-44a9-8a93-212d68a67553" (UID: "15380836-54c5-44a9-8a93-212d68a67553"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 00:56:17.449867 containerd[2016]: time="2026-03-07T00:56:17.449690729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 00:56:17.455438 kubelet[3232]: I0307 00:56:17.455170 3232 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15380836-54c5-44a9-8a93-212d68a67553-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "15380836-54c5-44a9-8a93-212d68a67553" (UID: "15380836-54c5-44a9-8a93-212d68a67553"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 00:56:17.459597 kubelet[3232]: I0307 00:56:17.458233 3232 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15380836-54c5-44a9-8a93-212d68a67553-kube-api-access-7prq6" (OuterVolumeSpecName: "kube-api-access-7prq6") pod "15380836-54c5-44a9-8a93-212d68a67553" (UID: "15380836-54c5-44a9-8a93-212d68a67553"). InnerVolumeSpecName "kube-api-access-7prq6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 00:56:17.536850 systemd[1]: var-lib-kubelet-pods-15380836\x2d54c5\x2d44a9\x2d8a93\x2d212d68a67553-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7prq6.mount: Deactivated successfully. 
Mar 7 00:56:17.537061 systemd[1]: var-lib-kubelet-pods-15380836\x2d54c5\x2d44a9\x2d8a93\x2d212d68a67553-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 00:56:17.539966 containerd[2016]: 2026-03-07 00:56:17.161 [INFO][4735] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Mar 7 00:56:17.539966 containerd[2016]: 2026-03-07 00:56:17.165 [INFO][4735] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" iface="eth0" netns="/var/run/netns/cni-b7f980d9-ca44-4229-0868-f2f911bcec6e" Mar 7 00:56:17.539966 containerd[2016]: 2026-03-07 00:56:17.167 [INFO][4735] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" iface="eth0" netns="/var/run/netns/cni-b7f980d9-ca44-4229-0868-f2f911bcec6e" Mar 7 00:56:17.539966 containerd[2016]: 2026-03-07 00:56:17.168 [INFO][4735] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" iface="eth0" netns="/var/run/netns/cni-b7f980d9-ca44-4229-0868-f2f911bcec6e" Mar 7 00:56:17.539966 containerd[2016]: 2026-03-07 00:56:17.173 [INFO][4735] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Mar 7 00:56:17.539966 containerd[2016]: 2026-03-07 00:56:17.173 [INFO][4735] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Mar 7 00:56:17.539966 containerd[2016]: 2026-03-07 00:56:17.464 [INFO][4800] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" HandleID="k8s-pod-network.230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:17.539966 containerd[2016]: 2026-03-07 00:56:17.464 [INFO][4800] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:17.539966 containerd[2016]: 2026-03-07 00:56:17.464 [INFO][4800] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:17.539966 containerd[2016]: 2026-03-07 00:56:17.508 [WARNING][4800] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" HandleID="k8s-pod-network.230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:17.539966 containerd[2016]: 2026-03-07 00:56:17.508 [INFO][4800] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" HandleID="k8s-pod-network.230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:17.539966 containerd[2016]: 2026-03-07 00:56:17.512 [INFO][4800] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:17.539966 containerd[2016]: 2026-03-07 00:56:17.519 [INFO][4735] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Mar 7 00:56:17.542327 kubelet[3232]: I0307 00:56:17.541645 3232 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15380836-54c5-44a9-8a93-212d68a67553-whisker-backend-key-pair\") on node \"ip-172-31-17-228\" DevicePath \"\"" Mar 7 00:56:17.542327 kubelet[3232]: I0307 00:56:17.541686 3232 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/15380836-54c5-44a9-8a93-212d68a67553-nginx-config\") on node \"ip-172-31-17-228\" DevicePath \"\"" Mar 7 00:56:17.542327 kubelet[3232]: I0307 00:56:17.541711 3232 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7prq6\" (UniqueName: \"kubernetes.io/projected/15380836-54c5-44a9-8a93-212d68a67553-kube-api-access-7prq6\") on node \"ip-172-31-17-228\" DevicePath \"\"" Mar 7 00:56:17.542327 kubelet[3232]: I0307 00:56:17.541735 3232 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/15380836-54c5-44a9-8a93-212d68a67553-whisker-ca-bundle\") on node \"ip-172-31-17-228\" DevicePath \"\"" Mar 7 00:56:17.546571 systemd[1]: run-netns-cni\x2db7f980d9\x2dca44\x2d4229\x2d0868\x2df2f911bcec6e.mount: Deactivated successfully. Mar 7 00:56:17.548222 containerd[2016]: time="2026-03-07T00:56:17.547623870Z" level=info msg="TearDown network for sandbox \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\" successfully" Mar 7 00:56:17.548222 containerd[2016]: time="2026-03-07T00:56:17.547689246Z" level=info msg="StopPodSandbox for \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\" returns successfully" Mar 7 00:56:17.569399 containerd[2016]: time="2026-03-07T00:56:17.568954938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hlxj9,Uid:db5a2b01-b80e-4ffd-95f8-409702863707,Namespace:kube-system,Attempt:1,}" Mar 7 00:56:17.575766 containerd[2016]: 2026-03-07 00:56:17.179 [INFO][4709] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Mar 7 00:56:17.575766 containerd[2016]: 2026-03-07 00:56:17.192 [INFO][4709] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" iface="eth0" netns="/var/run/netns/cni-20b13463-c52d-c666-c4fa-d38bd518f8d0" Mar 7 00:56:17.575766 containerd[2016]: 2026-03-07 00:56:17.194 [INFO][4709] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" iface="eth0" netns="/var/run/netns/cni-20b13463-c52d-c666-c4fa-d38bd518f8d0" Mar 7 00:56:17.575766 containerd[2016]: 2026-03-07 00:56:17.197 [INFO][4709] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" iface="eth0" netns="/var/run/netns/cni-20b13463-c52d-c666-c4fa-d38bd518f8d0" Mar 7 00:56:17.575766 containerd[2016]: 2026-03-07 00:56:17.198 [INFO][4709] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Mar 7 00:56:17.575766 containerd[2016]: 2026-03-07 00:56:17.199 [INFO][4709] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Mar 7 00:56:17.575766 containerd[2016]: 2026-03-07 00:56:17.513 [INFO][4808] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" HandleID="k8s-pod-network.b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:17.575766 containerd[2016]: 2026-03-07 00:56:17.513 [INFO][4808] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:17.575766 containerd[2016]: 2026-03-07 00:56:17.513 [INFO][4808] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:17.575766 containerd[2016]: 2026-03-07 00:56:17.544 [WARNING][4808] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" HandleID="k8s-pod-network.b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:17.575766 containerd[2016]: 2026-03-07 00:56:17.544 [INFO][4808] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" HandleID="k8s-pod-network.b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:17.575766 containerd[2016]: 2026-03-07 00:56:17.552 [INFO][4808] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:17.575766 containerd[2016]: 2026-03-07 00:56:17.561 [INFO][4709] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Mar 7 00:56:17.580655 containerd[2016]: time="2026-03-07T00:56:17.579126198Z" level=info msg="TearDown network for sandbox \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\" successfully" Mar 7 00:56:17.581170 containerd[2016]: time="2026-03-07T00:56:17.580639878Z" level=info msg="StopPodSandbox for \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\" returns successfully" Mar 7 00:56:17.582631 systemd[1]: run-netns-cni\x2d20b13463\x2dc52d\x2dc666\x2dc4fa\x2dd38bd518f8d0.mount: Deactivated successfully. 
Mar 7 00:56:17.598606 containerd[2016]: time="2026-03-07T00:56:17.598238214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c564cdd4-frsdt,Uid:b162dfce-342e-4a97-8794-bfbba685c555,Namespace:calico-system,Attempt:1,}" Mar 7 00:56:17.616813 containerd[2016]: 2026-03-07 00:56:17.219 [INFO][4712] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Mar 7 00:56:17.616813 containerd[2016]: 2026-03-07 00:56:17.219 [INFO][4712] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" iface="eth0" netns="/var/run/netns/cni-2e747bd5-2a16-11ea-94fa-8f5d9c03efcb" Mar 7 00:56:17.616813 containerd[2016]: 2026-03-07 00:56:17.219 [INFO][4712] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" iface="eth0" netns="/var/run/netns/cni-2e747bd5-2a16-11ea-94fa-8f5d9c03efcb" Mar 7 00:56:17.616813 containerd[2016]: 2026-03-07 00:56:17.230 [INFO][4712] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" iface="eth0" netns="/var/run/netns/cni-2e747bd5-2a16-11ea-94fa-8f5d9c03efcb" Mar 7 00:56:17.616813 containerd[2016]: 2026-03-07 00:56:17.230 [INFO][4712] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Mar 7 00:56:17.616813 containerd[2016]: 2026-03-07 00:56:17.230 [INFO][4712] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Mar 7 00:56:17.616813 containerd[2016]: 2026-03-07 00:56:17.503 [INFO][4818] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" HandleID="k8s-pod-network.7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:17.616813 containerd[2016]: 2026-03-07 00:56:17.503 [INFO][4818] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:17.616813 containerd[2016]: 2026-03-07 00:56:17.552 [INFO][4818] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:17.616813 containerd[2016]: 2026-03-07 00:56:17.591 [WARNING][4818] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" HandleID="k8s-pod-network.7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:17.616813 containerd[2016]: 2026-03-07 00:56:17.592 [INFO][4818] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" HandleID="k8s-pod-network.7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:17.616813 containerd[2016]: 2026-03-07 00:56:17.599 [INFO][4818] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:17.616813 containerd[2016]: 2026-03-07 00:56:17.606 [INFO][4712] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Mar 7 00:56:17.619732 containerd[2016]: time="2026-03-07T00:56:17.617632458Z" level=info msg="TearDown network for sandbox \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\" successfully" Mar 7 00:56:17.619732 containerd[2016]: time="2026-03-07T00:56:17.617717814Z" level=info msg="StopPodSandbox for \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\" returns successfully" Mar 7 00:56:17.625898 systemd[1]: run-netns-cni\x2d2e747bd5\x2d2a16\x2d11ea\x2d94fa\x2d8f5d9c03efcb.mount: Deactivated successfully. 
Mar 7 00:56:17.632732 containerd[2016]: time="2026-03-07T00:56:17.632676858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8c4bf8bd-drgff,Uid:c91a82c4-5f97-4c22-95be-166930ad0926,Namespace:calico-system,Attempt:1,}" Mar 7 00:56:17.663767 containerd[2016]: 2026-03-07 00:56:17.318 [INFO][4694] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Mar 7 00:56:17.663767 containerd[2016]: 2026-03-07 00:56:17.328 [INFO][4694] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" iface="eth0" netns="/var/run/netns/cni-cb107161-d617-0642-62fd-0a54f2e0e071" Mar 7 00:56:17.663767 containerd[2016]: 2026-03-07 00:56:17.334 [INFO][4694] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" iface="eth0" netns="/var/run/netns/cni-cb107161-d617-0642-62fd-0a54f2e0e071" Mar 7 00:56:17.663767 containerd[2016]: 2026-03-07 00:56:17.334 [INFO][4694] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" iface="eth0" netns="/var/run/netns/cni-cb107161-d617-0642-62fd-0a54f2e0e071" Mar 7 00:56:17.663767 containerd[2016]: 2026-03-07 00:56:17.334 [INFO][4694] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Mar 7 00:56:17.663767 containerd[2016]: 2026-03-07 00:56:17.334 [INFO][4694] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Mar 7 00:56:17.663767 containerd[2016]: 2026-03-07 00:56:17.573 [INFO][4839] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" HandleID="k8s-pod-network.b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Workload="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:17.663767 containerd[2016]: 2026-03-07 00:56:17.574 [INFO][4839] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:17.663767 containerd[2016]: 2026-03-07 00:56:17.601 [INFO][4839] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:17.663767 containerd[2016]: 2026-03-07 00:56:17.636 [WARNING][4839] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" HandleID="k8s-pod-network.b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Workload="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:17.663767 containerd[2016]: 2026-03-07 00:56:17.636 [INFO][4839] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" HandleID="k8s-pod-network.b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Workload="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:17.663767 containerd[2016]: 2026-03-07 00:56:17.643 [INFO][4839] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:17.663767 containerd[2016]: 2026-03-07 00:56:17.649 [INFO][4694] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Mar 7 00:56:17.666802 containerd[2016]: time="2026-03-07T00:56:17.666130854Z" level=info msg="TearDown network for sandbox \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\" successfully" Mar 7 00:56:17.666802 containerd[2016]: time="2026-03-07T00:56:17.666179226Z" level=info msg="StopPodSandbox for \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\" returns successfully" Mar 7 00:56:17.683905 containerd[2016]: time="2026-03-07T00:56:17.683836470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-qnkgm,Uid:07a42616-368f-4a16-92ee-c16f0eba21ab,Namespace:calico-system,Attempt:1,}" Mar 7 00:56:17.699496 systemd[1]: Removed slice kubepods-besteffort-pod15380836_54c5_44a9_8a93_212d68a67553.slice - libcontainer container kubepods-besteffort-pod15380836_54c5_44a9_8a93_212d68a67553.slice. 
Mar 7 00:56:17.740401 containerd[2016]: 2026-03-07 00:56:17.297 [INFO][4683] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Mar 7 00:56:17.740401 containerd[2016]: 2026-03-07 00:56:17.297 [INFO][4683] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" iface="eth0" netns="/var/run/netns/cni-149f432a-3e1d-e27a-5eee-3ced3c0e01eb" Mar 7 00:56:17.740401 containerd[2016]: 2026-03-07 00:56:17.302 [INFO][4683] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" iface="eth0" netns="/var/run/netns/cni-149f432a-3e1d-e27a-5eee-3ced3c0e01eb" Mar 7 00:56:17.740401 containerd[2016]: 2026-03-07 00:56:17.303 [INFO][4683] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" iface="eth0" netns="/var/run/netns/cni-149f432a-3e1d-e27a-5eee-3ced3c0e01eb" Mar 7 00:56:17.740401 containerd[2016]: 2026-03-07 00:56:17.303 [INFO][4683] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Mar 7 00:56:17.740401 containerd[2016]: 2026-03-07 00:56:17.304 [INFO][4683] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Mar 7 00:56:17.740401 containerd[2016]: 2026-03-07 00:56:17.616 [INFO][4830] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" HandleID="k8s-pod-network.9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:17.740401 containerd[2016]: 2026-03-07 00:56:17.619 [INFO][4830] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:17.740401 containerd[2016]: 2026-03-07 00:56:17.643 [INFO][4830] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:17.740401 containerd[2016]: 2026-03-07 00:56:17.675 [WARNING][4830] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" HandleID="k8s-pod-network.9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:17.740401 containerd[2016]: 2026-03-07 00:56:17.675 [INFO][4830] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" HandleID="k8s-pod-network.9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:17.740401 containerd[2016]: 2026-03-07 00:56:17.680 [INFO][4830] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:17.740401 containerd[2016]: 2026-03-07 00:56:17.712 [INFO][4683] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Mar 7 00:56:17.746417 containerd[2016]: time="2026-03-07T00:56:17.741128298Z" level=info msg="TearDown network for sandbox \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\" successfully" Mar 7 00:56:17.746417 containerd[2016]: time="2026-03-07T00:56:17.743089374Z" level=info msg="StopPodSandbox for \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\" returns successfully" Mar 7 00:56:17.754871 containerd[2016]: time="2026-03-07T00:56:17.754817899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c564cdd4-5c6xm,Uid:ee2a76da-5fba-4560-8907-48edf40e4afd,Namespace:calico-system,Attempt:1,}" Mar 7 00:56:17.963957 systemd[1]: Created slice kubepods-besteffort-poded51614f_5023_4b66_8c72_837be09873ac.slice - libcontainer container kubepods-besteffort-poded51614f_5023_4b66_8c72_837be09873ac.slice. Mar 7 00:56:18.045501 kubelet[3232]: I0307 00:56:18.045448 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed51614f-5023-4b66-8c72-837be09873ac-whisker-backend-key-pair\") pod \"whisker-b5bb86c7c-89nsw\" (UID: \"ed51614f-5023-4b66-8c72-837be09873ac\") " pod="calico-system/whisker-b5bb86c7c-89nsw" Mar 7 00:56:18.050580 kubelet[3232]: I0307 00:56:18.047486 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdp5r\" (UniqueName: \"kubernetes.io/projected/ed51614f-5023-4b66-8c72-837be09873ac-kube-api-access-fdp5r\") pod \"whisker-b5bb86c7c-89nsw\" (UID: \"ed51614f-5023-4b66-8c72-837be09873ac\") " pod="calico-system/whisker-b5bb86c7c-89nsw" Mar 7 00:56:18.050580 kubelet[3232]: I0307 00:56:18.047690 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/ed51614f-5023-4b66-8c72-837be09873ac-nginx-config\") pod \"whisker-b5bb86c7c-89nsw\" (UID: \"ed51614f-5023-4b66-8c72-837be09873ac\") " pod="calico-system/whisker-b5bb86c7c-89nsw" Mar 7 00:56:18.050580 kubelet[3232]: I0307 00:56:18.048131 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed51614f-5023-4b66-8c72-837be09873ac-whisker-ca-bundle\") pod \"whisker-b5bb86c7c-89nsw\" (UID: \"ed51614f-5023-4b66-8c72-837be09873ac\") " pod="calico-system/whisker-b5bb86c7c-89nsw" Mar 7 00:56:18.182415 kubelet[3232]: I0307 00:56:18.180581 3232 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15380836-54c5-44a9-8a93-212d68a67553" path="/var/lib/kubelet/pods/15380836-54c5-44a9-8a93-212d68a67553/volumes" Mar 7 00:56:18.288575 containerd[2016]: time="2026-03-07T00:56:18.288442781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b5bb86c7c-89nsw,Uid:ed51614f-5023-4b66-8c72-837be09873ac,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:18.455254 systemd-networkd[1932]: caliac8d5653ce6: Gained IPv6LL Mar 7 00:56:18.576206 (udev-worker)[4748]: Network interface NamePolicy= disabled on kernel command line. Mar 7 00:56:18.578664 systemd[1]: run-netns-cni\x2dcb107161\x2dd617\x2d0642\x2d62fd\x2d0a54f2e0e071.mount: Deactivated successfully. Mar 7 00:56:18.578848 systemd[1]: run-netns-cni\x2d149f432a\x2d3e1d\x2de27a\x2d5eee\x2d3ced3c0e01eb.mount: Deactivated successfully. 
Mar 7 00:56:18.615817 systemd-networkd[1932]: cali38a0c6cf24b: Link UP Mar 7 00:56:18.616419 systemd-networkd[1932]: cali38a0c6cf24b: Gained carrier Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:17.894 [ERROR][4870] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.028 [INFO][4870] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0 coredns-66bc5c9577- kube-system db5a2b01-b80e-4ffd-95f8-409702863707 935 0 2026-03-07 00:55:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-228 coredns-66bc5c9577-hlxj9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali38a0c6cf24b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" Namespace="kube-system" Pod="coredns-66bc5c9577-hlxj9" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-" Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.052 [INFO][4870] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" Namespace="kube-system" Pod="coredns-66bc5c9577-hlxj9" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.341 [INFO][4977] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" 
HandleID="k8s-pod-network.a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.373 [INFO][4977] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" HandleID="k8s-pod-network.a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b93a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-228", "pod":"coredns-66bc5c9577-hlxj9", "timestamp":"2026-03-07 00:56:18.341065325 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000ea840)} Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.373 [INFO][4977] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.375 [INFO][4977] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.375 [INFO][4977] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.382 [INFO][4977] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" host="ip-172-31-17-228" Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.400 [INFO][4977] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-228" Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.429 [INFO][4977] ipam/ipam.go 526: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.435 [INFO][4977] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.450 [INFO][4977] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.450 [INFO][4977] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" host="ip-172-31-17-228" Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.460 [INFO][4977] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650 Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.482 [INFO][4977] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" host="ip-172-31-17-228" Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.519 [INFO][4977] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.52.130/26] block=192.168.52.128/26 
handle="k8s-pod-network.a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" host="ip-172-31-17-228" Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.524 [INFO][4977] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.130/26] handle="k8s-pod-network.a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" host="ip-172-31-17-228" Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.525 [INFO][4977] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:18.735847 containerd[2016]: 2026-03-07 00:56:18.525 [INFO][4977] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.130/26] IPv6=[] ContainerID="a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" HandleID="k8s-pod-network.a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:18.738850 containerd[2016]: 2026-03-07 00:56:18.567 [INFO][4870] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" Namespace="kube-system" Pod="coredns-66bc5c9577-hlxj9" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db5a2b01-b80e-4ffd-95f8-409702863707", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"coredns-66bc5c9577-hlxj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38a0c6cf24b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:18.738850 containerd[2016]: 2026-03-07 00:56:18.567 [INFO][4870] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.130/32] ContainerID="a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" Namespace="kube-system" Pod="coredns-66bc5c9577-hlxj9" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:18.738850 containerd[2016]: 2026-03-07 00:56:18.567 [INFO][4870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38a0c6cf24b ContainerID="a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" Namespace="kube-system" Pod="coredns-66bc5c9577-hlxj9" 
WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:18.738850 containerd[2016]: 2026-03-07 00:56:18.644 [INFO][4870] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" Namespace="kube-system" Pod="coredns-66bc5c9577-hlxj9" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:18.738850 containerd[2016]: 2026-03-07 00:56:18.649 [INFO][4870] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" Namespace="kube-system" Pod="coredns-66bc5c9577-hlxj9" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db5a2b01-b80e-4ffd-95f8-409702863707", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650", Pod:"coredns-66bc5c9577-hlxj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38a0c6cf24b", MAC:"b2:30:97:d2:1b:57", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:18.738850 containerd[2016]: 2026-03-07 00:56:18.709 [INFO][4870] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650" Namespace="kube-system" Pod="coredns-66bc5c9577-hlxj9" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:18.888846 systemd-networkd[1932]: cali4c605b41138: Link UP Mar 7 00:56:18.891692 systemd-networkd[1932]: cali4c605b41138: Gained carrier Mar 7 00:56:18.942554 containerd[2016]: time="2026-03-07T00:56:18.937825376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:18.942554 containerd[2016]: time="2026-03-07T00:56:18.942170576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:18.942554 containerd[2016]: time="2026-03-07T00:56:18.942211784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:18.943402 containerd[2016]: time="2026-03-07T00:56:18.942930752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:17.931 [ERROR][4889] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.039 [INFO][4889] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0 calico-kube-controllers-5d8c4bf8bd- calico-system c91a82c4-5f97-4c22-95be-166930ad0926 937 0 2026-03-07 00:55:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d8c4bf8bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-228 calico-kube-controllers-5d8c4bf8bd-drgff eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4c605b41138 [] [] }} ContainerID="f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" Namespace="calico-system" Pod="calico-kube-controllers-5d8c4bf8bd-drgff" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-" Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.053 [INFO][4889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" Namespace="calico-system" Pod="calico-kube-controllers-5d8c4bf8bd-drgff" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.501 [INFO][4975] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" HandleID="k8s-pod-network.f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.618 [INFO][4975] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" HandleID="k8s-pod-network.f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035a6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-228", "pod":"calico-kube-controllers-5d8c4bf8bd-drgff", "timestamp":"2026-03-07 00:56:18.501613326 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002d2000)} Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.624 [INFO][4975] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.624 [INFO][4975] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.624 [INFO][4975] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.645 [INFO][4975] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" host="ip-172-31-17-228" Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.684 [INFO][4975] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-228" Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.728 [INFO][4975] ipam/ipam.go 526: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.737 [INFO][4975] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.753 [INFO][4975] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.753 [INFO][4975] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" host="ip-172-31-17-228" Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.765 [INFO][4975] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.788 [INFO][4975] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" host="ip-172-31-17-228" Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.845 [INFO][4975] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.52.131/26] block=192.168.52.128/26 
handle="k8s-pod-network.f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" host="ip-172-31-17-228" Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.845 [INFO][4975] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.131/26] handle="k8s-pod-network.f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" host="ip-172-31-17-228" Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.845 [INFO][4975] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:19.021309 containerd[2016]: 2026-03-07 00:56:18.845 [INFO][4975] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.131/26] IPv6=[] ContainerID="f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" HandleID="k8s-pod-network.f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:19.023354 containerd[2016]: 2026-03-07 00:56:18.863 [INFO][4889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" Namespace="calico-system" Pod="calico-kube-controllers-5d8c4bf8bd-drgff" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0", GenerateName:"calico-kube-controllers-5d8c4bf8bd-", Namespace:"calico-system", SelfLink:"", UID:"c91a82c4-5f97-4c22-95be-166930ad0926", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8c4bf8bd", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"calico-kube-controllers-5d8c4bf8bd-drgff", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c605b41138", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:19.023354 containerd[2016]: 2026-03-07 00:56:18.863 [INFO][4889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.131/32] ContainerID="f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" Namespace="calico-system" Pod="calico-kube-controllers-5d8c4bf8bd-drgff" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:19.023354 containerd[2016]: 2026-03-07 00:56:18.863 [INFO][4889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c605b41138 ContainerID="f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" Namespace="calico-system" Pod="calico-kube-controllers-5d8c4bf8bd-drgff" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:19.023354 containerd[2016]: 2026-03-07 00:56:18.891 [INFO][4889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" Namespace="calico-system" Pod="calico-kube-controllers-5d8c4bf8bd-drgff" 
WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:19.023354 containerd[2016]: 2026-03-07 00:56:18.899 [INFO][4889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" Namespace="calico-system" Pod="calico-kube-controllers-5d8c4bf8bd-drgff" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0", GenerateName:"calico-kube-controllers-5d8c4bf8bd-", Namespace:"calico-system", SelfLink:"", UID:"c91a82c4-5f97-4c22-95be-166930ad0926", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8c4bf8bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a", Pod:"calico-kube-controllers-5d8c4bf8bd-drgff", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c605b41138", MAC:"7a:60:e4:52:67:f5", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:19.023354 containerd[2016]: 2026-03-07 00:56:19.005 [INFO][4889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a" Namespace="calico-system" Pod="calico-kube-controllers-5d8c4bf8bd-drgff" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:19.078723 systemd[1]: Started cri-containerd-a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650.scope - libcontainer container a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650. Mar 7 00:56:19.219441 containerd[2016]: time="2026-03-07T00:56:19.218889246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:19.220947 containerd[2016]: time="2026-03-07T00:56:19.220553058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:19.220947 containerd[2016]: time="2026-03-07T00:56:19.220628706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:19.221704 containerd[2016]: time="2026-03-07T00:56:19.221508510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:19.290922 systemd-networkd[1932]: calif7301c3dfaf: Link UP Mar 7 00:56:19.298679 systemd-networkd[1932]: calif7301c3dfaf: Gained carrier Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:17.936 [ERROR][4878] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:18.048 [INFO][4878] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0 calico-apiserver-54c564cdd4- calico-system b162dfce-342e-4a97-8794-bfbba685c555 936 0 2026-03-07 00:55:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54c564cdd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-228 calico-apiserver-54c564cdd4-frsdt eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif7301c3dfaf [] [] }} ContainerID="5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-frsdt" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-" Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:18.061 [INFO][4878] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-frsdt" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:18.588 [INFO][4980] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" HandleID="k8s-pod-network.5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:18.673 [INFO][4980] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" HandleID="k8s-pod-network.5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003d5790), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-228", "pod":"calico-apiserver-54c564cdd4-frsdt", "timestamp":"2026-03-07 00:56:18.588590371 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40005deb00)} Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:18.673 [INFO][4980] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:18.845 [INFO][4980] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:18.847 [INFO][4980] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:18.887 [INFO][4980] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" host="ip-172-31-17-228" Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:18.995 [INFO][4980] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-228" Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:19.027 [INFO][4980] ipam/ipam.go 526: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:19.067 [INFO][4980] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:19.096 [INFO][4980] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:19.098 [INFO][4980] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" host="ip-172-31-17-228" Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:19.129 [INFO][4980] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:19.186 [INFO][4980] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" host="ip-172-31-17-228" Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:19.255 [INFO][4980] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.52.132/26] block=192.168.52.128/26 
handle="k8s-pod-network.5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" host="ip-172-31-17-228" Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:19.256 [INFO][4980] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.132/26] handle="k8s-pod-network.5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" host="ip-172-31-17-228" Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:19.257 [INFO][4980] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:19.395361 containerd[2016]: 2026-03-07 00:56:19.259 [INFO][4980] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.132/26] IPv6=[] ContainerID="5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" HandleID="k8s-pod-network.5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:19.396693 containerd[2016]: 2026-03-07 00:56:19.273 [INFO][4878] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-frsdt" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0", GenerateName:"calico-apiserver-54c564cdd4-", Namespace:"calico-system", SelfLink:"", UID:"b162dfce-342e-4a97-8794-bfbba685c555", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54c564cdd4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"calico-apiserver-54c564cdd4-frsdt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif7301c3dfaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:19.396693 containerd[2016]: 2026-03-07 00:56:19.274 [INFO][4878] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.132/32] ContainerID="5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-frsdt" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:19.396693 containerd[2016]: 2026-03-07 00:56:19.274 [INFO][4878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7301c3dfaf ContainerID="5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-frsdt" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:19.396693 containerd[2016]: 2026-03-07 00:56:19.305 [INFO][4878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-frsdt" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:19.396693 containerd[2016]: 2026-03-07 00:56:19.308 [INFO][4878] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-frsdt" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0", GenerateName:"calico-apiserver-54c564cdd4-", Namespace:"calico-system", SelfLink:"", UID:"b162dfce-342e-4a97-8794-bfbba685c555", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54c564cdd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c", Pod:"calico-apiserver-54c564cdd4-frsdt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif7301c3dfaf", MAC:"ce:05:f8:d3:bc:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:19.396693 containerd[2016]: 2026-03-07 00:56:19.390 [INFO][4878] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-frsdt" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:19.503722 containerd[2016]: time="2026-03-07T00:56:19.496190023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:19.503722 containerd[2016]: time="2026-03-07T00:56:19.497817511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:19.503722 containerd[2016]: time="2026-03-07T00:56:19.497941051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:19.503722 containerd[2016]: time="2026-03-07T00:56:19.498301063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:19.520558 systemd-networkd[1932]: cali2154040b6c7: Link UP Mar 7 00:56:19.522936 systemd-networkd[1932]: cali2154040b6c7: Gained carrier Mar 7 00:56:19.530275 systemd[1]: Started cri-containerd-f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a.scope - libcontainer container f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a. Mar 7 00:56:19.539736 systemd[1]: run-containerd-runc-k8s.io-f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a-runc.zouhjX.mount: Deactivated successfully. 
Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:18.376 [ERROR][4943] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:18.515 [INFO][4943] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0 goldmane-cccfbd5cf- calico-system 07a42616-368f-4a16-92ee-c16f0eba21ab 939 0 2026-03-07 00:55:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-17-228 goldmane-cccfbd5cf-qnkgm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2154040b6c7 [] [] }} ContainerID="ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" Namespace="calico-system" Pod="goldmane-cccfbd5cf-qnkgm" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-" Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:18.516 [INFO][4943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" Namespace="calico-system" Pod="goldmane-cccfbd5cf-qnkgm" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:18.901 [INFO][5057] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" HandleID="k8s-pod-network.ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" Workload="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.002 [INFO][5057] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" HandleID="k8s-pod-network.ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" Workload="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e2d10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-228", "pod":"goldmane-cccfbd5cf-qnkgm", "timestamp":"2026-03-07 00:56:18.90114614 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000ea580)} Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.002 [INFO][5057] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.257 [INFO][5057] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.257 [INFO][5057] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.295 [INFO][5057] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" host="ip-172-31-17-228" Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.383 [INFO][5057] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-228" Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.404 [INFO][5057] ipam/ipam.go 526: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.410 [INFO][5057] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.417 [INFO][5057] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.418 [INFO][5057] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" host="ip-172-31-17-228" Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.424 [INFO][5057] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8 Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.441 [INFO][5057] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" host="ip-172-31-17-228" Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.479 [INFO][5057] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.52.133/26] block=192.168.52.128/26 
handle="k8s-pod-network.ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" host="ip-172-31-17-228" Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.480 [INFO][5057] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.133/26] handle="k8s-pod-network.ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" host="ip-172-31-17-228" Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.480 [INFO][5057] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:19.619795 containerd[2016]: 2026-03-07 00:56:19.481 [INFO][5057] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.133/26] IPv6=[] ContainerID="ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" HandleID="k8s-pod-network.ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" Workload="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:19.621075 containerd[2016]: 2026-03-07 00:56:19.494 [INFO][4943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" Namespace="calico-system" Pod="goldmane-cccfbd5cf-qnkgm" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"07a42616-368f-4a16-92ee-c16f0eba21ab", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"goldmane-cccfbd5cf-qnkgm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2154040b6c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:19.621075 containerd[2016]: 2026-03-07 00:56:19.494 [INFO][4943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.133/32] ContainerID="ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" Namespace="calico-system" Pod="goldmane-cccfbd5cf-qnkgm" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:19.621075 containerd[2016]: 2026-03-07 00:56:19.494 [INFO][4943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2154040b6c7 ContainerID="ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" Namespace="calico-system" Pod="goldmane-cccfbd5cf-qnkgm" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:19.621075 containerd[2016]: 2026-03-07 00:56:19.562 [INFO][4943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" Namespace="calico-system" Pod="goldmane-cccfbd5cf-qnkgm" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:19.621075 containerd[2016]: 2026-03-07 00:56:19.574 [INFO][4943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" 
Namespace="calico-system" Pod="goldmane-cccfbd5cf-qnkgm" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"07a42616-368f-4a16-92ee-c16f0eba21ab", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8", Pod:"goldmane-cccfbd5cf-qnkgm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2154040b6c7", MAC:"3e:0d:9f:30:0b:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:19.621075 containerd[2016]: 2026-03-07 00:56:19.591 [INFO][4943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8" Namespace="calico-system" Pod="goldmane-cccfbd5cf-qnkgm" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:19.673025 
containerd[2016]: time="2026-03-07T00:56:19.672968636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hlxj9,Uid:db5a2b01-b80e-4ffd-95f8-409702863707,Namespace:kube-system,Attempt:1,} returns sandbox id \"a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650\"" Mar 7 00:56:19.705161 containerd[2016]: time="2026-03-07T00:56:19.704391872Z" level=info msg="CreateContainer within sandbox \"a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:56:19.730753 systemd[1]: Started cri-containerd-5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c.scope - libcontainer container 5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c. Mar 7 00:56:19.811233 systemd-networkd[1932]: calif9362c93187: Link UP Mar 7 00:56:19.814749 systemd-networkd[1932]: calif9362c93187: Gained carrier Mar 7 00:56:19.900681 containerd[2016]: time="2026-03-07T00:56:19.900489021Z" level=info msg="CreateContainer within sandbox \"a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d867ec11700e0b289a7b76d172eec3e3b8c29873d5490391ce257afd31620e3f\"" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:18.494 [ERROR][4959] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:18.648 [INFO][4959] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0 calico-apiserver-54c564cdd4- calico-system ee2a76da-5fba-4560-8907-48edf40e4afd 938 0 2026-03-07 00:55:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54c564cdd4 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-228 calico-apiserver-54c564cdd4-5c6xm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif9362c93187 [] [] }} ContainerID="4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-5c6xm" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:18.652 [INFO][4959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-5c6xm" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.083 [INFO][5068] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" HandleID="k8s-pod-network.4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.226 [INFO][5068] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" HandleID="k8s-pod-network.4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000381540), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-228", "pod":"calico-apiserver-54c564cdd4-5c6xm", "timestamp":"2026-03-07 00:56:19.083111597 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002d8000)} Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.226 [INFO][5068] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.481 [INFO][5068] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.481 [INFO][5068] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.493 [INFO][5068] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" host="ip-172-31-17-228" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.546 [INFO][5068] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-228" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.603 [INFO][5068] ipam/ipam.go 526: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.618 [INFO][5068] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.629 [INFO][5068] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.629 [INFO][5068] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" host="ip-172-31-17-228" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.641 [INFO][5068] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23 Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.676 [INFO][5068] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" host="ip-172-31-17-228" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.721 [INFO][5068] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.52.134/26] block=192.168.52.128/26 handle="k8s-pod-network.4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" host="ip-172-31-17-228" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.721 [INFO][5068] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.134/26] handle="k8s-pod-network.4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" host="ip-172-31-17-228" Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.721 [INFO][5068] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:56:19.904008 containerd[2016]: 2026-03-07 00:56:19.721 [INFO][5068] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.134/26] IPv6=[] ContainerID="4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" HandleID="k8s-pod-network.4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:19.905167 containerd[2016]: 2026-03-07 00:56:19.746 [INFO][4959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-5c6xm" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0", GenerateName:"calico-apiserver-54c564cdd4-", Namespace:"calico-system", SelfLink:"", UID:"ee2a76da-5fba-4560-8907-48edf40e4afd", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54c564cdd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"calico-apiserver-54c564cdd4-5c6xm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif9362c93187", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:19.905167 containerd[2016]: 2026-03-07 00:56:19.748 [INFO][4959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.134/32] ContainerID="4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-5c6xm" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:19.905167 containerd[2016]: 2026-03-07 00:56:19.750 [INFO][4959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9362c93187 ContainerID="4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-5c6xm" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:19.905167 containerd[2016]: 2026-03-07 00:56:19.831 [INFO][4959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-5c6xm" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:19.905167 containerd[2016]: 2026-03-07 00:56:19.834 [INFO][4959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-5c6xm" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0", GenerateName:"calico-apiserver-54c564cdd4-", Namespace:"calico-system", SelfLink:"", UID:"ee2a76da-5fba-4560-8907-48edf40e4afd", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54c564cdd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23", Pod:"calico-apiserver-54c564cdd4-5c6xm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif9362c93187", MAC:"36:c5:a7:0c:36:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:19.905167 containerd[2016]: 2026-03-07 00:56:19.872 [INFO][4959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23" Namespace="calico-system" Pod="calico-apiserver-54c564cdd4-5c6xm" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:19.909614 containerd[2016]: time="2026-03-07T00:56:19.908167977Z" level=info msg="StartContainer for 
\"d867ec11700e0b289a7b76d172eec3e3b8c29873d5490391ce257afd31620e3f\"" Mar 7 00:56:19.938515 containerd[2016]: time="2026-03-07T00:56:19.938199753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8c4bf8bd-drgff,Uid:c91a82c4-5f97-4c22-95be-166930ad0926,Namespace:calico-system,Attempt:1,} returns sandbox id \"f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a\"" Mar 7 00:56:19.942042 containerd[2016]: time="2026-03-07T00:56:19.941617569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:19.945962 containerd[2016]: time="2026-03-07T00:56:19.941718225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:19.951359 containerd[2016]: time="2026-03-07T00:56:19.944434845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:19.951359 containerd[2016]: time="2026-03-07T00:56:19.946549569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:20.040681 systemd-networkd[1932]: cali2965ec91035: Link UP Mar 7 00:56:20.050726 systemd-networkd[1932]: cali2965ec91035: Gained carrier Mar 7 00:56:20.091746 systemd[1]: Started cri-containerd-ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8.scope - libcontainer container ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8. Mar 7 00:56:20.131598 containerd[2016]: time="2026-03-07T00:56:20.125255418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:20.131598 containerd[2016]: time="2026-03-07T00:56:20.125360226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:20.135475 containerd[2016]: time="2026-03-07T00:56:20.134655882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:18.746 [ERROR][5026] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:18.810 [INFO][5026] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-eth0 whisker-b5bb86c7c- calico-system ed51614f-5023-4b66-8c72-837be09873ac 956 0 2026-03-07 00:56:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b5bb86c7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-17-228 whisker-b5bb86c7c-89nsw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2965ec91035 [] [] }} ContainerID="c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" Namespace="calico-system" Pod="whisker-b5bb86c7c-89nsw" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-" Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:18.811 [INFO][5026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" Namespace="calico-system" Pod="whisker-b5bb86c7c-89nsw" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-eth0" Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.171 [INFO][5106] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" 
HandleID="k8s-pod-network.c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" Workload="ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-eth0" Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.259 [INFO][5106] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" HandleID="k8s-pod-network.c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" Workload="ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c610), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-228", "pod":"whisker-b5bb86c7c-89nsw", "timestamp":"2026-03-07 00:56:19.171699402 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001b6c60)} Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.259 [INFO][5106] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.722 [INFO][5106] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.722 [INFO][5106] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.737 [INFO][5106] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" host="ip-172-31-17-228" Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.802 [INFO][5106] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-228" Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.862 [INFO][5106] ipam/ipam.go 526: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.880 [INFO][5106] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.902 [INFO][5106] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.904 [INFO][5106] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" host="ip-172-31-17-228" Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.911 [INFO][5106] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.952 [INFO][5106] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" host="ip-172-31-17-228" Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.986 [INFO][5106] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.52.135/26] block=192.168.52.128/26 
handle="k8s-pod-network.c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" host="ip-172-31-17-228" Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.989 [INFO][5106] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.135/26] handle="k8s-pod-network.c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" host="ip-172-31-17-228" Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.989 [INFO][5106] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:20.146053 containerd[2016]: 2026-03-07 00:56:19.989 [INFO][5106] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.135/26] IPv6=[] ContainerID="c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" HandleID="k8s-pod-network.c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" Workload="ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-eth0" Mar 7 00:56:20.147171 containerd[2016]: 2026-03-07 00:56:20.020 [INFO][5026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" Namespace="calico-system" Pod="whisker-b5bb86c7c-89nsw" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-eth0", GenerateName:"whisker-b5bb86c7c-", Namespace:"calico-system", SelfLink:"", UID:"ed51614f-5023-4b66-8c72-837be09873ac", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 56, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b5bb86c7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"whisker-b5bb86c7c-89nsw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2965ec91035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:20.147171 containerd[2016]: 2026-03-07 00:56:20.026 [INFO][5026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.135/32] ContainerID="c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" Namespace="calico-system" Pod="whisker-b5bb86c7c-89nsw" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-eth0" Mar 7 00:56:20.147171 containerd[2016]: 2026-03-07 00:56:20.026 [INFO][5026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2965ec91035 ContainerID="c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" Namespace="calico-system" Pod="whisker-b5bb86c7c-89nsw" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-eth0" Mar 7 00:56:20.147171 containerd[2016]: 2026-03-07 00:56:20.063 [INFO][5026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" Namespace="calico-system" Pod="whisker-b5bb86c7c-89nsw" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-eth0" Mar 7 00:56:20.147171 containerd[2016]: 2026-03-07 00:56:20.078 [INFO][5026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" Namespace="calico-system" 
Pod="whisker-b5bb86c7c-89nsw" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-eth0", GenerateName:"whisker-b5bb86c7c-", Namespace:"calico-system", SelfLink:"", UID:"ed51614f-5023-4b66-8c72-837be09873ac", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 56, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b5bb86c7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d", Pod:"whisker-b5bb86c7c-89nsw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2965ec91035", MAC:"f2:ee:08:e0:cc:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:20.147171 containerd[2016]: 2026-03-07 00:56:20.123 [INFO][5026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d" Namespace="calico-system" Pod="whisker-b5bb86c7c-89nsw" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--b5bb86c7c--89nsw-eth0" Mar 7 00:56:20.163914 containerd[2016]: 
time="2026-03-07T00:56:20.154879338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:20.230734 systemd[1]: Started cri-containerd-d867ec11700e0b289a7b76d172eec3e3b8c29873d5490391ce257afd31620e3f.scope - libcontainer container d867ec11700e0b289a7b76d172eec3e3b8c29873d5490391ce257afd31620e3f. Mar 7 00:56:20.310740 systemd[1]: Started cri-containerd-4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23.scope - libcontainer container 4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23. Mar 7 00:56:20.336967 containerd[2016]: time="2026-03-07T00:56:20.336913051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c564cdd4-frsdt,Uid:b162dfce-342e-4a97-8794-bfbba685c555,Namespace:calico-system,Attempt:1,} returns sandbox id \"5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c\"" Mar 7 00:56:20.397993 containerd[2016]: time="2026-03-07T00:56:20.395222576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:20.397993 containerd[2016]: time="2026-03-07T00:56:20.395331464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:20.401221 containerd[2016]: time="2026-03-07T00:56:20.401107304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:20.403571 containerd[2016]: time="2026-03-07T00:56:20.401934476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:20.461691 containerd[2016]: time="2026-03-07T00:56:20.461631260Z" level=info msg="StartContainer for \"d867ec11700e0b289a7b76d172eec3e3b8c29873d5490391ce257afd31620e3f\" returns successfully" Mar 7 00:56:20.533853 systemd[1]: Started cri-containerd-c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d.scope - libcontainer container c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d. Mar 7 00:56:20.630857 systemd-networkd[1932]: cali38a0c6cf24b: Gained IPv6LL Mar 7 00:56:20.694926 systemd-networkd[1932]: cali4c605b41138: Gained IPv6LL Mar 7 00:56:20.770141 containerd[2016]: time="2026-03-07T00:56:20.770067526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:20.776824 containerd[2016]: time="2026-03-07T00:56:20.776750926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 7 00:56:20.778771 containerd[2016]: time="2026-03-07T00:56:20.778698946Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:20.792444 containerd[2016]: time="2026-03-07T00:56:20.790716478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:20.797898 containerd[2016]: time="2026-03-07T00:56:20.797793622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 
3.348019649s" Mar 7 00:56:20.797898 containerd[2016]: time="2026-03-07T00:56:20.797891866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 7 00:56:20.802151 containerd[2016]: time="2026-03-07T00:56:20.802062862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 00:56:20.815336 containerd[2016]: time="2026-03-07T00:56:20.814816078Z" level=info msg="CreateContainer within sandbox \"d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 00:56:20.826414 systemd-networkd[1932]: calif9362c93187: Gained IPv6LL Mar 7 00:56:20.837337 kubelet[3232]: I0307 00:56:20.836415 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hlxj9" podStartSLOduration=45.836351578 podStartE2EDuration="45.836351578s" podCreationTimestamp="2026-03-07 00:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:56:20.775898494 +0000 UTC m=+50.920762346" watchObservedRunningTime="2026-03-07 00:56:20.836351578 +0000 UTC m=+50.981215430" Mar 7 00:56:20.883539 containerd[2016]: time="2026-03-07T00:56:20.867487066Z" level=info msg="CreateContainer within sandbox \"d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"430c69dc4549b4e3984b4dd6608dfcef1bcfe15be124fda81e7a0e6a095d9749\"" Mar 7 00:56:20.883539 containerd[2016]: time="2026-03-07T00:56:20.868980058Z" level=info msg="StartContainer for \"430c69dc4549b4e3984b4dd6608dfcef1bcfe15be124fda81e7a0e6a095d9749\"" Mar 7 00:56:21.060978 systemd[1]: Started cri-containerd-430c69dc4549b4e3984b4dd6608dfcef1bcfe15be124fda81e7a0e6a095d9749.scope - libcontainer container 
430c69dc4549b4e3984b4dd6608dfcef1bcfe15be124fda81e7a0e6a095d9749. Mar 7 00:56:21.078826 containerd[2016]: time="2026-03-07T00:56:21.078675847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-qnkgm,Uid:07a42616-368f-4a16-92ee-c16f0eba21ab,Namespace:calico-system,Attempt:1,} returns sandbox id \"ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8\"" Mar 7 00:56:21.087613 containerd[2016]: time="2026-03-07T00:56:21.087434875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c564cdd4-5c6xm,Uid:ee2a76da-5fba-4560-8907-48edf40e4afd,Namespace:calico-system,Attempt:1,} returns sandbox id \"4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23\"" Mar 7 00:56:21.202655 containerd[2016]: time="2026-03-07T00:56:21.202560764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b5bb86c7c-89nsw,Uid:ed51614f-5023-4b66-8c72-837be09873ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d\"" Mar 7 00:56:21.209191 systemd-networkd[1932]: calif7301c3dfaf: Gained IPv6LL Mar 7 00:56:21.271485 systemd-networkd[1932]: cali2154040b6c7: Gained IPv6LL Mar 7 00:56:21.317443 containerd[2016]: time="2026-03-07T00:56:21.315797108Z" level=info msg="StartContainer for \"430c69dc4549b4e3984b4dd6608dfcef1bcfe15be124fda81e7a0e6a095d9749\" returns successfully" Mar 7 00:56:21.654119 kernel: calico-node[4953]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 00:56:21.814708 systemd[1]: Started sshd@7-172.31.17.228:22-20.161.92.111:55224.service - OpenSSH per-connection server daemon (20.161.92.111:55224). 
Mar 7 00:56:22.039832 systemd-networkd[1932]: cali2965ec91035: Gained IPv6LL Mar 7 00:56:22.363193 sshd[5513]: Accepted publickey for core from 20.161.92.111 port 55224 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:22.366839 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:22.379389 systemd-logind[1991]: New session 8 of user core. Mar 7 00:56:22.388693 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 00:56:22.573411 systemd-networkd[1932]: vxlan.calico: Link UP Mar 7 00:56:22.573430 systemd-networkd[1932]: vxlan.calico: Gained carrier Mar 7 00:56:22.990708 sshd[5513]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:23.001530 systemd[1]: sshd@7-172.31.17.228:22-20.161.92.111:55224.service: Deactivated successfully. Mar 7 00:56:23.009995 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 00:56:23.017021 systemd-logind[1991]: Session 8 logged out. Waiting for processes to exit. Mar 7 00:56:23.020843 systemd-logind[1991]: Removed session 8. 
Mar 7 00:56:23.639973 systemd-networkd[1932]: vxlan.calico: Gained IPv6LL Mar 7 00:56:24.572921 containerd[2016]: time="2026-03-07T00:56:24.572858472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:24.575304 containerd[2016]: time="2026-03-07T00:56:24.575213496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 7 00:56:24.577210 containerd[2016]: time="2026-03-07T00:56:24.577161804Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:24.582632 containerd[2016]: time="2026-03-07T00:56:24.582530064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:24.585449 containerd[2016]: time="2026-03-07T00:56:24.585186588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.783049914s" Mar 7 00:56:24.585449 containerd[2016]: time="2026-03-07T00:56:24.585248340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 7 00:56:24.590175 containerd[2016]: time="2026-03-07T00:56:24.589046052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 00:56:24.632634 containerd[2016]: 
time="2026-03-07T00:56:24.632547697Z" level=info msg="CreateContainer within sandbox \"f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 00:56:24.664163 containerd[2016]: time="2026-03-07T00:56:24.663627025Z" level=info msg="CreateContainer within sandbox \"f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c300d4c8de10c5d67778edee795d9517700399ad8b7a0f5cd1f912fa57882fc4\"" Mar 7 00:56:24.666111 containerd[2016]: time="2026-03-07T00:56:24.666045925Z" level=info msg="StartContainer for \"c300d4c8de10c5d67778edee795d9517700399ad8b7a0f5cd1f912fa57882fc4\"" Mar 7 00:56:24.666852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3044844753.mount: Deactivated successfully. Mar 7 00:56:24.781724 systemd[1]: Started cri-containerd-c300d4c8de10c5d67778edee795d9517700399ad8b7a0f5cd1f912fa57882fc4.scope - libcontainer container c300d4c8de10c5d67778edee795d9517700399ad8b7a0f5cd1f912fa57882fc4. 
Mar 7 00:56:24.864096 containerd[2016]: time="2026-03-07T00:56:24.863754842Z" level=info msg="StartContainer for \"c300d4c8de10c5d67778edee795d9517700399ad8b7a0f5cd1f912fa57882fc4\" returns successfully" Mar 7 00:56:25.862494 kubelet[3232]: I0307 00:56:25.862312 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d8c4bf8bd-drgff" podStartSLOduration=25.226506332 podStartE2EDuration="29.862289883s" podCreationTimestamp="2026-03-07 00:55:56 +0000 UTC" firstStartedPulling="2026-03-07 00:56:19.951933897 +0000 UTC m=+50.096797737" lastFinishedPulling="2026-03-07 00:56:24.587717448 +0000 UTC m=+54.732581288" observedRunningTime="2026-03-07 00:56:25.862237863 +0000 UTC m=+56.007101703" watchObservedRunningTime="2026-03-07 00:56:25.862289883 +0000 UTC m=+56.007153807" Mar 7 00:56:26.392884 ntpd[1986]: Listen normally on 8 vxlan.calico 192.168.52.128:123 Mar 7 00:56:26.394678 ntpd[1986]: 7 Mar 00:56:26 ntpd[1986]: Listen normally on 8 vxlan.calico 192.168.52.128:123 Mar 7 00:56:26.394678 ntpd[1986]: 7 Mar 00:56:26 ntpd[1986]: Listen normally on 9 caliac8d5653ce6 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 7 00:56:26.394678 ntpd[1986]: 7 Mar 00:56:26 ntpd[1986]: Listen normally on 10 cali38a0c6cf24b [fe80::ecee:eeff:feee:eeee%5]:123 Mar 7 00:56:26.394678 ntpd[1986]: 7 Mar 00:56:26 ntpd[1986]: Listen normally on 11 cali4c605b41138 [fe80::ecee:eeff:feee:eeee%6]:123 Mar 7 00:56:26.394678 ntpd[1986]: 7 Mar 00:56:26 ntpd[1986]: Listen normally on 12 calif7301c3dfaf [fe80::ecee:eeff:feee:eeee%7]:123 Mar 7 00:56:26.394678 ntpd[1986]: 7 Mar 00:56:26 ntpd[1986]: Listen normally on 13 cali2154040b6c7 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 7 00:56:26.394678 ntpd[1986]: 7 Mar 00:56:26 ntpd[1986]: Listen normally on 14 calif9362c93187 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 7 00:56:26.394678 ntpd[1986]: 7 Mar 00:56:26 ntpd[1986]: Listen normally on 15 cali2965ec91035 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 7 00:56:26.394678 
ntpd[1986]: 7 Mar 00:56:26 ntpd[1986]: Listen normally on 16 vxlan.calico [fe80::64e8:95ff:feff:fb88%11]:123 Mar 7 00:56:26.393674 ntpd[1986]: Listen normally on 9 caliac8d5653ce6 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 7 00:56:26.393755 ntpd[1986]: Listen normally on 10 cali38a0c6cf24b [fe80::ecee:eeff:feee:eeee%5]:123 Mar 7 00:56:26.393843 ntpd[1986]: Listen normally on 11 cali4c605b41138 [fe80::ecee:eeff:feee:eeee%6]:123 Mar 7 00:56:26.393913 ntpd[1986]: Listen normally on 12 calif7301c3dfaf [fe80::ecee:eeff:feee:eeee%7]:123 Mar 7 00:56:26.393983 ntpd[1986]: Listen normally on 13 cali2154040b6c7 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 7 00:56:26.394049 ntpd[1986]: Listen normally on 14 calif9362c93187 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 7 00:56:26.394122 ntpd[1986]: Listen normally on 15 cali2965ec91035 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 7 00:56:26.394193 ntpd[1986]: Listen normally on 16 vxlan.calico [fe80::64e8:95ff:feff:fb88%11]:123 Mar 7 00:56:27.138452 containerd[2016]: time="2026-03-07T00:56:27.137878873Z" level=info msg="StopPodSandbox for \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\"" Mar 7 00:56:27.287285 containerd[2016]: time="2026-03-07T00:56:27.287214014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:27.290422 containerd[2016]: time="2026-03-07T00:56:27.290318018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 7 00:56:27.296553 containerd[2016]: time="2026-03-07T00:56:27.296452106Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:27.304257 containerd[2016]: time="2026-03-07T00:56:27.304183466Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:27.306654 containerd[2016]: time="2026-03-07T00:56:27.306584978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.717450174s" Mar 7 00:56:27.306838 containerd[2016]: time="2026-03-07T00:56:27.306659138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 00:56:27.312236 containerd[2016]: time="2026-03-07T00:56:27.312168026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 00:56:27.321112 containerd[2016]: time="2026-03-07T00:56:27.320919374Z" level=info msg="CreateContainer within sandbox \"5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 00:56:27.365524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873700771.mount: Deactivated successfully. 
Mar 7 00:56:27.374657 containerd[2016]: time="2026-03-07T00:56:27.372911342Z" level=info msg="CreateContainer within sandbox \"5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c04ed86e4b9ebe0908184227ffbf4435ebb4fc17475d82e1d1f95a728e4a86d0\"" Mar 7 00:56:27.376658 containerd[2016]: time="2026-03-07T00:56:27.375261878Z" level=info msg="StartContainer for \"c04ed86e4b9ebe0908184227ffbf4435ebb4fc17475d82e1d1f95a728e4a86d0\"" Mar 7 00:56:27.417722 containerd[2016]: 2026-03-07 00:56:27.274 [INFO][5705] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:27.417722 containerd[2016]: 2026-03-07 00:56:27.276 [INFO][5705] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" iface="eth0" netns="/var/run/netns/cni-5ad7ab88-5bd3-9851-9fec-e7f2f5b25f21" Mar 7 00:56:27.417722 containerd[2016]: 2026-03-07 00:56:27.277 [INFO][5705] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" iface="eth0" netns="/var/run/netns/cni-5ad7ab88-5bd3-9851-9fec-e7f2f5b25f21" Mar 7 00:56:27.417722 containerd[2016]: 2026-03-07 00:56:27.278 [INFO][5705] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" iface="eth0" netns="/var/run/netns/cni-5ad7ab88-5bd3-9851-9fec-e7f2f5b25f21" Mar 7 00:56:27.417722 containerd[2016]: 2026-03-07 00:56:27.278 [INFO][5705] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:27.417722 containerd[2016]: 2026-03-07 00:56:27.278 [INFO][5705] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:27.417722 containerd[2016]: 2026-03-07 00:56:27.373 [INFO][5717] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" HandleID="k8s-pod-network.bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:27.417722 containerd[2016]: 2026-03-07 00:56:27.373 [INFO][5717] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:27.417722 containerd[2016]: 2026-03-07 00:56:27.374 [INFO][5717] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:27.417722 containerd[2016]: 2026-03-07 00:56:27.397 [WARNING][5717] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" HandleID="k8s-pod-network.bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:27.417722 containerd[2016]: 2026-03-07 00:56:27.397 [INFO][5717] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" HandleID="k8s-pod-network.bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:27.417722 containerd[2016]: 2026-03-07 00:56:27.401 [INFO][5717] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:27.417722 containerd[2016]: 2026-03-07 00:56:27.405 [INFO][5705] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:27.421997 containerd[2016]: time="2026-03-07T00:56:27.421920135Z" level=info msg="TearDown network for sandbox \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\" successfully" Mar 7 00:56:27.422195 containerd[2016]: time="2026-03-07T00:56:27.422164491Z" level=info msg="StopPodSandbox for \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\" returns successfully" Mar 7 00:56:27.431527 systemd[1]: run-netns-cni\x2d5ad7ab88\x2d5bd3\x2d9851\x2d9fec\x2de7f2f5b25f21.mount: Deactivated successfully. Mar 7 00:56:27.434836 containerd[2016]: time="2026-03-07T00:56:27.434188035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b8g2z,Uid:ead8cb66-a254-46cd-b1ed-0e08793150d4,Namespace:kube-system,Attempt:1,}" Mar 7 00:56:27.469500 systemd[1]: Started cri-containerd-c04ed86e4b9ebe0908184227ffbf4435ebb4fc17475d82e1d1f95a728e4a86d0.scope - libcontainer container c04ed86e4b9ebe0908184227ffbf4435ebb4fc17475d82e1d1f95a728e4a86d0. 
Mar 7 00:56:27.621043 containerd[2016]: time="2026-03-07T00:56:27.620961328Z" level=info msg="StartContainer for \"c04ed86e4b9ebe0908184227ffbf4435ebb4fc17475d82e1d1f95a728e4a86d0\" returns successfully" Mar 7 00:56:27.779451 systemd-networkd[1932]: calia81eac5e368: Link UP Mar 7 00:56:27.782559 systemd-networkd[1932]: calia81eac5e368: Gained carrier Mar 7 00:56:27.789044 (udev-worker)[5777]: Network interface NamePolicy= disabled on kernel command line. Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.597 [INFO][5741] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0 coredns-66bc5c9577- kube-system ead8cb66-a254-46cd-b1ed-0e08793150d4 1071 0 2026-03-07 00:55:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-228 coredns-66bc5c9577-b8g2z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia81eac5e368 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" Namespace="kube-system" Pod="coredns-66bc5c9577-b8g2z" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-" Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.598 [INFO][5741] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" Namespace="kube-system" Pod="coredns-66bc5c9577-b8g2z" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.675 [INFO][5764] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" HandleID="k8s-pod-network.b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.696 [INFO][5764] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" HandleID="k8s-pod-network.b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f2170), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-228", "pod":"coredns-66bc5c9577-b8g2z", "timestamp":"2026-03-07 00:56:27.675466648 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400018f340)} Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.696 [INFO][5764] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.696 [INFO][5764] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.697 [INFO][5764] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.701 [INFO][5764] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" host="ip-172-31-17-228" Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.716 [INFO][5764] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-228" Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.726 [INFO][5764] ipam/ipam.go 526: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.731 [INFO][5764] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.738 [INFO][5764] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.738 [INFO][5764] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" host="ip-172-31-17-228" Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.742 [INFO][5764] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540 Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.749 [INFO][5764] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" host="ip-172-31-17-228" Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.765 [INFO][5764] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.52.136/26] block=192.168.52.128/26 
handle="k8s-pod-network.b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" host="ip-172-31-17-228" Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.765 [INFO][5764] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.136/26] handle="k8s-pod-network.b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" host="ip-172-31-17-228" Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.766 [INFO][5764] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:27.824915 containerd[2016]: 2026-03-07 00:56:27.766 [INFO][5764] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.136/26] IPv6=[] ContainerID="b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" HandleID="k8s-pod-network.b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:27.827702 containerd[2016]: 2026-03-07 00:56:27.772 [INFO][5741] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" Namespace="kube-system" Pod="coredns-66bc5c9577-b8g2z" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ead8cb66-a254-46cd-b1ed-0e08793150d4", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"coredns-66bc5c9577-b8g2z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia81eac5e368", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:27.827702 containerd[2016]: 2026-03-07 00:56:27.773 [INFO][5741] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.136/32] ContainerID="b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" Namespace="kube-system" Pod="coredns-66bc5c9577-b8g2z" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:27.827702 containerd[2016]: 2026-03-07 00:56:27.773 [INFO][5741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia81eac5e368 ContainerID="b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" Namespace="kube-system" Pod="coredns-66bc5c9577-b8g2z" 
WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:27.827702 containerd[2016]: 2026-03-07 00:56:27.784 [INFO][5741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" Namespace="kube-system" Pod="coredns-66bc5c9577-b8g2z" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:27.827702 containerd[2016]: 2026-03-07 00:56:27.786 [INFO][5741] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" Namespace="kube-system" Pod="coredns-66bc5c9577-b8g2z" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ead8cb66-a254-46cd-b1ed-0e08793150d4", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540", Pod:"coredns-66bc5c9577-b8g2z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia81eac5e368", MAC:"4a:3d:9e:24:3c:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:27.827702 containerd[2016]: 2026-03-07 00:56:27.809 [INFO][5741] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540" Namespace="kube-system" Pod="coredns-66bc5c9577-b8g2z" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:27.898749 containerd[2016]: time="2026-03-07T00:56:27.896149601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:27.898749 containerd[2016]: time="2026-03-07T00:56:27.896266745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:27.898749 containerd[2016]: time="2026-03-07T00:56:27.896313509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:27.903619 containerd[2016]: time="2026-03-07T00:56:27.896847365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:27.947886 systemd[1]: Started cri-containerd-b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540.scope - libcontainer container b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540. Mar 7 00:56:28.037902 containerd[2016]: time="2026-03-07T00:56:28.037277978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b8g2z,Uid:ead8cb66-a254-46cd-b1ed-0e08793150d4,Namespace:kube-system,Attempt:1,} returns sandbox id \"b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540\"" Mar 7 00:56:28.055764 containerd[2016]: time="2026-03-07T00:56:28.055574354Z" level=info msg="CreateContainer within sandbox \"b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:56:28.091110 systemd[1]: Started sshd@8-172.31.17.228:22-20.161.92.111:55226.service - OpenSSH per-connection server daemon (20.161.92.111:55226). Mar 7 00:56:28.108967 containerd[2016]: time="2026-03-07T00:56:28.108894878Z" level=info msg="CreateContainer within sandbox \"b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1aa276e444bad5032969d977350a26de97fcf2f642024c5e07c4ef9440145082\"" Mar 7 00:56:28.110966 containerd[2016]: time="2026-03-07T00:56:28.110465306Z" level=info msg="StartContainer for \"1aa276e444bad5032969d977350a26de97fcf2f642024c5e07c4ef9440145082\"" Mar 7 00:56:28.223274 systemd[1]: Started cri-containerd-1aa276e444bad5032969d977350a26de97fcf2f642024c5e07c4ef9440145082.scope - libcontainer container 1aa276e444bad5032969d977350a26de97fcf2f642024c5e07c4ef9440145082. 
Mar 7 00:56:28.299693 containerd[2016]: time="2026-03-07T00:56:28.298863351Z" level=info msg="StartContainer for \"1aa276e444bad5032969d977350a26de97fcf2f642024c5e07c4ef9440145082\" returns successfully" Mar 7 00:56:28.646947 sshd[5842]: Accepted publickey for core from 20.161.92.111 port 55226 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:28.650339 sshd[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:28.669404 systemd-logind[1991]: New session 9 of user core. Mar 7 00:56:28.681984 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 00:56:28.824164 systemd-networkd[1932]: calia81eac5e368: Gained IPv6LL Mar 7 00:56:28.860023 kubelet[3232]: I0307 00:56:28.859965 3232 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:56:28.931398 kubelet[3232]: I0307 00:56:28.930564 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-54c564cdd4-frsdt" podStartSLOduration=29.967664195 podStartE2EDuration="36.930537738s" podCreationTimestamp="2026-03-07 00:55:52 +0000 UTC" firstStartedPulling="2026-03-07 00:56:20.347744743 +0000 UTC m=+50.492608571" lastFinishedPulling="2026-03-07 00:56:27.310618286 +0000 UTC m=+57.455482114" observedRunningTime="2026-03-07 00:56:27.876021773 +0000 UTC m=+58.020885673" watchObservedRunningTime="2026-03-07 00:56:28.930537738 +0000 UTC m=+59.075401566" Mar 7 00:56:29.010300 kubelet[3232]: I0307 00:56:29.010199 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b8g2z" podStartSLOduration=54.010177418 podStartE2EDuration="54.010177418s" podCreationTimestamp="2026-03-07 00:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:56:28.92348217 +0000 UTC m=+59.068346022" watchObservedRunningTime="2026-03-07 00:56:29.010177418 
+0000 UTC m=+59.155041246" Mar 7 00:56:29.440705 sshd[5842]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:29.448276 systemd[1]: sshd@8-172.31.17.228:22-20.161.92.111:55226.service: Deactivated successfully. Mar 7 00:56:29.455948 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 00:56:29.459916 systemd-logind[1991]: Session 9 logged out. Waiting for processes to exit. Mar 7 00:56:29.466019 systemd-logind[1991]: Removed session 9. Mar 7 00:56:30.188108 containerd[2016]: time="2026-03-07T00:56:30.187518220Z" level=info msg="StopPodSandbox for \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\"" Mar 7 00:56:30.235918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546901841.mount: Deactivated successfully. Mar 7 00:56:30.492149 containerd[2016]: 2026-03-07 00:56:30.326 [WARNING][5930] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db5a2b01-b80e-4ffd-95f8-409702863707", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", 
ContainerID:"a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650", Pod:"coredns-66bc5c9577-hlxj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38a0c6cf24b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:30.492149 containerd[2016]: 2026-03-07 00:56:30.327 [INFO][5930] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Mar 7 00:56:30.492149 containerd[2016]: 2026-03-07 00:56:30.327 [INFO][5930] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" iface="eth0" netns="" Mar 7 00:56:30.492149 containerd[2016]: 2026-03-07 00:56:30.327 [INFO][5930] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Mar 7 00:56:30.492149 containerd[2016]: 2026-03-07 00:56:30.334 [INFO][5930] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Mar 7 00:56:30.492149 containerd[2016]: 2026-03-07 00:56:30.438 [INFO][5944] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" HandleID="k8s-pod-network.230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:30.492149 containerd[2016]: 2026-03-07 00:56:30.438 [INFO][5944] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:30.492149 containerd[2016]: 2026-03-07 00:56:30.438 [INFO][5944] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:30.492149 containerd[2016]: 2026-03-07 00:56:30.473 [WARNING][5944] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" HandleID="k8s-pod-network.230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:30.492149 containerd[2016]: 2026-03-07 00:56:30.474 [INFO][5944] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" HandleID="k8s-pod-network.230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:30.492149 containerd[2016]: 2026-03-07 00:56:30.477 [INFO][5944] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:30.492149 containerd[2016]: 2026-03-07 00:56:30.484 [INFO][5930] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Mar 7 00:56:30.494402 containerd[2016]: time="2026-03-07T00:56:30.494178822Z" level=info msg="TearDown network for sandbox \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\" successfully" Mar 7 00:56:30.494402 containerd[2016]: time="2026-03-07T00:56:30.494232366Z" level=info msg="StopPodSandbox for \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\" returns successfully" Mar 7 00:56:30.496352 containerd[2016]: time="2026-03-07T00:56:30.496107666Z" level=info msg="RemovePodSandbox for \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\"" Mar 7 00:56:30.496592 containerd[2016]: time="2026-03-07T00:56:30.496461642Z" level=info msg="Forcibly stopping sandbox \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\"" Mar 7 00:56:30.849160 containerd[2016]: 2026-03-07 00:56:30.674 [WARNING][5958] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db5a2b01-b80e-4ffd-95f8-409702863707", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"a18939563535046f6e50654d50e2efa3adfc7ad022e3d0b30183cd07da262650", Pod:"coredns-66bc5c9577-hlxj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38a0c6cf24b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:30.849160 containerd[2016]: 2026-03-07 00:56:30.675 [INFO][5958] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Mar 7 00:56:30.849160 containerd[2016]: 2026-03-07 00:56:30.675 [INFO][5958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" iface="eth0" netns="" Mar 7 00:56:30.849160 containerd[2016]: 2026-03-07 00:56:30.675 [INFO][5958] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Mar 7 00:56:30.849160 containerd[2016]: 2026-03-07 00:56:30.675 [INFO][5958] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Mar 7 00:56:30.849160 containerd[2016]: 2026-03-07 00:56:30.796 [INFO][5964] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" HandleID="k8s-pod-network.230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:30.849160 containerd[2016]: 2026-03-07 00:56:30.799 [INFO][5964] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:30.849160 containerd[2016]: 2026-03-07 00:56:30.799 [INFO][5964] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:30.849160 containerd[2016]: 2026-03-07 00:56:30.826 [WARNING][5964] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" HandleID="k8s-pod-network.230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:30.849160 containerd[2016]: 2026-03-07 00:56:30.826 [INFO][5964] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" HandleID="k8s-pod-network.230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--hlxj9-eth0" Mar 7 00:56:30.849160 containerd[2016]: 2026-03-07 00:56:30.830 [INFO][5964] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:30.849160 containerd[2016]: 2026-03-07 00:56:30.838 [INFO][5958] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94" Mar 7 00:56:30.851594 containerd[2016]: time="2026-03-07T00:56:30.850183916Z" level=info msg="TearDown network for sandbox \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\" successfully" Mar 7 00:56:30.863946 containerd[2016]: time="2026-03-07T00:56:30.863886548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:56:30.864660 containerd[2016]: time="2026-03-07T00:56:30.864620576Z" level=info msg="RemovePodSandbox \"230403696f88d89490f985b845ce48cc168b16e083f27d95a22494eba2419d94\" returns successfully" Mar 7 00:56:30.865710 containerd[2016]: time="2026-03-07T00:56:30.865664156Z" level=info msg="StopPodSandbox for \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\"" Mar 7 00:56:31.241913 containerd[2016]: 2026-03-07 00:56:31.061 [WARNING][5979] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0", GenerateName:"calico-apiserver-54c564cdd4-", Namespace:"calico-system", SelfLink:"", UID:"b162dfce-342e-4a97-8794-bfbba685c555", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54c564cdd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c", Pod:"calico-apiserver-54c564cdd4-frsdt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif7301c3dfaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:31.241913 containerd[2016]: 2026-03-07 00:56:31.063 [INFO][5979] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Mar 7 00:56:31.241913 containerd[2016]: 2026-03-07 00:56:31.063 [INFO][5979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" iface="eth0" netns="" Mar 7 00:56:31.241913 containerd[2016]: 2026-03-07 00:56:31.064 [INFO][5979] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Mar 7 00:56:31.241913 containerd[2016]: 2026-03-07 00:56:31.064 [INFO][5979] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Mar 7 00:56:31.241913 containerd[2016]: 2026-03-07 00:56:31.189 [INFO][5986] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" HandleID="k8s-pod-network.b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:31.241913 containerd[2016]: 2026-03-07 00:56:31.190 [INFO][5986] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:31.241913 containerd[2016]: 2026-03-07 00:56:31.190 [INFO][5986] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:31.241913 containerd[2016]: 2026-03-07 00:56:31.219 [WARNING][5986] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" HandleID="k8s-pod-network.b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:31.241913 containerd[2016]: 2026-03-07 00:56:31.220 [INFO][5986] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" HandleID="k8s-pod-network.b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:31.241913 containerd[2016]: 2026-03-07 00:56:31.225 [INFO][5986] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:31.241913 containerd[2016]: 2026-03-07 00:56:31.235 [INFO][5979] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Mar 7 00:56:31.245639 containerd[2016]: time="2026-03-07T00:56:31.242906586Z" level=info msg="TearDown network for sandbox \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\" successfully" Mar 7 00:56:31.245639 containerd[2016]: time="2026-03-07T00:56:31.242954682Z" level=info msg="StopPodSandbox for \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\" returns successfully" Mar 7 00:56:31.246997 containerd[2016]: time="2026-03-07T00:56:31.246752034Z" level=info msg="RemovePodSandbox for \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\"" Mar 7 00:56:31.248106 containerd[2016]: time="2026-03-07T00:56:31.247949538Z" level=info msg="Forcibly stopping sandbox \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\"" Mar 7 00:56:31.393049 ntpd[1986]: Listen normally on 17 calia81eac5e368 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 7 00:56:31.393687 ntpd[1986]: 7 Mar 00:56:31 ntpd[1986]: Listen normally on 17 calia81eac5e368 
[fe80::ecee:eeff:feee:eeee%14]:123 Mar 7 00:56:31.597920 containerd[2016]: 2026-03-07 00:56:31.411 [WARNING][6000] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0", GenerateName:"calico-apiserver-54c564cdd4-", Namespace:"calico-system", SelfLink:"", UID:"b162dfce-342e-4a97-8794-bfbba685c555", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54c564cdd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"5721c0ccc338f1996cc3c56da7ce3b777e1ff2ef927e881bbc82fa962b45bf9c", Pod:"calico-apiserver-54c564cdd4-frsdt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif7301c3dfaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:31.597920 containerd[2016]: 2026-03-07 00:56:31.411 [INFO][6000] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Mar 7 00:56:31.597920 containerd[2016]: 2026-03-07 00:56:31.411 [INFO][6000] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" iface="eth0" netns="" Mar 7 00:56:31.597920 containerd[2016]: 2026-03-07 00:56:31.411 [INFO][6000] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Mar 7 00:56:31.597920 containerd[2016]: 2026-03-07 00:56:31.411 [INFO][6000] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Mar 7 00:56:31.597920 containerd[2016]: 2026-03-07 00:56:31.533 [INFO][6007] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" HandleID="k8s-pod-network.b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:31.597920 containerd[2016]: 2026-03-07 00:56:31.535 [INFO][6007] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:31.597920 containerd[2016]: 2026-03-07 00:56:31.535 [INFO][6007] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:31.597920 containerd[2016]: 2026-03-07 00:56:31.567 [WARNING][6007] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" HandleID="k8s-pod-network.b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:31.597920 containerd[2016]: 2026-03-07 00:56:31.567 [INFO][6007] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" HandleID="k8s-pod-network.b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--frsdt-eth0" Mar 7 00:56:31.597920 containerd[2016]: 2026-03-07 00:56:31.574 [INFO][6007] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:31.597920 containerd[2016]: 2026-03-07 00:56:31.585 [INFO][6000] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60" Mar 7 00:56:31.600823 containerd[2016]: time="2026-03-07T00:56:31.599593399Z" level=info msg="TearDown network for sandbox \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\" successfully" Mar 7 00:56:31.626071 containerd[2016]: time="2026-03-07T00:56:31.624601003Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:56:31.626071 containerd[2016]: time="2026-03-07T00:56:31.625896043Z" level=info msg="RemovePodSandbox \"b90c08daef81eb91a9d5570c672e03180123f145f97e4beca9f45a438aed4b60\" returns successfully" Mar 7 00:56:31.628232 containerd[2016]: time="2026-03-07T00:56:31.627717331Z" level=info msg="StopPodSandbox for \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\"" Mar 7 00:56:31.702415 containerd[2016]: time="2026-03-07T00:56:31.701090120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:31.707407 containerd[2016]: time="2026-03-07T00:56:31.707325812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 7 00:56:31.709544 containerd[2016]: time="2026-03-07T00:56:31.709495088Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:31.718404 containerd[2016]: time="2026-03-07T00:56:31.717034700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:31.722037 containerd[2016]: time="2026-03-07T00:56:31.721954400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 4.409495242s" Mar 7 00:56:31.722037 containerd[2016]: time="2026-03-07T00:56:31.722035376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference 
\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 7 00:56:31.727479 containerd[2016]: time="2026-03-07T00:56:31.727429316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 00:56:31.735795 containerd[2016]: time="2026-03-07T00:56:31.735730088Z" level=info msg="CreateContainer within sandbox \"ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 00:56:31.772700 containerd[2016]: time="2026-03-07T00:56:31.772556540Z" level=info msg="CreateContainer within sandbox \"ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c3fbd2a28dd1ed1ee0f7b22ff347d49b8fb864a4101ed814a26dc88a06876cd6\"" Mar 7 00:56:31.775411 containerd[2016]: time="2026-03-07T00:56:31.774732176Z" level=info msg="StartContainer for \"c3fbd2a28dd1ed1ee0f7b22ff347d49b8fb864a4101ed814a26dc88a06876cd6\"" Mar 7 00:56:31.894699 systemd[1]: Started cri-containerd-c3fbd2a28dd1ed1ee0f7b22ff347d49b8fb864a4101ed814a26dc88a06876cd6.scope - libcontainer container c3fbd2a28dd1ed1ee0f7b22ff347d49b8fb864a4101ed814a26dc88a06876cd6. Mar 7 00:56:31.900104 containerd[2016]: 2026-03-07 00:56:31.733 [WARNING][6023] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0", GenerateName:"calico-apiserver-54c564cdd4-", Namespace:"calico-system", SelfLink:"", UID:"ee2a76da-5fba-4560-8907-48edf40e4afd", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54c564cdd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23", Pod:"calico-apiserver-54c564cdd4-5c6xm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif9362c93187", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:31.900104 containerd[2016]: 2026-03-07 00:56:31.734 [INFO][6023] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Mar 7 00:56:31.900104 containerd[2016]: 2026-03-07 00:56:31.734 [INFO][6023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" iface="eth0" netns="" Mar 7 00:56:31.900104 containerd[2016]: 2026-03-07 00:56:31.734 [INFO][6023] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Mar 7 00:56:31.900104 containerd[2016]: 2026-03-07 00:56:31.734 [INFO][6023] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Mar 7 00:56:31.900104 containerd[2016]: 2026-03-07 00:56:31.847 [INFO][6035] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" HandleID="k8s-pod-network.9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:31.900104 containerd[2016]: 2026-03-07 00:56:31.847 [INFO][6035] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:31.900104 containerd[2016]: 2026-03-07 00:56:31.847 [INFO][6035] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:31.900104 containerd[2016]: 2026-03-07 00:56:31.872 [WARNING][6035] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" HandleID="k8s-pod-network.9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:31.900104 containerd[2016]: 2026-03-07 00:56:31.872 [INFO][6035] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" HandleID="k8s-pod-network.9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:31.900104 containerd[2016]: 2026-03-07 00:56:31.878 [INFO][6035] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:31.900104 containerd[2016]: 2026-03-07 00:56:31.884 [INFO][6023] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Mar 7 00:56:31.900104 containerd[2016]: time="2026-03-07T00:56:31.899545077Z" level=info msg="TearDown network for sandbox \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\" successfully" Mar 7 00:56:31.900104 containerd[2016]: time="2026-03-07T00:56:31.899607093Z" level=info msg="StopPodSandbox for \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\" returns successfully" Mar 7 00:56:31.902426 containerd[2016]: time="2026-03-07T00:56:31.901724421Z" level=info msg="RemovePodSandbox for \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\"" Mar 7 00:56:31.902426 containerd[2016]: time="2026-03-07T00:56:31.901845753Z" level=info msg="Forcibly stopping sandbox \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\"" Mar 7 00:56:32.069748 containerd[2016]: time="2026-03-07T00:56:32.069674118Z" level=info msg="StartContainer for \"c3fbd2a28dd1ed1ee0f7b22ff347d49b8fb864a4101ed814a26dc88a06876cd6\" returns successfully" Mar 7 00:56:32.108952 
containerd[2016]: time="2026-03-07T00:56:32.108789630Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:32.111951 containerd[2016]: time="2026-03-07T00:56:32.111361122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 00:56:32.140934 containerd[2016]: time="2026-03-07T00:56:32.140847990Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 413.006726ms" Mar 7 00:56:32.141577 containerd[2016]: time="2026-03-07T00:56:32.141477198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 00:56:32.162854 containerd[2016]: time="2026-03-07T00:56:32.162464370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 00:56:32.162854 containerd[2016]: 2026-03-07 00:56:32.013 [WARNING][6070] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0", GenerateName:"calico-apiserver-54c564cdd4-", Namespace:"calico-system", SelfLink:"", UID:"ee2a76da-5fba-4560-8907-48edf40e4afd", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54c564cdd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23", Pod:"calico-apiserver-54c564cdd4-5c6xm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif9362c93187", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:32.162854 containerd[2016]: 2026-03-07 00:56:32.016 [INFO][6070] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Mar 7 00:56:32.162854 containerd[2016]: 2026-03-07 00:56:32.018 [INFO][6070] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" iface="eth0" netns="" Mar 7 00:56:32.162854 containerd[2016]: 2026-03-07 00:56:32.018 [INFO][6070] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Mar 7 00:56:32.162854 containerd[2016]: 2026-03-07 00:56:32.019 [INFO][6070] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Mar 7 00:56:32.162854 containerd[2016]: 2026-03-07 00:56:32.088 [INFO][6082] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" HandleID="k8s-pod-network.9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:32.162854 containerd[2016]: 2026-03-07 00:56:32.089 [INFO][6082] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:32.162854 containerd[2016]: 2026-03-07 00:56:32.089 [INFO][6082] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:32.162854 containerd[2016]: 2026-03-07 00:56:32.119 [WARNING][6082] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" HandleID="k8s-pod-network.9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:32.162854 containerd[2016]: 2026-03-07 00:56:32.123 [INFO][6082] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" HandleID="k8s-pod-network.9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Workload="ip--172--31--17--228-k8s-calico--apiserver--54c564cdd4--5c6xm-eth0" Mar 7 00:56:32.162854 containerd[2016]: 2026-03-07 00:56:32.135 [INFO][6082] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:32.162854 containerd[2016]: 2026-03-07 00:56:32.149 [INFO][6070] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76" Mar 7 00:56:32.162854 containerd[2016]: time="2026-03-07T00:56:32.162592434Z" level=info msg="TearDown network for sandbox \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\" successfully" Mar 7 00:56:32.180042 containerd[2016]: time="2026-03-07T00:56:32.179796918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:56:32.182189 containerd[2016]: time="2026-03-07T00:56:32.180409314Z" level=info msg="RemovePodSandbox \"9d2566c2cb9c133e7f33679cbb3a4d97597651957c1c3f12cffda7ad4c4e4a76\" returns successfully" Mar 7 00:56:32.189405 containerd[2016]: time="2026-03-07T00:56:32.188251962Z" level=info msg="StopPodSandbox for \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\"" Mar 7 00:56:32.198934 containerd[2016]: time="2026-03-07T00:56:32.198878826Z" level=info msg="CreateContainer within sandbox \"4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 00:56:32.278187 containerd[2016]: time="2026-03-07T00:56:32.277719247Z" level=info msg="CreateContainer within sandbox \"4c195f22f120a336d03dd193d9fefcc37f6ecfbb6f18ed620a4629bbc7568c23\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"970dd80e476303aeec484ce69e4be357656d90085f04acc3aa9c11dc4d04ec77\"" Mar 7 00:56:32.283466 containerd[2016]: time="2026-03-07T00:56:32.282653323Z" level=info msg="StartContainer for \"970dd80e476303aeec484ce69e4be357656d90085f04acc3aa9c11dc4d04ec77\"" Mar 7 00:56:32.415732 systemd[1]: Started cri-containerd-970dd80e476303aeec484ce69e4be357656d90085f04acc3aa9c11dc4d04ec77.scope - libcontainer container 970dd80e476303aeec484ce69e4be357656d90085f04acc3aa9c11dc4d04ec77. Mar 7 00:56:32.519595 containerd[2016]: 2026-03-07 00:56:32.393 [WARNING][6104] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ead8cb66-a254-46cd-b1ed-0e08793150d4", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540", Pod:"coredns-66bc5c9577-b8g2z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia81eac5e368", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:32.519595 containerd[2016]: 2026-03-07 00:56:32.397 [INFO][6104] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:32.519595 containerd[2016]: 2026-03-07 00:56:32.397 [INFO][6104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" iface="eth0" netns="" Mar 7 00:56:32.519595 containerd[2016]: 2026-03-07 00:56:32.397 [INFO][6104] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:32.519595 containerd[2016]: 2026-03-07 00:56:32.397 [INFO][6104] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:32.519595 containerd[2016]: 2026-03-07 00:56:32.489 [INFO][6133] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" HandleID="k8s-pod-network.bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:32.519595 containerd[2016]: 2026-03-07 00:56:32.491 [INFO][6133] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:32.519595 containerd[2016]: 2026-03-07 00:56:32.492 [INFO][6133] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:32.519595 containerd[2016]: 2026-03-07 00:56:32.508 [WARNING][6133] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" HandleID="k8s-pod-network.bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:32.519595 containerd[2016]: 2026-03-07 00:56:32.508 [INFO][6133] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" HandleID="k8s-pod-network.bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:32.519595 containerd[2016]: 2026-03-07 00:56:32.510 [INFO][6133] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:32.519595 containerd[2016]: 2026-03-07 00:56:32.514 [INFO][6104] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:32.520858 containerd[2016]: time="2026-03-07T00:56:32.519646484Z" level=info msg="TearDown network for sandbox \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\" successfully" Mar 7 00:56:32.520858 containerd[2016]: time="2026-03-07T00:56:32.519684680Z" level=info msg="StopPodSandbox for \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\" returns successfully" Mar 7 00:56:32.520985 containerd[2016]: time="2026-03-07T00:56:32.520855976Z" level=info msg="RemovePodSandbox for \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\"" Mar 7 00:56:32.520985 containerd[2016]: time="2026-03-07T00:56:32.520925876Z" level=info msg="Forcibly stopping sandbox \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\"" Mar 7 00:56:32.614110 containerd[2016]: time="2026-03-07T00:56:32.614036816Z" level=info msg="StartContainer for \"970dd80e476303aeec484ce69e4be357656d90085f04acc3aa9c11dc4d04ec77\" returns successfully" Mar 7 00:56:32.705734 containerd[2016]: 
2026-03-07 00:56:32.617 [WARNING][6156] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ead8cb66-a254-46cd-b1ed-0e08793150d4", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"b36db094c150fdadd2885a11cb12baec470d178a12916a9ddbedce0fe056a540", Pod:"coredns-66bc5c9577-b8g2z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia81eac5e368", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:32.705734 containerd[2016]: 2026-03-07 00:56:32.618 [INFO][6156] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:32.705734 containerd[2016]: 2026-03-07 00:56:32.618 [INFO][6156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" iface="eth0" netns="" Mar 7 00:56:32.705734 containerd[2016]: 2026-03-07 00:56:32.618 [INFO][6156] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:32.705734 containerd[2016]: 2026-03-07 00:56:32.618 [INFO][6156] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:32.705734 containerd[2016]: 2026-03-07 00:56:32.676 [INFO][6171] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" HandleID="k8s-pod-network.bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:32.705734 containerd[2016]: 2026-03-07 00:56:32.676 [INFO][6171] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:32.705734 containerd[2016]: 2026-03-07 00:56:32.676 [INFO][6171] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:32.705734 containerd[2016]: 2026-03-07 00:56:32.693 [WARNING][6171] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" HandleID="k8s-pod-network.bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:32.705734 containerd[2016]: 2026-03-07 00:56:32.693 [INFO][6171] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" HandleID="k8s-pod-network.bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Workload="ip--172--31--17--228-k8s-coredns--66bc5c9577--b8g2z-eth0" Mar 7 00:56:32.705734 containerd[2016]: 2026-03-07 00:56:32.698 [INFO][6171] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:32.705734 containerd[2016]: 2026-03-07 00:56:32.701 [INFO][6156] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e" Mar 7 00:56:32.705734 containerd[2016]: time="2026-03-07T00:56:32.704764161Z" level=info msg="TearDown network for sandbox \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\" successfully" Mar 7 00:56:32.716304 containerd[2016]: time="2026-03-07T00:56:32.716205117Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:56:32.716624 containerd[2016]: time="2026-03-07T00:56:32.716324541Z" level=info msg="RemovePodSandbox \"bfb6bf548e5bb7e0c1c48432fd547f354e84f921e38a19081acae5236ef4023e\" returns successfully" Mar 7 00:56:32.718432 containerd[2016]: time="2026-03-07T00:56:32.717966297Z" level=info msg="StopPodSandbox for \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\"" Mar 7 00:56:32.908640 containerd[2016]: 2026-03-07 00:56:32.810 [WARNING][6186] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0", GenerateName:"calico-kube-controllers-5d8c4bf8bd-", Namespace:"calico-system", SelfLink:"", UID:"c91a82c4-5f97-4c22-95be-166930ad0926", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8c4bf8bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a", Pod:"calico-kube-controllers-5d8c4bf8bd-drgff", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c605b41138", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:32.908640 containerd[2016]: 2026-03-07 00:56:32.811 [INFO][6186] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Mar 7 00:56:32.908640 containerd[2016]: 2026-03-07 00:56:32.811 [INFO][6186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" iface="eth0" netns="" Mar 7 00:56:32.908640 containerd[2016]: 2026-03-07 00:56:32.811 [INFO][6186] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Mar 7 00:56:32.908640 containerd[2016]: 2026-03-07 00:56:32.812 [INFO][6186] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Mar 7 00:56:32.908640 containerd[2016]: 2026-03-07 00:56:32.880 [INFO][6198] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" HandleID="k8s-pod-network.7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:32.908640 containerd[2016]: 2026-03-07 00:56:32.880 [INFO][6198] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:32.908640 containerd[2016]: 2026-03-07 00:56:32.880 [INFO][6198] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:32.908640 containerd[2016]: 2026-03-07 00:56:32.899 [WARNING][6198] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" HandleID="k8s-pod-network.7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:32.908640 containerd[2016]: 2026-03-07 00:56:32.899 [INFO][6198] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" HandleID="k8s-pod-network.7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:32.908640 containerd[2016]: 2026-03-07 00:56:32.901 [INFO][6198] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:32.908640 containerd[2016]: 2026-03-07 00:56:32.904 [INFO][6186] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Mar 7 00:56:32.910358 containerd[2016]: time="2026-03-07T00:56:32.909979918Z" level=info msg="TearDown network for sandbox \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\" successfully" Mar 7 00:56:32.910358 containerd[2016]: time="2026-03-07T00:56:32.910034134Z" level=info msg="StopPodSandbox for \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\" returns successfully" Mar 7 00:56:32.911801 containerd[2016]: time="2026-03-07T00:56:32.910851478Z" level=info msg="RemovePodSandbox for \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\"" Mar 7 00:56:32.911801 containerd[2016]: time="2026-03-07T00:56:32.910901818Z" level=info msg="Forcibly stopping sandbox \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\"" Mar 7 00:56:32.987006 kubelet[3232]: I0307 00:56:32.986242 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-qnkgm" podStartSLOduration=30.347102445 
podStartE2EDuration="40.986217778s" podCreationTimestamp="2026-03-07 00:55:52 +0000 UTC" firstStartedPulling="2026-03-07 00:56:21.085160023 +0000 UTC m=+51.230023851" lastFinishedPulling="2026-03-07 00:56:31.724275272 +0000 UTC m=+61.869139184" observedRunningTime="2026-03-07 00:56:32.980063566 +0000 UTC m=+63.124927418" watchObservedRunningTime="2026-03-07 00:56:32.986217778 +0000 UTC m=+63.131081606" Mar 7 00:56:33.026079 systemd[1]: run-containerd-runc-k8s.io-c3fbd2a28dd1ed1ee0f7b22ff347d49b8fb864a4101ed814a26dc88a06876cd6-runc.WvdDDx.mount: Deactivated successfully. Mar 7 00:56:33.198180 containerd[2016]: 2026-03-07 00:56:33.095 [WARNING][6212] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0", GenerateName:"calico-kube-controllers-5d8c4bf8bd-", Namespace:"calico-system", SelfLink:"", UID:"c91a82c4-5f97-4c22-95be-166930ad0926", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8c4bf8bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"f87bc9f8845ffb5ef5837285f60693ca4393d414c580e34d95dd5e24fe1b009a", 
Pod:"calico-kube-controllers-5d8c4bf8bd-drgff", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c605b41138", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:33.198180 containerd[2016]: 2026-03-07 00:56:33.096 [INFO][6212] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Mar 7 00:56:33.198180 containerd[2016]: 2026-03-07 00:56:33.096 [INFO][6212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" iface="eth0" netns="" Mar 7 00:56:33.198180 containerd[2016]: 2026-03-07 00:56:33.096 [INFO][6212] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Mar 7 00:56:33.198180 containerd[2016]: 2026-03-07 00:56:33.096 [INFO][6212] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Mar 7 00:56:33.198180 containerd[2016]: 2026-03-07 00:56:33.165 [INFO][6239] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" HandleID="k8s-pod-network.7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:33.198180 containerd[2016]: 2026-03-07 00:56:33.165 [INFO][6239] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:33.198180 containerd[2016]: 2026-03-07 00:56:33.166 [INFO][6239] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:33.198180 containerd[2016]: 2026-03-07 00:56:33.183 [WARNING][6239] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" HandleID="k8s-pod-network.7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:33.198180 containerd[2016]: 2026-03-07 00:56:33.183 [INFO][6239] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" HandleID="k8s-pod-network.7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--5d8c4bf8bd--drgff-eth0" Mar 7 00:56:33.198180 containerd[2016]: 2026-03-07 00:56:33.186 [INFO][6239] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:33.198180 containerd[2016]: 2026-03-07 00:56:33.190 [INFO][6212] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48" Mar 7 00:56:33.199036 containerd[2016]: time="2026-03-07T00:56:33.198227815Z" level=info msg="TearDown network for sandbox \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\" successfully" Mar 7 00:56:33.221033 containerd[2016]: time="2026-03-07T00:56:33.220959847Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:56:33.221180 containerd[2016]: time="2026-03-07T00:56:33.221115295Z" level=info msg="RemovePodSandbox \"7a05d08349170b084bca656f8530bde5a08df2db7dc2a472b0c306936ed14c48\" returns successfully" Mar 7 00:56:33.224513 containerd[2016]: time="2026-03-07T00:56:33.222040171Z" level=info msg="StopPodSandbox for \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\"" Mar 7 00:56:33.435748 containerd[2016]: 2026-03-07 00:56:33.320 [WARNING][6257] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"07a42616-368f-4a16-92ee-c16f0eba21ab", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8", Pod:"goldmane-cccfbd5cf-qnkgm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali2154040b6c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:33.435748 containerd[2016]: 2026-03-07 00:56:33.322 [INFO][6257] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Mar 7 00:56:33.435748 containerd[2016]: 2026-03-07 00:56:33.322 [INFO][6257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" iface="eth0" netns="" Mar 7 00:56:33.435748 containerd[2016]: 2026-03-07 00:56:33.322 [INFO][6257] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Mar 7 00:56:33.435748 containerd[2016]: 2026-03-07 00:56:33.322 [INFO][6257] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Mar 7 00:56:33.435748 containerd[2016]: 2026-03-07 00:56:33.390 [INFO][6265] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" HandleID="k8s-pod-network.b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Workload="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:33.435748 containerd[2016]: 2026-03-07 00:56:33.393 [INFO][6265] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:33.435748 containerd[2016]: 2026-03-07 00:56:33.393 [INFO][6265] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:33.435748 containerd[2016]: 2026-03-07 00:56:33.421 [WARNING][6265] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" HandleID="k8s-pod-network.b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Workload="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:33.435748 containerd[2016]: 2026-03-07 00:56:33.421 [INFO][6265] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" HandleID="k8s-pod-network.b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Workload="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:33.435748 containerd[2016]: 2026-03-07 00:56:33.425 [INFO][6265] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:33.435748 containerd[2016]: 2026-03-07 00:56:33.429 [INFO][6257] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Mar 7 00:56:33.437128 containerd[2016]: time="2026-03-07T00:56:33.435788864Z" level=info msg="TearDown network for sandbox \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\" successfully" Mar 7 00:56:33.437128 containerd[2016]: time="2026-03-07T00:56:33.435847184Z" level=info msg="StopPodSandbox for \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\" returns successfully" Mar 7 00:56:33.438138 containerd[2016]: time="2026-03-07T00:56:33.437544500Z" level=info msg="RemovePodSandbox for \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\"" Mar 7 00:56:33.438138 containerd[2016]: time="2026-03-07T00:56:33.437620064Z" level=info msg="Forcibly stopping sandbox \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\"" Mar 7 00:56:33.743761 containerd[2016]: 2026-03-07 00:56:33.575 [WARNING][6285] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"07a42616-368f-4a16-92ee-c16f0eba21ab", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"ad006d6a17de04a0b8fec4675b04c0d883e24eab8a3c0f5ca5a8f3fccc02d2a8", Pod:"goldmane-cccfbd5cf-qnkgm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2154040b6c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:33.743761 containerd[2016]: 2026-03-07 00:56:33.576 [INFO][6285] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Mar 7 00:56:33.743761 containerd[2016]: 2026-03-07 00:56:33.577 [INFO][6285] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" iface="eth0" netns="" Mar 7 00:56:33.743761 containerd[2016]: 2026-03-07 00:56:33.577 [INFO][6285] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Mar 7 00:56:33.743761 containerd[2016]: 2026-03-07 00:56:33.577 [INFO][6285] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Mar 7 00:56:33.743761 containerd[2016]: 2026-03-07 00:56:33.677 [INFO][6297] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" HandleID="k8s-pod-network.b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Workload="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:33.743761 containerd[2016]: 2026-03-07 00:56:33.678 [INFO][6297] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:33.743761 containerd[2016]: 2026-03-07 00:56:33.678 [INFO][6297] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:33.743761 containerd[2016]: 2026-03-07 00:56:33.703 [WARNING][6297] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" HandleID="k8s-pod-network.b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Workload="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:33.743761 containerd[2016]: 2026-03-07 00:56:33.703 [INFO][6297] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" HandleID="k8s-pod-network.b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Workload="ip--172--31--17--228-k8s-goldmane--cccfbd5cf--qnkgm-eth0" Mar 7 00:56:33.743761 containerd[2016]: 2026-03-07 00:56:33.708 [INFO][6297] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:33.743761 containerd[2016]: 2026-03-07 00:56:33.726 [INFO][6285] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499" Mar 7 00:56:33.747048 containerd[2016]: time="2026-03-07T00:56:33.745538206Z" level=info msg="TearDown network for sandbox \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\" successfully" Mar 7 00:56:33.770144 containerd[2016]: time="2026-03-07T00:56:33.769777342Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:56:33.770144 containerd[2016]: time="2026-03-07T00:56:33.769920622Z" level=info msg="RemovePodSandbox \"b825a81e6cad170208e520d2c191d2921fda9e271065a7d6dffe682096ae0499\" returns successfully" Mar 7 00:56:33.771407 containerd[2016]: time="2026-03-07T00:56:33.771336070Z" level=info msg="StopPodSandbox for \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\"" Mar 7 00:56:33.984112 kubelet[3232]: I0307 00:56:33.984043 3232 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:56:34.023361 containerd[2016]: time="2026-03-07T00:56:34.022910299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:34.026829 containerd[2016]: time="2026-03-07T00:56:34.026669875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 7 00:56:34.034086 containerd[2016]: time="2026-03-07T00:56:34.033747943Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:34.065405 containerd[2016]: time="2026-03-07T00:56:34.065279072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:34.076154 containerd[2016]: time="2026-03-07T00:56:34.075920204Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.913391646s" Mar 7 00:56:34.076154 containerd[2016]: time="2026-03-07T00:56:34.075983720Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 7 00:56:34.096078 containerd[2016]: time="2026-03-07T00:56:34.094840664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 00:56:34.107234 containerd[2016]: time="2026-03-07T00:56:34.107128088Z" level=info msg="CreateContainer within sandbox \"c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 00:56:34.113506 systemd[1]: run-containerd-runc-k8s.io-c3fbd2a28dd1ed1ee0f7b22ff347d49b8fb864a4101ed814a26dc88a06876cd6-runc.Vb4Bkh.mount: Deactivated successfully. Mar 7 00:56:34.218911 containerd[2016]: 2026-03-07 00:56:33.989 [WARNING][6312] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--555b5549d8--sl8wr-eth0" Mar 7 00:56:34.218911 containerd[2016]: 2026-03-07 00:56:33.991 [INFO][6312] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Mar 7 00:56:34.218911 containerd[2016]: 2026-03-07 00:56:33.991 [INFO][6312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" iface="eth0" netns="" Mar 7 00:56:34.218911 containerd[2016]: 2026-03-07 00:56:33.991 [INFO][6312] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Mar 7 00:56:34.218911 containerd[2016]: 2026-03-07 00:56:33.991 [INFO][6312] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Mar 7 00:56:34.218911 containerd[2016]: 2026-03-07 00:56:34.157 [INFO][6321] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" HandleID="k8s-pod-network.65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Workload="ip--172--31--17--228-k8s-whisker--555b5549d8--sl8wr-eth0" Mar 7 00:56:34.218911 containerd[2016]: 2026-03-07 00:56:34.158 [INFO][6321] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:34.218911 containerd[2016]: 2026-03-07 00:56:34.159 [INFO][6321] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:34.218911 containerd[2016]: 2026-03-07 00:56:34.196 [WARNING][6321] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" HandleID="k8s-pod-network.65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Workload="ip--172--31--17--228-k8s-whisker--555b5549d8--sl8wr-eth0" Mar 7 00:56:34.218911 containerd[2016]: 2026-03-07 00:56:34.196 [INFO][6321] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" HandleID="k8s-pod-network.65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Workload="ip--172--31--17--228-k8s-whisker--555b5549d8--sl8wr-eth0" Mar 7 00:56:34.218911 containerd[2016]: 2026-03-07 00:56:34.203 [INFO][6321] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:34.218911 containerd[2016]: 2026-03-07 00:56:34.212 [INFO][6312] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Mar 7 00:56:34.221670 containerd[2016]: time="2026-03-07T00:56:34.218942348Z" level=info msg="TearDown network for sandbox \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\" successfully" Mar 7 00:56:34.221670 containerd[2016]: time="2026-03-07T00:56:34.218979968Z" level=info msg="StopPodSandbox for \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\" returns successfully" Mar 7 00:56:34.221670 containerd[2016]: time="2026-03-07T00:56:34.220044092Z" level=info msg="RemovePodSandbox for \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\"" Mar 7 00:56:34.221670 containerd[2016]: time="2026-03-07T00:56:34.220096184Z" level=info msg="Forcibly stopping sandbox \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\"" Mar 7 00:56:34.236197 containerd[2016]: time="2026-03-07T00:56:34.236125340Z" level=info msg="CreateContainer within sandbox \"c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns 
container id \"d41cfa3164962ee088c605bdd9f1729d457bae1d14bdda3c3397174f998c3968\"" Mar 7 00:56:34.238785 containerd[2016]: time="2026-03-07T00:56:34.238715624Z" level=info msg="StartContainer for \"d41cfa3164962ee088c605bdd9f1729d457bae1d14bdda3c3397174f998c3968\"" Mar 7 00:56:34.380941 systemd[1]: Started cri-containerd-d41cfa3164962ee088c605bdd9f1729d457bae1d14bdda3c3397174f998c3968.scope - libcontainer container d41cfa3164962ee088c605bdd9f1729d457bae1d14bdda3c3397174f998c3968. Mar 7 00:56:34.546126 systemd[1]: Started sshd@9-172.31.17.228:22-20.161.92.111:35470.service - OpenSSH per-connection server daemon (20.161.92.111:35470). Mar 7 00:56:34.630913 containerd[2016]: 2026-03-07 00:56:34.439 [WARNING][6358] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--555b5549d8--sl8wr-eth0" Mar 7 00:56:34.630913 containerd[2016]: 2026-03-07 00:56:34.440 [INFO][6358] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Mar 7 00:56:34.630913 containerd[2016]: 2026-03-07 00:56:34.441 [INFO][6358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" iface="eth0" netns="" Mar 7 00:56:34.630913 containerd[2016]: 2026-03-07 00:56:34.441 [INFO][6358] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Mar 7 00:56:34.630913 containerd[2016]: 2026-03-07 00:56:34.441 [INFO][6358] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Mar 7 00:56:34.630913 containerd[2016]: 2026-03-07 00:56:34.572 [INFO][6394] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" HandleID="k8s-pod-network.65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Workload="ip--172--31--17--228-k8s-whisker--555b5549d8--sl8wr-eth0" Mar 7 00:56:34.630913 containerd[2016]: 2026-03-07 00:56:34.579 [INFO][6394] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:34.630913 containerd[2016]: 2026-03-07 00:56:34.580 [INFO][6394] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:34.630913 containerd[2016]: 2026-03-07 00:56:34.607 [WARNING][6394] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" HandleID="k8s-pod-network.65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Workload="ip--172--31--17--228-k8s-whisker--555b5549d8--sl8wr-eth0" Mar 7 00:56:34.630913 containerd[2016]: 2026-03-07 00:56:34.607 [INFO][6394] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" HandleID="k8s-pod-network.65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Workload="ip--172--31--17--228-k8s-whisker--555b5549d8--sl8wr-eth0" Mar 7 00:56:34.630913 containerd[2016]: 2026-03-07 00:56:34.614 [INFO][6394] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:34.630913 containerd[2016]: 2026-03-07 00:56:34.624 [INFO][6358] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458" Mar 7 00:56:34.632726 containerd[2016]: time="2026-03-07T00:56:34.631169542Z" level=info msg="TearDown network for sandbox \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\" successfully" Mar 7 00:56:34.642796 containerd[2016]: time="2026-03-07T00:56:34.642690754Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:56:34.643152 containerd[2016]: time="2026-03-07T00:56:34.642818470Z" level=info msg="RemovePodSandbox \"65deeec9e0c048ef6a248f916305701cfd5a9c7f85c103118488d80d4ca7c458\" returns successfully" Mar 7 00:56:34.725646 containerd[2016]: time="2026-03-07T00:56:34.725553323Z" level=info msg="StartContainer for \"d41cfa3164962ee088c605bdd9f1729d457bae1d14bdda3c3397174f998c3968\" returns successfully" Mar 7 00:56:35.108360 sshd[6400]: Accepted publickey for core from 20.161.92.111 port 35470 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:35.114094 sshd[6400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:35.128032 systemd-logind[1991]: New session 10 of user core. Mar 7 00:56:35.137185 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 00:56:35.757526 sshd[6400]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:35.769139 systemd[1]: sshd@9-172.31.17.228:22-20.161.92.111:35470.service: Deactivated successfully. Mar 7 00:56:35.775739 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 00:56:35.782458 systemd-logind[1991]: Session 10 logged out. Waiting for processes to exit. Mar 7 00:56:35.786773 systemd-logind[1991]: Removed session 10. 
Mar 7 00:56:36.158336 containerd[2016]: time="2026-03-07T00:56:36.158255638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:36.161878 containerd[2016]: time="2026-03-07T00:56:36.161804446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 7 00:56:36.164737 containerd[2016]: time="2026-03-07T00:56:36.164666470Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:36.172990 containerd[2016]: time="2026-03-07T00:56:36.172918354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:36.175801 containerd[2016]: time="2026-03-07T00:56:36.175583074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.072959978s" Mar 7 00:56:36.175801 containerd[2016]: time="2026-03-07T00:56:36.175647466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 7 00:56:36.178292 containerd[2016]: time="2026-03-07T00:56:36.178080094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 00:56:36.187743 kubelet[3232]: I0307 00:56:36.186807 3232 prober_manager.go:312] "Failed to 
trigger a manual run" probe="Readiness" Mar 7 00:56:36.189344 containerd[2016]: time="2026-03-07T00:56:36.187004770Z" level=info msg="CreateContainer within sandbox \"d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 00:56:36.235466 containerd[2016]: time="2026-03-07T00:56:36.234701446Z" level=info msg="CreateContainer within sandbox \"d313a2dd2e5e28b145f753dbeaf0eb65a8e601ea095436b45c863b595eb4eecc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f02cf8df39a3b1c4cda8880ac92152e35a76475fddacf66ad19c2a0c95acc6a8\"" Mar 7 00:56:36.239450 containerd[2016]: time="2026-03-07T00:56:36.238476154Z" level=info msg="StartContainer for \"f02cf8df39a3b1c4cda8880ac92152e35a76475fddacf66ad19c2a0c95acc6a8\"" Mar 7 00:56:36.272319 kubelet[3232]: I0307 00:56:36.271992 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-54c564cdd4-5c6xm" podStartSLOduration=33.21751128 podStartE2EDuration="44.271966247s" podCreationTimestamp="2026-03-07 00:55:52 +0000 UTC" firstStartedPulling="2026-03-07 00:56:21.094126051 +0000 UTC m=+51.238989879" lastFinishedPulling="2026-03-07 00:56:32.148581006 +0000 UTC m=+62.293444846" observedRunningTime="2026-03-07 00:56:33.052880023 +0000 UTC m=+63.197743899" watchObservedRunningTime="2026-03-07 00:56:36.271966247 +0000 UTC m=+66.416830255" Mar 7 00:56:36.368701 systemd[1]: Started cri-containerd-f02cf8df39a3b1c4cda8880ac92152e35a76475fddacf66ad19c2a0c95acc6a8.scope - libcontainer container f02cf8df39a3b1c4cda8880ac92152e35a76475fddacf66ad19c2a0c95acc6a8. 
Mar 7 00:56:36.437404 containerd[2016]: time="2026-03-07T00:56:36.437209979Z" level=info msg="StartContainer for \"f02cf8df39a3b1c4cda8880ac92152e35a76475fddacf66ad19c2a0c95acc6a8\" returns successfully" Mar 7 00:56:37.040739 kubelet[3232]: I0307 00:56:37.040643 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fbhbv" podStartSLOduration=22.309944765 podStartE2EDuration="41.04061503s" podCreationTimestamp="2026-03-07 00:55:56 +0000 UTC" firstStartedPulling="2026-03-07 00:56:17.446444729 +0000 UTC m=+47.591308557" lastFinishedPulling="2026-03-07 00:56:36.17711497 +0000 UTC m=+66.321978822" observedRunningTime="2026-03-07 00:56:37.039095926 +0000 UTC m=+67.183959802" watchObservedRunningTime="2026-03-07 00:56:37.04061503 +0000 UTC m=+67.185478870" Mar 7 00:56:37.385896 kubelet[3232]: I0307 00:56:37.383899 3232 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 00:56:37.385896 kubelet[3232]: I0307 00:56:37.383979 3232 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 00:56:38.069211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3340553203.mount: Deactivated successfully. 
Mar 7 00:56:38.105792 containerd[2016]: time="2026-03-07T00:56:38.105690192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:38.109238 containerd[2016]: time="2026-03-07T00:56:38.108703632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 7 00:56:38.111574 containerd[2016]: time="2026-03-07T00:56:38.111504132Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:38.119282 containerd[2016]: time="2026-03-07T00:56:38.119127036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:38.121192 containerd[2016]: time="2026-03-07T00:56:38.120819564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.942675942s" Mar 7 00:56:38.121192 containerd[2016]: time="2026-03-07T00:56:38.120882636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 7 00:56:38.132872 containerd[2016]: time="2026-03-07T00:56:38.132734532Z" level=info msg="CreateContainer within sandbox \"c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 00:56:38.167587 
containerd[2016]: time="2026-03-07T00:56:38.167510196Z" level=info msg="CreateContainer within sandbox \"c0f9370fbde7ce3468571059198f4a329b266879e58b4fd75e4e63736ed61a9d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fc3e01ba24dd42f40bd4c2254ea18d75c4368cb4255a0b83905230ffc6f1e204\"" Mar 7 00:56:38.169178 containerd[2016]: time="2026-03-07T00:56:38.168583908Z" level=info msg="StartContainer for \"fc3e01ba24dd42f40bd4c2254ea18d75c4368cb4255a0b83905230ffc6f1e204\"" Mar 7 00:56:38.234686 systemd[1]: Started cri-containerd-fc3e01ba24dd42f40bd4c2254ea18d75c4368cb4255a0b83905230ffc6f1e204.scope - libcontainer container fc3e01ba24dd42f40bd4c2254ea18d75c4368cb4255a0b83905230ffc6f1e204. Mar 7 00:56:38.327506 containerd[2016]: time="2026-03-07T00:56:38.325665829Z" level=info msg="StartContainer for \"fc3e01ba24dd42f40bd4c2254ea18d75c4368cb4255a0b83905230ffc6f1e204\" returns successfully" Mar 7 00:56:39.047177 kubelet[3232]: I0307 00:56:39.045874 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-b5bb86c7c-89nsw" podStartSLOduration=5.130375556 podStartE2EDuration="22.04585068s" podCreationTimestamp="2026-03-07 00:56:17 +0000 UTC" firstStartedPulling="2026-03-07 00:56:21.208185704 +0000 UTC m=+51.353049544" lastFinishedPulling="2026-03-07 00:56:38.12366084 +0000 UTC m=+68.268524668" observedRunningTime="2026-03-07 00:56:39.043668624 +0000 UTC m=+69.188532488" watchObservedRunningTime="2026-03-07 00:56:39.04585068 +0000 UTC m=+69.190714520" Mar 7 00:56:40.856016 systemd[1]: Started sshd@10-172.31.17.228:22-20.161.92.111:35750.service - OpenSSH per-connection server daemon (20.161.92.111:35750). 
Mar 7 00:56:41.367814 sshd[6568]: Accepted publickey for core from 20.161.92.111 port 35750 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:41.371442 sshd[6568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:41.381717 systemd-logind[1991]: New session 11 of user core. Mar 7 00:56:41.387651 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 00:56:41.875703 sshd[6568]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:41.881916 systemd[1]: sshd@10-172.31.17.228:22-20.161.92.111:35750.service: Deactivated successfully. Mar 7 00:56:41.886295 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 00:56:41.888968 systemd-logind[1991]: Session 11 logged out. Waiting for processes to exit. Mar 7 00:56:41.891115 systemd-logind[1991]: Removed session 11. Mar 7 00:56:41.969918 systemd[1]: Started sshd@11-172.31.17.228:22-20.161.92.111:35766.service - OpenSSH per-connection server daemon (20.161.92.111:35766). Mar 7 00:56:42.487294 sshd[6584]: Accepted publickey for core from 20.161.92.111 port 35766 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:42.489504 sshd[6584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:42.503474 systemd-logind[1991]: New session 12 of user core. Mar 7 00:56:42.508657 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 00:56:43.084688 sshd[6584]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:43.092032 systemd[1]: sshd@11-172.31.17.228:22-20.161.92.111:35766.service: Deactivated successfully. Mar 7 00:56:43.099821 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 00:56:43.102803 systemd-logind[1991]: Session 12 logged out. Waiting for processes to exit. Mar 7 00:56:43.107347 systemd-logind[1991]: Removed session 12. 
Mar 7 00:56:43.184965 systemd[1]: Started sshd@12-172.31.17.228:22-20.161.92.111:35776.service - OpenSSH per-connection server daemon (20.161.92.111:35776). Mar 7 00:56:43.708943 sshd[6610]: Accepted publickey for core from 20.161.92.111 port 35776 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:43.714453 sshd[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:43.724449 systemd-logind[1991]: New session 13 of user core. Mar 7 00:56:43.733795 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 00:56:44.245874 sshd[6610]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:44.253021 systemd[1]: sshd@12-172.31.17.228:22-20.161.92.111:35776.service: Deactivated successfully. Mar 7 00:56:44.260168 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 00:56:44.263901 systemd-logind[1991]: Session 13 logged out. Waiting for processes to exit. Mar 7 00:56:44.266446 systemd-logind[1991]: Removed session 13. Mar 7 00:56:49.328508 kubelet[3232]: I0307 00:56:49.328444 3232 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:56:49.350704 systemd[1]: Started sshd@13-172.31.17.228:22-20.161.92.111:35786.service - OpenSSH per-connection server daemon (20.161.92.111:35786). Mar 7 00:56:49.875730 sshd[6669]: Accepted publickey for core from 20.161.92.111 port 35786 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:49.878871 sshd[6669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:49.887964 systemd-logind[1991]: New session 14 of user core. Mar 7 00:56:49.893706 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 00:56:50.377716 sshd[6669]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:50.386753 systemd[1]: sshd@13-172.31.17.228:22-20.161.92.111:35786.service: Deactivated successfully. 
Mar 7 00:56:50.392723 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 00:56:50.395918 systemd-logind[1991]: Session 14 logged out. Waiting for processes to exit. Mar 7 00:56:50.398460 systemd-logind[1991]: Removed session 14. Mar 7 00:56:50.471929 systemd[1]: Started sshd@14-172.31.17.228:22-20.161.92.111:36066.service - OpenSSH per-connection server daemon (20.161.92.111:36066). Mar 7 00:56:50.983989 sshd[6684]: Accepted publickey for core from 20.161.92.111 port 36066 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:50.986767 sshd[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:50.998107 systemd-logind[1991]: New session 15 of user core. Mar 7 00:56:51.000691 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 00:56:51.856017 sshd[6684]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:51.865867 systemd[1]: sshd@14-172.31.17.228:22-20.161.92.111:36066.service: Deactivated successfully. Mar 7 00:56:51.876158 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 00:56:51.885224 systemd-logind[1991]: Session 15 logged out. Waiting for processes to exit. Mar 7 00:56:51.889855 systemd-logind[1991]: Removed session 15. Mar 7 00:56:51.980430 systemd[1]: Started sshd@15-172.31.17.228:22-20.161.92.111:36072.service - OpenSSH per-connection server daemon (20.161.92.111:36072). Mar 7 00:56:52.500784 sshd[6696]: Accepted publickey for core from 20.161.92.111 port 36072 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:52.505123 sshd[6696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:52.523317 systemd-logind[1991]: New session 16 of user core. Mar 7 00:56:52.530964 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 00:56:53.996263 sshd[6696]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:54.008259 systemd-logind[1991]: Session 16 logged out. 
Waiting for processes to exit. Mar 7 00:56:54.008710 systemd[1]: sshd@15-172.31.17.228:22-20.161.92.111:36072.service: Deactivated successfully. Mar 7 00:56:54.017204 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 00:56:54.022417 systemd-logind[1991]: Removed session 16. Mar 7 00:56:54.093923 systemd[1]: Started sshd@16-172.31.17.228:22-20.161.92.111:36084.service - OpenSSH per-connection server daemon (20.161.92.111:36084). Mar 7 00:56:54.617406 sshd[6725]: Accepted publickey for core from 20.161.92.111 port 36084 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:54.623100 sshd[6725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:54.637951 systemd-logind[1991]: New session 17 of user core. Mar 7 00:56:54.648936 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 00:56:55.551703 sshd[6725]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:55.562001 systemd[1]: sshd@16-172.31.17.228:22-20.161.92.111:36084.service: Deactivated successfully. Mar 7 00:56:55.570094 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 00:56:55.574807 systemd-logind[1991]: Session 17 logged out. Waiting for processes to exit. Mar 7 00:56:55.576955 systemd-logind[1991]: Removed session 17. Mar 7 00:56:55.655045 systemd[1]: Started sshd@17-172.31.17.228:22-20.161.92.111:36092.service - OpenSSH per-connection server daemon (20.161.92.111:36092). Mar 7 00:56:56.194119 sshd[6738]: Accepted publickey for core from 20.161.92.111 port 36092 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:56.196835 sshd[6738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:56.212785 systemd-logind[1991]: New session 18 of user core. Mar 7 00:56:56.221726 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 7 00:56:56.725474 sshd[6738]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:56.733000 systemd[1]: sshd@17-172.31.17.228:22-20.161.92.111:36092.service: Deactivated successfully.
Mar 7 00:56:56.741794 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 00:56:56.745420 systemd-logind[1991]: Session 18 logged out. Waiting for processes to exit.
Mar 7 00:56:56.747881 systemd-logind[1991]: Removed session 18.
Mar 7 00:57:01.824911 systemd[1]: Started sshd@18-172.31.17.228:22-20.161.92.111:58162.service - OpenSSH per-connection server daemon (20.161.92.111:58162).
Mar 7 00:57:02.342589 sshd[6770]: Accepted publickey for core from 20.161.92.111 port 58162 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:57:02.345505 sshd[6770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:02.354485 systemd-logind[1991]: New session 19 of user core.
Mar 7 00:57:02.360652 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 00:57:02.833096 sshd[6770]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:02.840852 systemd[1]: sshd@18-172.31.17.228:22-20.161.92.111:58162.service: Deactivated successfully.
Mar 7 00:57:02.845720 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 00:57:02.847601 systemd-logind[1991]: Session 19 logged out. Waiting for processes to exit.
Mar 7 00:57:02.849318 systemd-logind[1991]: Removed session 19.
Mar 7 00:57:05.071273 systemd[1]: run-containerd-runc-k8s.io-c3fbd2a28dd1ed1ee0f7b22ff347d49b8fb864a4101ed814a26dc88a06876cd6-runc.Xvdup7.mount: Deactivated successfully.
Mar 7 00:57:07.933936 systemd[1]: Started sshd@19-172.31.17.228:22-20.161.92.111:58170.service - OpenSSH per-connection server daemon (20.161.92.111:58170).
Mar 7 00:57:08.447788 sshd[6828]: Accepted publickey for core from 20.161.92.111 port 58170 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:57:08.450565 sshd[6828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:08.457530 systemd-logind[1991]: New session 20 of user core.
Mar 7 00:57:08.467679 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 00:57:08.923729 sshd[6828]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:08.929870 systemd-logind[1991]: Session 20 logged out. Waiting for processes to exit.
Mar 7 00:57:08.931854 systemd[1]: sshd@19-172.31.17.228:22-20.161.92.111:58170.service: Deactivated successfully.
Mar 7 00:57:08.936077 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 00:57:08.939693 systemd-logind[1991]: Removed session 20.
Mar 7 00:57:14.017923 systemd[1]: Started sshd@20-172.31.17.228:22-20.161.92.111:42792.service - OpenSSH per-connection server daemon (20.161.92.111:42792).
Mar 7 00:57:14.525749 sshd[6847]: Accepted publickey for core from 20.161.92.111 port 42792 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:57:14.528532 sshd[6847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:14.537249 systemd-logind[1991]: New session 21 of user core.
Mar 7 00:57:14.544654 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 00:57:14.993830 sshd[6847]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:15.000345 systemd[1]: sshd@20-172.31.17.228:22-20.161.92.111:42792.service: Deactivated successfully.
Mar 7 00:57:15.005085 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 00:57:15.007841 systemd-logind[1991]: Session 21 logged out. Waiting for processes to exit.
Mar 7 00:57:15.010312 systemd-logind[1991]: Removed session 21.
Mar 7 00:57:20.089036 systemd[1]: Started sshd@21-172.31.17.228:22-20.161.92.111:42798.service - OpenSSH per-connection server daemon (20.161.92.111:42798).
Mar 7 00:57:20.617424 sshd[6884]: Accepted publickey for core from 20.161.92.111 port 42798 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:57:20.619505 sshd[6884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:20.628069 systemd-logind[1991]: New session 22 of user core.
Mar 7 00:57:20.633685 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 00:57:21.083176 sshd[6884]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:21.088366 systemd-logind[1991]: Session 22 logged out. Waiting for processes to exit.
Mar 7 00:57:21.090399 systemd[1]: sshd@21-172.31.17.228:22-20.161.92.111:42798.service: Deactivated successfully.
Mar 7 00:57:21.094794 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 00:57:21.100766 systemd-logind[1991]: Removed session 22.
Mar 7 00:57:26.182935 systemd[1]: Started sshd@22-172.31.17.228:22-20.161.92.111:50118.service - OpenSSH per-connection server daemon (20.161.92.111:50118).
Mar 7 00:57:26.700289 sshd[6918]: Accepted publickey for core from 20.161.92.111 port 50118 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:57:26.702222 sshd[6918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:26.709982 systemd-logind[1991]: New session 23 of user core.
Mar 7 00:57:26.718687 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 00:57:27.180706 sshd[6918]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:27.187213 systemd[1]: sshd@22-172.31.17.228:22-20.161.92.111:50118.service: Deactivated successfully.
Mar 7 00:57:27.192781 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 00:57:27.195708 systemd-logind[1991]: Session 23 logged out. Waiting for processes to exit.
Mar 7 00:57:27.198926 systemd-logind[1991]: Removed session 23.
Mar 7 00:57:35.035352 systemd[1]: run-containerd-runc-k8s.io-c3fbd2a28dd1ed1ee0f7b22ff347d49b8fb864a4101ed814a26dc88a06876cd6-runc.BLCx3a.mount: Deactivated successfully.
Mar 7 00:57:42.569074 systemd[1]: Started sshd@23-172.31.17.228:22-205.210.31.141:49542.service - OpenSSH per-connection server daemon (205.210.31.141:49542).
Mar 7 00:57:43.013326 sshd[6995]: Connection closed by 205.210.31.141 port 49542
Mar 7 00:57:43.014859 systemd[1]: sshd@23-172.31.17.228:22-205.210.31.141:49542.service: Deactivated successfully.
Mar 7 00:57:55.853343 systemd[1]: run-containerd-runc-k8s.io-c300d4c8de10c5d67778edee795d9517700399ad8b7a0f5cd1f912fa57882fc4-runc.APAJpr.mount: Deactivated successfully.
Mar 7 00:58:15.522740 systemd[1]: cri-containerd-6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c.scope: Deactivated successfully.
Mar 7 00:58:15.523892 systemd[1]: cri-containerd-6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c.scope: Consumed 22.834s CPU time.
Mar 7 00:58:15.562788 containerd[2016]: time="2026-03-07T00:58:15.562704732Z" level=info msg="shim disconnected" id=6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c namespace=k8s.io
Mar 7 00:58:15.567486 containerd[2016]: time="2026-03-07T00:58:15.565243656Z" level=warning msg="cleaning up after shim disconnected" id=6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c namespace=k8s.io
Mar 7 00:58:15.567486 containerd[2016]: time="2026-03-07T00:58:15.565297896Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:58:15.567999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c-rootfs.mount: Deactivated successfully.
Mar 7 00:58:16.355791 kubelet[3232]: I0307 00:58:16.355743 3232 scope.go:117] "RemoveContainer" containerID="6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c"
Mar 7 00:58:16.361173 containerd[2016]: time="2026-03-07T00:58:16.361106184Z" level=info msg="CreateContainer within sandbox \"88ce35478c187e993b2c05604e191c5c8c26bdd1c2aabf1c78802d9271f67553\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 7 00:58:16.389069 containerd[2016]: time="2026-03-07T00:58:16.388985796Z" level=info msg="CreateContainer within sandbox \"88ce35478c187e993b2c05604e191c5c8c26bdd1c2aabf1c78802d9271f67553\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2df5c7f3ccd4cb44777f9677b7622b71a0e7c2d4704d2156c934c3819fda7ac0\""
Mar 7 00:58:16.390362 containerd[2016]: time="2026-03-07T00:58:16.389737548Z" level=info msg="StartContainer for \"2df5c7f3ccd4cb44777f9677b7622b71a0e7c2d4704d2156c934c3819fda7ac0\""
Mar 7 00:58:16.454032 systemd[1]: cri-containerd-0a23b8460fdf6b25b48127cd0bd8a68fcfbfb0567525e83e72869fd277692787.scope: Deactivated successfully.
Mar 7 00:58:16.455005 systemd[1]: cri-containerd-0a23b8460fdf6b25b48127cd0bd8a68fcfbfb0567525e83e72869fd277692787.scope: Consumed 5.108s CPU time, 18.0M memory peak, 0B memory swap peak.
Mar 7 00:58:16.464734 systemd[1]: Started cri-containerd-2df5c7f3ccd4cb44777f9677b7622b71a0e7c2d4704d2156c934c3819fda7ac0.scope - libcontainer container 2df5c7f3ccd4cb44777f9677b7622b71a0e7c2d4704d2156c934c3819fda7ac0.
Mar 7 00:58:16.522211 containerd[2016]: time="2026-03-07T00:58:16.522124944Z" level=info msg="shim disconnected" id=0a23b8460fdf6b25b48127cd0bd8a68fcfbfb0567525e83e72869fd277692787 namespace=k8s.io
Mar 7 00:58:16.522211 containerd[2016]: time="2026-03-07T00:58:16.522201528Z" level=warning msg="cleaning up after shim disconnected" id=0a23b8460fdf6b25b48127cd0bd8a68fcfbfb0567525e83e72869fd277692787 namespace=k8s.io
Mar 7 00:58:16.522575 containerd[2016]: time="2026-03-07T00:58:16.522223872Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:58:16.547090 containerd[2016]: time="2026-03-07T00:58:16.546920605Z" level=info msg="StartContainer for \"2df5c7f3ccd4cb44777f9677b7622b71a0e7c2d4704d2156c934c3819fda7ac0\" returns successfully"
Mar 7 00:58:16.567758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a23b8460fdf6b25b48127cd0bd8a68fcfbfb0567525e83e72869fd277692787-rootfs.mount: Deactivated successfully.
Mar 7 00:58:17.362504 kubelet[3232]: I0307 00:58:17.362452 3232 scope.go:117] "RemoveContainer" containerID="0a23b8460fdf6b25b48127cd0bd8a68fcfbfb0567525e83e72869fd277692787"
Mar 7 00:58:17.368054 containerd[2016]: time="2026-03-07T00:58:17.367848025Z" level=info msg="CreateContainer within sandbox \"09fa01ab3a413ade05051a3a794c1e07d50868535548153f4d8d77ad381b81c7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 7 00:58:17.407235 containerd[2016]: time="2026-03-07T00:58:17.407098357Z" level=info msg="CreateContainer within sandbox \"09fa01ab3a413ade05051a3a794c1e07d50868535548153f4d8d77ad381b81c7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"39400c46997b873c0217f384200c593602c9144916c5516b01c422a036c4ef57\""
Mar 7 00:58:17.408238 containerd[2016]: time="2026-03-07T00:58:17.408104257Z" level=info msg="StartContainer for \"39400c46997b873c0217f384200c593602c9144916c5516b01c422a036c4ef57\""
Mar 7 00:58:17.470717 systemd[1]: Started cri-containerd-39400c46997b873c0217f384200c593602c9144916c5516b01c422a036c4ef57.scope - libcontainer container 39400c46997b873c0217f384200c593602c9144916c5516b01c422a036c4ef57.
Mar 7 00:58:17.541681 containerd[2016]: time="2026-03-07T00:58:17.541555190Z" level=info msg="StartContainer for \"39400c46997b873c0217f384200c593602c9144916c5516b01c422a036c4ef57\" returns successfully"
Mar 7 00:58:20.387833 systemd[1]: cri-containerd-375754f898165d04b9b9a08ed8f84b91dac704d7d59ff3258ab147f40d607dc6.scope: Deactivated successfully.
Mar 7 00:58:20.388972 systemd[1]: cri-containerd-375754f898165d04b9b9a08ed8f84b91dac704d7d59ff3258ab147f40d607dc6.scope: Consumed 4.559s CPU time, 16.2M memory peak, 0B memory swap peak.
Mar 7 00:58:20.443730 containerd[2016]: time="2026-03-07T00:58:20.442535596Z" level=info msg="shim disconnected" id=375754f898165d04b9b9a08ed8f84b91dac704d7d59ff3258ab147f40d607dc6 namespace=k8s.io
Mar 7 00:58:20.443730 containerd[2016]: time="2026-03-07T00:58:20.442673080Z" level=warning msg="cleaning up after shim disconnected" id=375754f898165d04b9b9a08ed8f84b91dac704d7d59ff3258ab147f40d607dc6 namespace=k8s.io
Mar 7 00:58:20.443730 containerd[2016]: time="2026-03-07T00:58:20.442697896Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:58:20.448216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-375754f898165d04b9b9a08ed8f84b91dac704d7d59ff3258ab147f40d607dc6-rootfs.mount: Deactivated successfully.
Mar 7 00:58:21.385628 kubelet[3232]: I0307 00:58:21.385584 3232 scope.go:117] "RemoveContainer" containerID="375754f898165d04b9b9a08ed8f84b91dac704d7d59ff3258ab147f40d607dc6"
Mar 7 00:58:21.389980 containerd[2016]: time="2026-03-07T00:58:21.389464181Z" level=info msg="CreateContainer within sandbox \"964de55bd97423e88751fbd629f7d4cddd72c148848fc4e0aaaded5c14ac668c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 7 00:58:21.432224 containerd[2016]: time="2026-03-07T00:58:21.431891837Z" level=info msg="CreateContainer within sandbox \"964de55bd97423e88751fbd629f7d4cddd72c148848fc4e0aaaded5c14ac668c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"61d271b065cf44ac42ac021377365185dcb7599b90dec74f7d0238ae14ee53a7\""
Mar 7 00:58:21.433808 containerd[2016]: time="2026-03-07T00:58:21.432737489Z" level=info msg="StartContainer for \"61d271b065cf44ac42ac021377365185dcb7599b90dec74f7d0238ae14ee53a7\""
Mar 7 00:58:21.502711 systemd[1]: Started cri-containerd-61d271b065cf44ac42ac021377365185dcb7599b90dec74f7d0238ae14ee53a7.scope - libcontainer container 61d271b065cf44ac42ac021377365185dcb7599b90dec74f7d0238ae14ee53a7.
Mar 7 00:58:21.579677 containerd[2016]: time="2026-03-07T00:58:21.578878038Z" level=info msg="StartContainer for \"61d271b065cf44ac42ac021377365185dcb7599b90dec74f7d0238ae14ee53a7\" returns successfully"
Mar 7 00:58:23.082066 kubelet[3232]: E0307 00:58:23.081991 3232 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-228?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 7 00:58:28.167473 systemd[1]: cri-containerd-2df5c7f3ccd4cb44777f9677b7622b71a0e7c2d4704d2156c934c3819fda7ac0.scope: Deactivated successfully.
Mar 7 00:58:28.207707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2df5c7f3ccd4cb44777f9677b7622b71a0e7c2d4704d2156c934c3819fda7ac0-rootfs.mount: Deactivated successfully.
Mar 7 00:58:28.218100 containerd[2016]: time="2026-03-07T00:58:28.217748915Z" level=info msg="shim disconnected" id=2df5c7f3ccd4cb44777f9677b7622b71a0e7c2d4704d2156c934c3819fda7ac0 namespace=k8s.io
Mar 7 00:58:28.218100 containerd[2016]: time="2026-03-07T00:58:28.217825643Z" level=warning msg="cleaning up after shim disconnected" id=2df5c7f3ccd4cb44777f9677b7622b71a0e7c2d4704d2156c934c3819fda7ac0 namespace=k8s.io
Mar 7 00:58:28.218100 containerd[2016]: time="2026-03-07T00:58:28.217848659Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:58:28.417423 kubelet[3232]: I0307 00:58:28.417324 3232 scope.go:117] "RemoveContainer" containerID="6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c"
Mar 7 00:58:28.418400 kubelet[3232]: I0307 00:58:28.417743 3232 scope.go:117] "RemoveContainer" containerID="2df5c7f3ccd4cb44777f9677b7622b71a0e7c2d4704d2156c934c3819fda7ac0"
Mar 7 00:58:28.418400 kubelet[3232]: E0307 00:58:28.417954 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5588576f44-smrkb_tigera-operator(7191b3b9-6a86-4e87-b195-8014688e3bfd)\"" pod="tigera-operator/tigera-operator-5588576f44-smrkb" podUID="7191b3b9-6a86-4e87-b195-8014688e3bfd"
Mar 7 00:58:28.421123 containerd[2016]: time="2026-03-07T00:58:28.420489468Z" level=info msg="RemoveContainer for \"6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c\""
Mar 7 00:58:28.427707 containerd[2016]: time="2026-03-07T00:58:28.427653420Z" level=info msg="RemoveContainer for \"6e172f3aa4861d5761db199c79adcd5074fd8c81c2fa1f0fe6560c13a3a4c34c\" returns successfully"
Mar 7 00:58:33.083081 kubelet[3232]: E0307 00:58:33.082602 3232 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-228?timeout=10s\": context deadline exceeded"