Mar 14 00:13:36.246271 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 14 00:13:36.246325 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 13 22:32:52 -00 2026
Mar 14 00:13:36.246353 kernel: KASLR disabled due to lack of seed
Mar 14 00:13:36.246371 kernel: efi: EFI v2.7 by EDK II
Mar 14 00:13:36.246388 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Mar 14 00:13:36.246406 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:13:36.246425 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 14 00:13:36.246441 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 14 00:13:36.246458 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 14 00:13:36.246473 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 14 00:13:36.246494 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 14 00:13:36.246511 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 14 00:13:36.246527 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 14 00:13:36.246545 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 14 00:13:36.246566 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 14 00:13:36.246587 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 14 00:13:36.246605 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 14 00:13:36.246621 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 14 00:13:36.246638 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 14 00:13:36.246655 kernel: printk: bootconsole [uart0] enabled
Mar 14 00:13:36.246673 kernel: NUMA: Failed to initialise from firmware
Mar 14 00:13:36.246690 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 14 00:13:36.246707 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 14 00:13:36.246724 kernel: Zone ranges:
Mar 14 00:13:36.246741 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 14 00:13:36.246757 kernel: DMA32 empty
Mar 14 00:13:36.246778 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 14 00:13:36.246796 kernel: Movable zone start for each node
Mar 14 00:13:36.246812 kernel: Early memory node ranges
Mar 14 00:13:36.246830 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 14 00:13:36.246847 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 14 00:13:36.246864 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 14 00:13:36.246880 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 14 00:13:36.246897 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 14 00:13:36.246914 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 14 00:13:36.246947 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 14 00:13:36.246992 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 14 00:13:36.247010 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 14 00:13:36.247034 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 14 00:13:36.247053 kernel: psci: probing for conduit method from ACPI.
Mar 14 00:13:36.247077 kernel: psci: PSCIv1.0 detected in firmware.
Mar 14 00:13:36.247110 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 14 00:13:36.247131 kernel: psci: Trusted OS migration not required
Mar 14 00:13:36.247155 kernel: psci: SMC Calling Convention v1.1
Mar 14 00:13:36.247173 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 14 00:13:36.247191 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 14 00:13:36.247209 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 14 00:13:36.247227 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 14 00:13:36.247245 kernel: Detected PIPT I-cache on CPU0
Mar 14 00:13:36.247262 kernel: CPU features: detected: GIC system register CPU interface
Mar 14 00:13:36.247280 kernel: CPU features: detected: Spectre-v2
Mar 14 00:13:36.247298 kernel: CPU features: detected: Spectre-v3a
Mar 14 00:13:36.247315 kernel: CPU features: detected: Spectre-BHB
Mar 14 00:13:36.247333 kernel: CPU features: detected: ARM erratum 1742098
Mar 14 00:13:36.247355 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 14 00:13:36.247373 kernel: alternatives: applying boot alternatives
Mar 14 00:13:36.247394 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:13:36.247413 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:13:36.247444 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:13:36.247465 kernel: Fallback order for Node 0: 0
Mar 14 00:13:36.247483 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 14 00:13:36.247501 kernel: Policy zone: Normal
Mar 14 00:13:36.247519 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:13:36.247537 kernel: software IO TLB: area num 2.
Mar 14 00:13:36.247555 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 14 00:13:36.247582 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Mar 14 00:13:36.247601 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:13:36.247619 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:13:36.247639 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:13:36.247659 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:13:36.247678 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:13:36.247697 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:13:36.247715 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:13:36.247733 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:13:36.247752 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 14 00:13:36.247769 kernel: GICv3: 96 SPIs implemented
Mar 14 00:13:36.247793 kernel: GICv3: 0 Extended SPIs implemented
Mar 14 00:13:36.247811 kernel: Root IRQ handler: gic_handle_irq
Mar 14 00:13:36.247829 kernel: GICv3: GICv3 features: 16 PPIs
Mar 14 00:13:36.247848 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 14 00:13:36.247866 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 14 00:13:36.247885 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 14 00:13:36.247903 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 14 00:13:36.247921 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 14 00:13:36.247939 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 14 00:13:36.247987 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 14 00:13:36.248008 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:13:36.248026 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 14 00:13:36.248052 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 14 00:13:36.248071 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 14 00:13:36.248089 kernel: Console: colour dummy device 80x25
Mar 14 00:13:36.248108 kernel: printk: console [tty1] enabled
Mar 14 00:13:36.248127 kernel: ACPI: Core revision 20230628
Mar 14 00:13:36.248146 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 14 00:13:36.248164 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:13:36.248183 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:13:36.248201 kernel: landlock: Up and running.
Mar 14 00:13:36.248225 kernel: SELinux: Initializing.
Mar 14 00:13:36.248244 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:36.248263 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:36.248282 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:13:36.248300 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:13:36.248319 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:13:36.248337 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:13:36.248356 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 14 00:13:36.248374 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 14 00:13:36.248397 kernel: Remapping and enabling EFI services.
Mar 14 00:13:36.248416 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:13:36.248434 kernel: Detected PIPT I-cache on CPU1
Mar 14 00:13:36.248452 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 14 00:13:36.248471 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 14 00:13:36.248489 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 14 00:13:36.248508 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:13:36.248525 kernel: SMP: Total of 2 processors activated.
Mar 14 00:13:36.248544 kernel: CPU features: detected: 32-bit EL0 Support
Mar 14 00:13:36.248566 kernel: CPU features: detected: 32-bit EL1 Support
Mar 14 00:13:36.248585 kernel: CPU features: detected: CRC32 instructions
Mar 14 00:13:36.248604 kernel: CPU: All CPU(s) started at EL1
Mar 14 00:13:36.248634 kernel: alternatives: applying system-wide alternatives
Mar 14 00:13:36.248657 kernel: devtmpfs: initialized
Mar 14 00:13:36.248677 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:13:36.248696 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:13:36.248717 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:13:36.248736 kernel: SMBIOS 3.0.0 present.
Mar 14 00:13:36.248761 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 14 00:13:36.248780 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:13:36.248800 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 14 00:13:36.248819 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 14 00:13:36.248839 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 14 00:13:36.248858 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:13:36.248877 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1
Mar 14 00:13:36.248896 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:13:36.248920 kernel: cpuidle: using governor menu
Mar 14 00:13:36.248939 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 14 00:13:36.249062 kernel: ASID allocator initialised with 65536 entries
Mar 14 00:13:36.249085 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:13:36.249105 kernel: Serial: AMBA PL011 UART driver
Mar 14 00:13:36.249124 kernel: Modules: 17488 pages in range for non-PLT usage
Mar 14 00:13:36.249143 kernel: Modules: 509008 pages in range for PLT usage
Mar 14 00:13:36.249162 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:13:36.249183 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:13:36.249209 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 14 00:13:36.249229 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 14 00:13:36.249248 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:13:36.249267 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:13:36.249286 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 14 00:13:36.249305 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 14 00:13:36.249324 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:13:36.249343 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:13:36.249361 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:13:36.249385 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:13:36.249404 kernel: ACPI: Interpreter enabled
Mar 14 00:13:36.249423 kernel: ACPI: Using GIC for interrupt routing
Mar 14 00:13:36.249442 kernel: ACPI: MCFG table detected, 1 entries
Mar 14 00:13:36.249461 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 14 00:13:36.249813 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:13:36.250080 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 14 00:13:36.250301 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 14 00:13:36.250521 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 14 00:13:36.250738 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 14 00:13:36.250764 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 14 00:13:36.250783 kernel: acpiphp: Slot [1] registered
Mar 14 00:13:36.250803 kernel: acpiphp: Slot [2] registered
Mar 14 00:13:36.250823 kernel: acpiphp: Slot [3] registered
Mar 14 00:13:36.250841 kernel: acpiphp: Slot [4] registered
Mar 14 00:13:36.250860 kernel: acpiphp: Slot [5] registered
Mar 14 00:13:36.250885 kernel: acpiphp: Slot [6] registered
Mar 14 00:13:36.250904 kernel: acpiphp: Slot [7] registered
Mar 14 00:13:36.250923 kernel: acpiphp: Slot [8] registered
Mar 14 00:13:36.253888 kernel: acpiphp: Slot [9] registered
Mar 14 00:13:36.253918 kernel: acpiphp: Slot [10] registered
Mar 14 00:13:36.253937 kernel: acpiphp: Slot [11] registered
Mar 14 00:13:36.253976 kernel: acpiphp: Slot [12] registered
Mar 14 00:13:36.253999 kernel: acpiphp: Slot [13] registered
Mar 14 00:13:36.254018 kernel: acpiphp: Slot [14] registered
Mar 14 00:13:36.254038 kernel: acpiphp: Slot [15] registered
Mar 14 00:13:36.254067 kernel: acpiphp: Slot [16] registered
Mar 14 00:13:36.254086 kernel: acpiphp: Slot [17] registered
Mar 14 00:13:36.254106 kernel: acpiphp: Slot [18] registered
Mar 14 00:13:36.254125 kernel: acpiphp: Slot [19] registered
Mar 14 00:13:36.254143 kernel: acpiphp: Slot [20] registered
Mar 14 00:13:36.254162 kernel: acpiphp: Slot [21] registered
Mar 14 00:13:36.254181 kernel: acpiphp: Slot [22] registered
Mar 14 00:13:36.254199 kernel: acpiphp: Slot [23] registered
Mar 14 00:13:36.254218 kernel: acpiphp: Slot [24] registered
Mar 14 00:13:36.254242 kernel: acpiphp: Slot [25] registered
Mar 14 00:13:36.254261 kernel: acpiphp: Slot [26] registered
Mar 14 00:13:36.254280 kernel: acpiphp: Slot [27] registered
Mar 14 00:13:36.254299 kernel: acpiphp: Slot [28] registered
Mar 14 00:13:36.254318 kernel: acpiphp: Slot [29] registered
Mar 14 00:13:36.254337 kernel: acpiphp: Slot [30] registered
Mar 14 00:13:36.254355 kernel: acpiphp: Slot [31] registered
Mar 14 00:13:36.254374 kernel: PCI host bridge to bus 0000:00
Mar 14 00:13:36.254639 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 14 00:13:36.254853 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 14 00:13:36.255105 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 14 00:13:36.255316 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 14 00:13:36.255588 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 14 00:13:36.255862 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 14 00:13:36.256165 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 14 00:13:36.256421 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 14 00:13:36.256643 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 14 00:13:36.256865 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 14 00:13:36.257169 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 14 00:13:36.257460 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 14 00:13:36.257715 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 14 00:13:36.258020 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 14 00:13:36.258284 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 14 00:13:36.258504 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 14 00:13:36.258708 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 14 00:13:36.258908 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 14 00:13:36.258977 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 14 00:13:36.259003 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 14 00:13:36.259023 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 14 00:13:36.259043 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 14 00:13:36.259072 kernel: iommu: Default domain type: Translated
Mar 14 00:13:36.259092 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 14 00:13:36.259111 kernel: efivars: Registered efivars operations
Mar 14 00:13:36.259131 kernel: vgaarb: loaded
Mar 14 00:13:36.259150 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 14 00:13:36.259169 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:13:36.259196 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:13:36.259223 kernel: pnp: PnP ACPI init
Mar 14 00:13:36.259484 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 14 00:13:36.259521 kernel: pnp: PnP ACPI: found 1 devices
Mar 14 00:13:36.259541 kernel: NET: Registered PF_INET protocol family
Mar 14 00:13:36.259560 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:13:36.259580 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:13:36.259600 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:13:36.259619 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:13:36.259639 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:13:36.259658 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:13:36.259683 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:13:36.259703 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:13:36.259722 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:13:36.259741 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:13:36.259760 kernel: kvm [1]: HYP mode not available
Mar 14 00:13:36.259779 kernel: Initialise system trusted keyrings
Mar 14 00:13:36.259798 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 14 00:13:36.259817 kernel: Key type asymmetric registered
Mar 14 00:13:36.259836 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:13:36.259860 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 14 00:13:36.259879 kernel: io scheduler mq-deadline registered
Mar 14 00:13:36.259898 kernel: io scheduler kyber registered
Mar 14 00:13:36.259916 kernel: io scheduler bfq registered
Mar 14 00:13:36.263158 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 14 00:13:36.263205 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 14 00:13:36.263225 kernel: ACPI: button: Power Button [PWRB]
Mar 14 00:13:36.263246 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 14 00:13:36.263267 kernel: ACPI: button: Sleep Button [SLPB]
Mar 14 00:13:36.263297 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:13:36.263317 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 14 00:13:36.263561 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 14 00:13:36.263593 kernel: printk: console [ttyS0] disabled
Mar 14 00:13:36.263615 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 14 00:13:36.263636 kernel: printk: console [ttyS0] enabled
Mar 14 00:13:36.263655 kernel: printk: bootconsole [uart0] disabled
Mar 14 00:13:36.263674 kernel: thunder_xcv, ver 1.0
Mar 14 00:13:36.263693 kernel: thunder_bgx, ver 1.0
Mar 14 00:13:36.263720 kernel: nicpf, ver 1.0
Mar 14 00:13:36.263739 kernel: nicvf, ver 1.0
Mar 14 00:13:36.264185 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 14 00:13:36.264544 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-14T00:13:35 UTC (1773447215)
Mar 14 00:13:36.264576 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 14 00:13:36.264597 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 14 00:13:36.264616 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 14 00:13:36.264636 kernel: watchdog: Hard watchdog permanently disabled
Mar 14 00:13:36.264665 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:13:36.264684 kernel: Segment Routing with IPv6
Mar 14 00:13:36.264703 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:13:36.264722 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:13:36.264741 kernel: Key type dns_resolver registered
Mar 14 00:13:36.264760 kernel: registered taskstats version 1
Mar 14 00:13:36.264780 kernel: Loading compiled-in X.509 certificates
Mar 14 00:13:36.264799 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 16e13a4d63c54048487d2b18c824fa4694264505'
Mar 14 00:13:36.264818 kernel: Key type .fscrypt registered
Mar 14 00:13:36.264842 kernel: Key type fscrypt-provisioning registered
Mar 14 00:13:36.264860 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:13:36.264879 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:13:36.264899 kernel: ima: No architecture policies found
Mar 14 00:13:36.264918 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 14 00:13:36.264938 kernel: clk: Disabling unused clocks
Mar 14 00:13:36.264986 kernel: Freeing unused kernel memory: 39424K
Mar 14 00:13:36.265008 kernel: Run /init as init process
Mar 14 00:13:36.265027 kernel: with arguments:
Mar 14 00:13:36.265054 kernel: /init
Mar 14 00:13:36.265073 kernel: with environment:
Mar 14 00:13:36.265094 kernel: HOME=/
Mar 14 00:13:36.265112 kernel: TERM=linux
Mar 14 00:13:36.265136 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:13:36.265161 systemd[1]: Detected virtualization amazon.
Mar 14 00:13:36.265182 systemd[1]: Detected architecture arm64.
Mar 14 00:13:36.265202 systemd[1]: Running in initrd.
Mar 14 00:13:36.265228 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:13:36.265248 systemd[1]: Hostname set to .
Mar 14 00:13:36.265270 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:13:36.265290 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:13:36.265310 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:36.265331 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:36.265352 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:13:36.265373 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:13:36.265399 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:13:36.265420 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:13:36.265444 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:13:36.265465 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:13:36.265486 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:36.265507 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:36.265532 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:13:36.265552 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:13:36.265573 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:13:36.265593 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:13:36.265614 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:13:36.265634 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:13:36.265656 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:13:36.265677 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:13:36.265699 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:13:36.265725 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:13:36.265747 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:13:36.265767 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:13:36.265788 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:13:36.265813 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:13:36.265834 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:13:36.265854 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:13:36.265875 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:13:36.265895 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:13:36.265921 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:36.265942 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:13:36.265991 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:36.266056 systemd-journald[252]: Collecting audit messages is disabled.
Mar 14 00:13:36.266107 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:13:36.266130 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:13:36.266151 systemd-journald[252]: Journal started
Mar 14 00:13:36.266193 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2f4f39d78c358e5e1bb6649a06d49d) is 8.0M, max 75.3M, 67.3M free.
Mar 14 00:13:36.265044 systemd-modules-load[253]: Inserted module 'overlay'
Mar 14 00:13:36.273356 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:13:36.286789 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:13:36.302163 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:36.307627 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:13:36.314090 systemd-modules-load[253]: Inserted module 'br_netfilter'
Mar 14 00:13:36.316396 kernel: Bridge firewalling registered
Mar 14 00:13:36.322288 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:36.329485 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:13:36.339662 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:13:36.359725 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:13:36.379249 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:13:36.382726 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:13:36.397435 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:13:36.408074 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:36.417020 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:13:36.436313 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:13:36.444128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:13:36.472342 dracut-cmdline[287]: dracut-dracut-053
Mar 14 00:13:36.479842 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:13:36.536659 systemd-resolved[288]: Positive Trust Anchors:
Mar 14 00:13:36.536697 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:13:36.536760 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:13:36.641980 kernel: SCSI subsystem initialized
Mar 14 00:13:36.648995 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:13:36.661991 kernel: iscsi: registered transport (tcp)
Mar 14 00:13:36.685234 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:13:36.685310 kernel: QLogic iSCSI HBA Driver
Mar 14 00:13:36.763995 kernel: random: crng init done
Mar 14 00:13:36.764517 systemd-resolved[288]: Defaulting to hostname 'linux'.
Mar 14 00:13:36.769178 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:13:36.779156 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:36.798014 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:13:36.809253 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:13:36.846344 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:13:36.846421 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:13:36.848721 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:13:36.916008 kernel: raid6: neonx8 gen() 6799 MB/s
Mar 14 00:13:36.932998 kernel: raid6: neonx4 gen() 6643 MB/s
Mar 14 00:13:36.949999 kernel: raid6: neonx2 gen() 5503 MB/s
Mar 14 00:13:36.966995 kernel: raid6: neonx1 gen() 3978 MB/s
Mar 14 00:13:36.984003 kernel: raid6: int64x8 gen() 3818 MB/s
Mar 14 00:13:37.000992 kernel: raid6: int64x4 gen() 3720 MB/s
Mar 14 00:13:37.017997 kernel: raid6: int64x2 gen() 3610 MB/s
Mar 14 00:13:37.036108 kernel: raid6: int64x1 gen() 2764 MB/s
Mar 14 00:13:37.036148 kernel: raid6: using algorithm neonx8 gen() 6799 MB/s
Mar 14 00:13:37.055080 kernel: raid6: .... xor() 4755 MB/s, rmw enabled
Mar 14 00:13:37.055134 kernel: raid6: using neon recovery algorithm
Mar 14 00:13:37.062991 kernel: xor: measuring software checksum speed
Mar 14 00:13:37.065478 kernel: 8regs : 9857 MB/sec
Mar 14 00:13:37.065514 kernel: 32regs : 11904 MB/sec
Mar 14 00:13:37.066809 kernel: arm64_neon : 9287 MB/sec
Mar 14 00:13:37.066855 kernel: xor: using function: 32regs (11904 MB/sec)
Mar 14 00:13:37.152003 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:13:37.172874 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:13:37.184351 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:13:37.222734 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Mar 14 00:13:37.231074 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:13:37.245343 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:13:37.281323 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Mar 14 00:13:37.341260 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:13:37.354781 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:13:37.473462 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:37.490304 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:13:37.525325 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:13:37.531934 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:13:37.538213 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:37.544139 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:13:37.557231 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:13:37.603563 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:13:37.688598 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 14 00:13:37.688667 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 14 00:13:37.691718 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:13:37.694484 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 14 00:13:37.696275 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 14 00:13:37.691991 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:37.702422 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:37.705089 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:13:37.715108 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:37.720428 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:37.735972 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 14 00:13:37.736059 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 14 00:13:37.736388 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:bd:e8:55:b7:99
Mar 14 00:13:37.739397 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:37.753981 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 14 00:13:37.765677 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:13:37.765750 kernel: GPT:9289727 != 33554431
Mar 14 00:13:37.765777 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:13:37.769115 kernel: GPT:9289727 != 33554431
Mar 14 00:13:37.770565 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:13:37.771800 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:37.770662 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:37.781425 (udev-worker)[527]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:13:37.792303 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:37.846842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:37.886006 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (545)
Mar 14 00:13:37.913981 kernel: BTRFS: device fsid df62721e-ebc0-40bc-8956-1227b067a773 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (541)
Mar 14 00:13:37.990731 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 14 00:13:38.021175 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 14 00:13:38.049212 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 14 00:13:38.052138 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 14 00:13:38.068919 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 14 00:13:38.083441 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:13:38.094637 disk-uuid[662]: Primary Header is updated.
Mar 14 00:13:38.094637 disk-uuid[662]: Secondary Entries is updated.
Mar 14 00:13:38.094637 disk-uuid[662]: Secondary Header is updated.
Mar 14 00:13:38.111001 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:38.119061 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:38.127052 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:39.125597 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:39.127492 disk-uuid[663]: The operation has completed successfully.
Mar 14 00:13:39.331748 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:13:39.331935 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:13:39.380264 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:13:39.404840 sh[1004]: Success
Mar 14 00:13:39.432015 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 14 00:13:39.546347 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:13:39.554191 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:13:39.561361 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:13:39.593991 kernel: BTRFS info (device dm-0): first mount of filesystem df62721e-ebc0-40bc-8956-1227b067a773
Mar 14 00:13:39.594054 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:39.594081 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:13:39.594976 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:13:39.596396 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:13:39.730995 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:13:39.733327 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:13:39.738094 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:13:39.752224 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:13:39.759257 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:13:39.791501 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:39.791571 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:39.791610 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:13:39.799992 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:13:39.819374 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:13:39.823118 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:39.833088 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:13:39.850435 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:13:39.961110 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:13:39.979322 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:13:40.034819 systemd-networkd[1197]: lo: Link UP
Mar 14 00:13:40.034841 systemd-networkd[1197]: lo: Gained carrier
Mar 14 00:13:40.037836 systemd-networkd[1197]: Enumeration completed
Mar 14 00:13:40.038811 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:40.038818 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:13:40.045321 systemd-networkd[1197]: eth0: Link UP
Mar 14 00:13:40.045329 systemd-networkd[1197]: eth0: Gained carrier
Mar 14 00:13:40.045348 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:40.051348 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:13:40.060409 systemd[1]: Reached target network.target - Network.
Mar 14 00:13:40.082053 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.26.39/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 14 00:13:40.339165 ignition[1107]: Ignition 2.19.0
Mar 14 00:13:40.339702 ignition[1107]: Stage: fetch-offline
Mar 14 00:13:40.341612 ignition[1107]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:40.341652 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:40.342236 ignition[1107]: Ignition finished successfully
Mar 14 00:13:40.350826 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:13:40.363386 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:13:40.389793 ignition[1206]: Ignition 2.19.0
Mar 14 00:13:40.389814 ignition[1206]: Stage: fetch
Mar 14 00:13:40.390458 ignition[1206]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:40.390483 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:40.390632 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:40.414391 ignition[1206]: PUT result: OK
Mar 14 00:13:40.417888 ignition[1206]: parsed url from cmdline: ""
Mar 14 00:13:40.417903 ignition[1206]: no config URL provided
Mar 14 00:13:40.417918 ignition[1206]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:13:40.417971 ignition[1206]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:13:40.418008 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:40.420295 ignition[1206]: PUT result: OK
Mar 14 00:13:40.420375 ignition[1206]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 14 00:13:40.427605 ignition[1206]: GET result: OK
Mar 14 00:13:40.427808 ignition[1206]: parsing config with SHA512: 2aa10c1e3ff74e0c0f86c6187b83da3c19f78ac4ac4f5a90928252586da9aa8590132f46a114db62a0bfca6c7863f541c96f90e8c122ad204c3498b0f13ce576
Mar 14 00:13:40.441079 unknown[1206]: fetched base config from "system"
Mar 14 00:13:40.441108 unknown[1206]: fetched base config from "system"
Mar 14 00:13:40.444724 ignition[1206]: fetch: fetch complete
Mar 14 00:13:40.441123 unknown[1206]: fetched user config from "aws"
Mar 14 00:13:40.444735 ignition[1206]: fetch: fetch passed
Mar 14 00:13:40.444823 ignition[1206]: Ignition finished successfully
Mar 14 00:13:40.467784 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:13:40.496268 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:13:40.524219 ignition[1212]: Ignition 2.19.0
Mar 14 00:13:40.524252 ignition[1212]: Stage: kargs
Mar 14 00:13:40.526172 ignition[1212]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:40.526199 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:40.527541 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:40.535159 ignition[1212]: PUT result: OK
Mar 14 00:13:40.539516 ignition[1212]: kargs: kargs passed
Mar 14 00:13:40.539837 ignition[1212]: Ignition finished successfully
Mar 14 00:13:40.546051 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:13:40.559940 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:13:40.585424 ignition[1219]: Ignition 2.19.0
Mar 14 00:13:40.585445 ignition[1219]: Stage: disks
Mar 14 00:13:40.586670 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:40.586697 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:40.586858 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:40.597329 ignition[1219]: PUT result: OK
Mar 14 00:13:40.605610 ignition[1219]: disks: disks passed
Mar 14 00:13:40.605786 ignition[1219]: Ignition finished successfully
Mar 14 00:13:40.609436 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:13:40.617096 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:13:40.617381 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:13:40.617658 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:13:40.618372 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:13:40.618741 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:13:40.632394 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:13:40.693920 systemd-fsck[1227]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:13:40.698581 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:13:40.715341 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:13:40.799995 kernel: EXT4-fs (nvme0n1p9): mounted filesystem af566013-4e57-4e7f-9689-a2e15898536d r/w with ordered data mode. Quota mode: none.
Mar 14 00:13:40.801800 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:13:40.806157 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:13:40.825148 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:13:40.831634 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:13:40.834538 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:13:40.834749 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:13:40.834798 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:13:40.859980 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1246)
Mar 14 00:13:40.864444 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:40.864504 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:40.866489 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:13:40.872501 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:13:40.877474 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:13:40.887529 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:13:40.894878 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:13:41.259051 initrd-setup-root[1270]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:13:41.291024 initrd-setup-root[1277]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:13:41.312477 initrd-setup-root[1284]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:13:41.316734 systemd-networkd[1197]: eth0: Gained IPv6LL
Mar 14 00:13:41.324474 initrd-setup-root[1291]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:13:41.728706 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:13:41.741273 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:13:41.749259 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:13:41.770984 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:41.769706 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:13:41.811431 ignition[1359]: INFO : Ignition 2.19.0
Mar 14 00:13:41.818431 ignition[1359]: INFO : Stage: mount
Mar 14 00:13:41.818431 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:41.818431 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:41.818431 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:41.818431 ignition[1359]: INFO : PUT result: OK
Mar 14 00:13:41.832017 ignition[1359]: INFO : mount: mount passed
Mar 14 00:13:41.832017 ignition[1359]: INFO : Ignition finished successfully
Mar 14 00:13:41.837379 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:13:41.840058 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:13:41.857898 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:13:41.874112 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:13:41.911993 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1371)
Mar 14 00:13:41.916988 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:41.917037 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:41.917065 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:13:41.923009 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:13:41.926173 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:13:41.967248 ignition[1388]: INFO : Ignition 2.19.0
Mar 14 00:13:41.967248 ignition[1388]: INFO : Stage: files
Mar 14 00:13:41.971149 ignition[1388]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:41.971149 ignition[1388]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:41.971149 ignition[1388]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:41.978730 ignition[1388]: INFO : PUT result: OK
Mar 14 00:13:41.987205 ignition[1388]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:13:42.001664 ignition[1388]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:13:42.001664 ignition[1388]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:13:42.041096 ignition[1388]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:13:42.044267 ignition[1388]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:13:42.044267 ignition[1388]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:13:42.042218 unknown[1388]: wrote ssh authorized keys file for user: core
Mar 14 00:13:42.055085 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:13:42.059641 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 14 00:13:42.164085 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:13:42.321859 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 14 00:13:42.327281 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Mar 14 00:13:49.814877 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:13:50.287393 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 14 00:13:50.292283 ignition[1388]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:13:50.292283 ignition[1388]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:13:50.292283 ignition[1388]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:13:50.292283 ignition[1388]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:13:50.292283 ignition[1388]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:13:50.292283 ignition[1388]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:13:50.292283 ignition[1388]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:13:50.292283 ignition[1388]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:13:50.292283 ignition[1388]: INFO : files: files passed
Mar 14 00:13:50.292283 ignition[1388]: INFO : Ignition finished successfully
Mar 14 00:13:50.323454 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:13:50.341481 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:13:50.351541 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:13:50.364483 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:13:50.364896 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:13:50.386747 initrd-setup-root-after-ignition[1416]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:50.386747 initrd-setup-root-after-ignition[1416]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:50.396501 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:50.403076 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:13:50.407386 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:13:50.418283 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:13:50.475145 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:13:50.475572 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:13:50.486431 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:13:50.488859 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:13:50.491552 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:13:50.507376 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:13:50.536935 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:13:50.553986 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:13:50.580347 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:50.583232 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:50.586109 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:13:50.595167 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:13:50.595411 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:13:50.598632 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:13:50.608053 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:13:50.612169 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:13:50.615004 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:13:50.623246 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:13:50.626491 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:13:50.633147 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:13:50.636315 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:13:50.643980 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:13:50.646388 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:13:50.648399 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:13:50.648651 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:13:50.659697 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:50.662208 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:50.665534 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:13:50.670035 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:50.675836 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:13:50.676229 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:13:50.682010 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:13:50.682266 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:13:50.684735 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:13:50.684940 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:13:50.706295 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:13:50.715306 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:13:50.718098 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:50.731474 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:13:50.738295 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:13:50.741251 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:50.751536 ignition[1440]: INFO : Ignition 2.19.0
Mar 14 00:13:50.751536 ignition[1440]: INFO : Stage: umount
Mar 14 00:13:50.755217 ignition[1440]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:50.755217 ignition[1440]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:50.760171 ignition[1440]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:50.759546 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:13:50.771205 ignition[1440]: INFO : PUT result: OK
Mar 14 00:13:50.759866 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:13:50.780190 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:13:50.785711 ignition[1440]: INFO : umount: umount passed
Mar 14 00:13:50.785711 ignition[1440]: INFO : Ignition finished successfully
Mar 14 00:13:50.780405 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:13:50.787730 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:13:50.787981 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:13:50.797288 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:13:50.797404 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:13:50.802891 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:13:50.803057 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:13:50.806205 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:13:50.806301 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:13:50.808354 systemd[1]: Stopped target network.target - Network.
Mar 14 00:13:50.808640 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:13:50.808724 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:13:50.809425 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:13:50.809744 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:13:50.834988 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:50.839773 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:13:50.849345 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:13:50.851679 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:13:50.851768 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:13:50.854151 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:13:50.854239 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:13:50.857098 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:13:50.857203 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:13:50.859685 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:13:50.859770 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:13:50.862464 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:13:50.865104 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:13:50.873914 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:13:50.875518 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:13:50.875742 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:13:50.880588 systemd-networkd[1197]: eth0: DHCPv6 lease lost
Mar 14 00:13:50.888509 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:13:50.890019 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:13:50.895488 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:13:50.895680 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:13:50.903845 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:13:50.904014 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:13:50.921002 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:13:50.921127 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:13:50.940474 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:13:50.958150 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:13:50.958279 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:13:50.962497 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:13:50.962595 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:13:50.974783 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 14 00:13:50.974919 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 14 00:13:50.979986 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 14 00:13:50.980080 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:13:50.985523 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:13:51.003466 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 14 00:13:51.007055 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:13:51.013711 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 14 00:13:51.013852 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 14 00:13:51.016466 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 14 00:13:51.018466 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:13:51.025633 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 14 00:13:51.026735 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:13:51.036764 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 14 00:13:51.036893 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 14 00:13:51.041706 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:13:51.041801 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:13:51.059228 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Mar 14 00:13:51.062044 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 14 00:13:51.062165 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:13:51.065769 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:13:51.065854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:13:51.069707 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 14 00:13:51.069934 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 14 00:13:51.110270 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 14 00:13:51.110704 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 14 00:13:51.120427 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 14 00:13:51.145347 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 14 00:13:51.164051 systemd[1]: Switching root. Mar 14 00:13:51.209780 systemd-journald[252]: Journal stopped Mar 14 00:13:53.752447 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Mar 14 00:13:53.752591 kernel: SELinux: policy capability network_peer_controls=1 Mar 14 00:13:53.752642 kernel: SELinux: policy capability open_perms=1 Mar 14 00:13:53.752685 kernel: SELinux: policy capability extended_socket_class=1 Mar 14 00:13:53.752716 kernel: SELinux: policy capability always_check_network=0 Mar 14 00:13:53.752748 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 14 00:13:53.752780 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 14 00:13:53.752810 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 14 00:13:53.752840 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 14 00:13:53.752869 kernel: audit: type=1403 audit(1773447231.672:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 14 00:13:53.752902 systemd[1]: Successfully loaded SELinux policy in 61.713ms. 
Mar 14 00:13:53.752968 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.095ms. Mar 14 00:13:53.753010 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:13:53.753044 systemd[1]: Detected virtualization amazon. Mar 14 00:13:53.753080 systemd[1]: Detected architecture arm64. Mar 14 00:13:53.753112 systemd[1]: Detected first boot. Mar 14 00:13:53.753144 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:13:53.753176 zram_generator::config[1482]: No configuration found. Mar 14 00:13:53.753211 systemd[1]: Populated /etc with preset unit settings. Mar 14 00:13:53.753248 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 14 00:13:53.753280 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 14 00:13:53.753314 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 14 00:13:53.753347 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 14 00:13:53.753379 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 14 00:13:53.753411 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 14 00:13:53.753443 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 14 00:13:53.753476 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 14 00:13:53.753506 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 14 00:13:53.753543 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 14 00:13:53.753575 systemd[1]: Created slice user.slice - User and Session Slice. 
Mar 14 00:13:53.753605 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:13:53.753636 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:13:53.753666 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 14 00:13:53.753696 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 14 00:13:53.753726 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 14 00:13:53.753760 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:13:53.753791 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 14 00:13:53.753827 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:13:53.753857 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 14 00:13:53.753887 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 14 00:13:53.753918 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 14 00:13:53.757462 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 14 00:13:53.757534 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:13:53.757570 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:13:53.757609 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:13:53.757643 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:13:53.757685 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 14 00:13:53.757718 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 14 00:13:53.757749 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Mar 14 00:13:53.757779 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:13:53.757811 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:13:53.757842 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 14 00:13:53.757875 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 14 00:13:53.757905 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 14 00:13:53.757941 systemd[1]: Mounting media.mount - External Media Directory... Mar 14 00:13:53.766581 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 14 00:13:53.766625 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 14 00:13:53.766658 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 14 00:13:53.766693 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 14 00:13:53.766727 systemd[1]: Reached target machines.target - Containers. Mar 14 00:13:53.766760 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 14 00:13:53.766794 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:13:53.785169 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:13:53.785204 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 14 00:13:53.785236 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:13:53.785266 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:13:53.785300 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Mar 14 00:13:53.785330 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 14 00:13:53.785360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:13:53.785390 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 14 00:13:53.785426 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 14 00:13:53.785459 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 14 00:13:53.785491 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 14 00:13:53.785522 systemd[1]: Stopped systemd-fsck-usr.service. Mar 14 00:13:53.785552 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:13:53.785584 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:13:53.785616 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 14 00:13:53.785648 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 14 00:13:53.785679 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:13:53.785715 systemd[1]: verity-setup.service: Deactivated successfully. Mar 14 00:13:53.785746 systemd[1]: Stopped verity-setup.service. Mar 14 00:13:53.785777 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 14 00:13:53.785807 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 14 00:13:53.785840 systemd[1]: Mounted media.mount - External Media Directory. Mar 14 00:13:53.785873 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 14 00:13:53.785903 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 14 00:13:53.785936 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Mar 14 00:13:53.799463 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:13:53.799515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:13:53.804023 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:13:53.804082 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:13:53.804117 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 14 00:13:53.804149 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 14 00:13:53.804189 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:13:53.804222 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 14 00:13:53.804253 kernel: fuse: init (API version 7.39) Mar 14 00:13:53.804282 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 14 00:13:53.804313 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 14 00:13:53.804344 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:13:53.804375 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 14 00:13:53.804406 kernel: ACPI: bus type drm_connector registered Mar 14 00:13:53.804446 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 14 00:13:53.804476 kernel: loop: module loaded Mar 14 00:13:53.804506 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:13:53.804538 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 14 00:13:53.804568 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Mar 14 00:13:53.804599 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 14 00:13:53.804634 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:13:53.804664 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:13:53.804697 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:13:53.804728 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:13:53.804758 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 14 00:13:53.804788 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 14 00:13:53.804818 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:13:53.804898 systemd-journald[1560]: Collecting audit messages is disabled. Mar 14 00:13:53.804978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:13:53.805015 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 14 00:13:53.805046 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 14 00:13:53.805078 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 14 00:13:53.805115 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 14 00:13:53.805147 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 14 00:13:53.805181 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:13:53.805218 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:13:53.805251 systemd-journald[1560]: Journal started Mar 14 00:13:53.805301 systemd-journald[1560]: Runtime Journal (/run/log/journal/ec2f4f39d78c358e5e1bb6649a06d49d) is 8.0M, max 75.3M, 67.3M free. 
Mar 14 00:13:53.827887 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 14 00:13:52.952984 systemd[1]: Queued start job for default target multi-user.target. Mar 14 00:13:53.024897 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Mar 14 00:13:53.025712 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 14 00:13:53.849123 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 14 00:13:53.849214 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:13:53.841871 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 14 00:13:53.868908 kernel: loop0: detected capacity change from 0 to 114328 Mar 14 00:13:53.894136 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 14 00:13:53.917025 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 14 00:13:53.932446 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 14 00:13:53.946479 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 14 00:13:53.963449 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 14 00:13:53.966718 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:13:54.000128 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 14 00:13:54.020157 systemd-journald[1560]: Time spent on flushing to /var/log/journal/ec2f4f39d78c358e5e1bb6649a06d49d is 86.225ms for 905 entries. Mar 14 00:13:54.020157 systemd-journald[1560]: System Journal (/var/log/journal/ec2f4f39d78c358e5e1bb6649a06d49d) is 8.0M, max 195.6M, 187.6M free. Mar 14 00:13:54.129775 systemd-journald[1560]: Received client request to flush runtime journal. 
Mar 14 00:13:54.129888 kernel: loop1: detected capacity change from 0 to 114432 Mar 14 00:13:54.041758 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 14 00:13:54.043993 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 14 00:13:54.110081 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:13:54.121422 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 14 00:13:54.140242 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 14 00:13:54.172001 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 14 00:13:54.188362 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:13:54.195121 kernel: loop2: detected capacity change from 0 to 200864 Mar 14 00:13:54.203419 udevadm[1626]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 14 00:13:54.284014 systemd-tmpfiles[1631]: ACLs are not supported, ignoring. Mar 14 00:13:54.284055 systemd-tmpfiles[1631]: ACLs are not supported, ignoring. Mar 14 00:13:54.303065 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:13:54.314003 kernel: loop3: detected capacity change from 0 to 52536 Mar 14 00:13:54.361998 kernel: loop4: detected capacity change from 0 to 114328 Mar 14 00:13:54.385000 kernel: loop5: detected capacity change from 0 to 114432 Mar 14 00:13:54.398511 kernel: loop6: detected capacity change from 0 to 200864 Mar 14 00:13:54.423001 kernel: loop7: detected capacity change from 0 to 52536 Mar 14 00:13:54.441364 (sd-merge)[1636]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Mar 14 00:13:54.443583 (sd-merge)[1636]: Merged extensions into '/usr'. 
Mar 14 00:13:54.453206 systemd[1]: Reloading requested from client PID 1584 ('systemd-sysext') (unit systemd-sysext.service)... Mar 14 00:13:54.453232 systemd[1]: Reloading... Mar 14 00:13:54.646002 zram_generator::config[1662]: No configuration found. Mar 14 00:13:54.952672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:13:55.076437 systemd[1]: Reloading finished in 622 ms. Mar 14 00:13:55.124356 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 14 00:13:55.127804 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 14 00:13:55.148255 systemd[1]: Starting ensure-sysext.service... Mar 14 00:13:55.161305 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:13:55.174354 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:13:55.187426 systemd[1]: Reloading requested from client PID 1714 ('systemctl') (unit ensure-sysext.service)... Mar 14 00:13:55.187461 systemd[1]: Reloading... Mar 14 00:13:55.247372 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 14 00:13:55.250125 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 14 00:13:55.252144 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 14 00:13:55.252848 systemd-tmpfiles[1715]: ACLs are not supported, ignoring. Mar 14 00:13:55.253749 systemd-tmpfiles[1715]: ACLs are not supported, ignoring. Mar 14 00:13:55.259637 systemd-tmpfiles[1715]: Detected autofs mount point /boot during canonicalization of boot. 
Mar 14 00:13:55.261188 systemd-tmpfiles[1715]: Skipping /boot Mar 14 00:13:55.288610 systemd-tmpfiles[1715]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:13:55.288840 systemd-tmpfiles[1715]: Skipping /boot Mar 14 00:13:55.351471 systemd-udevd[1716]: Using default interface naming scheme 'v255'. Mar 14 00:13:55.389996 zram_generator::config[1743]: No configuration found. Mar 14 00:13:55.463182 ldconfig[1573]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 14 00:13:55.619652 (udev-worker)[1765]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:13:55.789060 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1759) Mar 14 00:13:55.809382 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:13:55.969308 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 14 00:13:55.970036 systemd[1]: Reloading finished in 781 ms. Mar 14 00:13:55.997454 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:13:56.004181 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 14 00:13:56.054254 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:13:56.131852 systemd[1]: Finished ensure-sysext.service. Mar 14 00:13:56.176135 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:13:56.188857 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 14 00:13:56.203275 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Mar 14 00:13:56.210289 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 14 00:13:56.213247 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:13:56.226160 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:13:56.231932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:13:56.237291 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:13:56.243518 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:13:56.250279 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:13:56.253010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:13:56.259866 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:13:56.281318 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 14 00:13:56.295363 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:13:56.304831 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:13:56.307615 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:13:56.318240 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:13:56.325283 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:13:56.358017 lvm[1916]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:13:56.388373 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:13:56.403725 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 14 00:13:56.404165 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:13:56.413817 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:13:56.414294 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:13:56.420022 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:13:56.459376 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 14 00:13:56.466827 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:13:56.467464 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:13:56.471625 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:13:56.485836 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:13:56.507274 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 14 00:13:56.511823 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:13:56.513131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:13:56.518390 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:13:56.529789 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:13:56.535600 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:13:56.547348 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:13:56.561521 augenrules[1954]: No rules Mar 14 00:13:56.551706 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Mar 14 00:13:56.561435 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:13:56.573356 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:13:56.585587 lvm[1955]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:13:56.590027 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:13:56.635099 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:13:56.641846 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:13:56.650716 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:13:56.759413 systemd-networkd[1929]: lo: Link UP Mar 14 00:13:56.760073 systemd-networkd[1929]: lo: Gained carrier Mar 14 00:13:56.761606 systemd-resolved[1930]: Positive Trust Anchors: Mar 14 00:13:56.761642 systemd-resolved[1930]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:13:56.761708 systemd-resolved[1930]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:13:56.763705 systemd-networkd[1929]: Enumeration completed Mar 14 00:13:56.764742 systemd-networkd[1929]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 14 00:13:56.764751 systemd-networkd[1929]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:13:56.765134 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:13:56.769719 systemd-networkd[1929]: eth0: Link UP Mar 14 00:13:56.770118 systemd-networkd[1929]: eth0: Gained carrier Mar 14 00:13:56.770152 systemd-networkd[1929]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:13:56.779487 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:13:56.783176 systemd-networkd[1929]: eth0: DHCPv4 address 172.31.26.39/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 14 00:13:56.792940 systemd-resolved[1930]: Defaulting to hostname 'linux'. Mar 14 00:13:56.796791 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:13:56.799520 systemd[1]: Reached target network.target - Network. Mar 14 00:13:56.801594 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:13:56.804825 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:13:56.807407 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:13:56.810209 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:13:56.813291 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:13:56.816167 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:13:56.818993 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:13:56.821776 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Mar 14 00:13:56.821829 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:13:56.823900 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:13:56.827187 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:13:56.832780 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:13:56.846466 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:13:56.849795 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:13:56.852921 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:13:56.855187 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:13:56.857371 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:13:56.857435 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:13:56.866289 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:13:56.874323 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 14 00:13:56.880310 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:13:56.893188 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:13:56.899609 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:13:56.902257 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:13:56.907263 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:13:56.914706 systemd[1]: Started ntpd.service - Network Time Service. Mar 14 00:13:56.923199 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 14 00:13:56.941767 jq[1980]: false
Mar 14 00:13:56.932438 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 14 00:13:56.939321 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 14 00:13:56.947346 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 14 00:13:56.963275 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 14 00:13:56.968302 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 14 00:13:56.969199 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 14 00:13:56.974191 systemd[1]: Starting update-engine.service - Update Engine...
Mar 14 00:13:56.979466 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 14 00:13:56.988803 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 14 00:13:56.990705 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 14 00:13:56.998531 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 14 00:13:57.024589 jq[1990]: true
Mar 14 00:13:56.998882 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 00:13:57.089990 jq[1997]: true
Mar 14 00:13:57.093608 systemd[1]: motdgen.service: Deactivated successfully.
Mar 14 00:13:57.095157 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 14 00:13:57.119062 update_engine[1989]: I20260314 00:13:57.118211 1989 main.cc:92] Flatcar Update Engine starting
Mar 14 00:13:57.160992 extend-filesystems[1981]: Found loop4
Mar 14 00:13:57.160992 extend-filesystems[1981]: Found loop5
Mar 14 00:13:57.160992 extend-filesystems[1981]: Found loop6
Mar 14 00:13:57.160992 extend-filesystems[1981]: Found loop7
Mar 14 00:13:57.160992 extend-filesystems[1981]: Found nvme0n1
Mar 14 00:13:57.160992 extend-filesystems[1981]: Found nvme0n1p1
Mar 14 00:13:57.160992 extend-filesystems[1981]: Found nvme0n1p2
Mar 14 00:13:57.160992 extend-filesystems[1981]: Found nvme0n1p3
Mar 14 00:13:57.160992 extend-filesystems[1981]: Found usr
Mar 14 00:13:57.160992 extend-filesystems[1981]: Found nvme0n1p4
Mar 14 00:13:57.160992 extend-filesystems[1981]: Found nvme0n1p6
Mar 14 00:13:57.160992 extend-filesystems[1981]: Found nvme0n1p7
Mar 14 00:13:57.160992 extend-filesystems[1981]: Found nvme0n1p9
Mar 14 00:13:57.160992 extend-filesystems[1981]: Checking size of /dev/nvme0n1p9
Mar 14 00:13:57.190400 dbus-daemon[1979]: [system] SELinux support is enabled
Mar 14 00:13:57.190714 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 14 00:13:57.247431 update_engine[1989]: I20260314 00:13:57.244254 1989 update_check_scheduler.cc:74] Next update check in 5m8s
Mar 14 00:13:57.228816 dbus-daemon[1979]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1929 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 14 00:13:57.199789 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 14 00:13:57.199839 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 14 00:13:57.201892 (ntainerd)[2011]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 14 00:13:57.212240 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 14 00:13:57.212280 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 14 00:13:57.246968 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 14 00:13:57.250728 systemd[1]: Started update-engine.service - Update Engine.
Mar 14 00:13:57.266319 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 14 00:13:57.287173 ntpd[1983]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:57:55 UTC 2026 (1): Starting
Mar 14 00:13:57.288042 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:57:55 UTC 2026 (1): Starting
Mar 14 00:13:57.288042 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 14 00:13:57.288042 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: ----------------------------------------------------
Mar 14 00:13:57.288042 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: ntp-4 is maintained by Network Time Foundation,
Mar 14 00:13:57.288042 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 14 00:13:57.288042 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: corporation. Support and training for ntp-4 are
Mar 14 00:13:57.288042 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: available at https://www.nwtime.org/support
Mar 14 00:13:57.288042 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: ----------------------------------------------------
Mar 14 00:13:57.287238 ntpd[1983]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 14 00:13:57.287260 ntpd[1983]: ----------------------------------------------------
Mar 14 00:13:57.287280 ntpd[1983]: ntp-4 is maintained by Network Time Foundation,
Mar 14 00:13:57.287300 ntpd[1983]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 14 00:13:57.295145 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: proto: precision = 0.096 usec (-23)
Mar 14 00:13:57.295145 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: basedate set to 2026-03-01
Mar 14 00:13:57.295145 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: gps base set to 2026-03-01 (week 2408)
Mar 14 00:13:57.287323 ntpd[1983]: corporation. Support and training for ntp-4 are
Mar 14 00:13:57.287343 ntpd[1983]: available at https://www.nwtime.org/support
Mar 14 00:13:57.287361 ntpd[1983]: ----------------------------------------------------
Mar 14 00:13:57.292298 ntpd[1983]: proto: precision = 0.096 usec (-23)
Mar 14 00:13:57.293339 ntpd[1983]: basedate set to 2026-03-01
Mar 14 00:13:57.293372 ntpd[1983]: gps base set to 2026-03-01 (week 2408)
Mar 14 00:13:57.299121 ntpd[1983]: Listen and drop on 0 v6wildcard [::]:123
Mar 14 00:13:57.306862 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: Listen and drop on 0 v6wildcard [::]:123
Mar 14 00:13:57.306862 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 14 00:13:57.306862 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: Listen normally on 2 lo 127.0.0.1:123
Mar 14 00:13:57.306862 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: Listen normally on 3 eth0 172.31.26.39:123
Mar 14 00:13:57.306862 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: Listen normally on 4 lo [::1]:123
Mar 14 00:13:57.306862 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: bind(21) AF_INET6 fe80::4bd:e8ff:fe55:b799%2#123 flags 0x11 failed: Cannot assign requested address
Mar 14 00:13:57.306862 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: unable to create socket on eth0 (5) for fe80::4bd:e8ff:fe55:b799%2#123
Mar 14 00:13:57.306862 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: failed to init interface for address fe80::4bd:e8ff:fe55:b799%2
Mar 14 00:13:57.306862 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: Listening on routing socket on fd #21 for interface updates
Mar 14 00:13:57.299228 ntpd[1983]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 14 00:13:57.299494 ntpd[1983]: Listen normally on 2 lo 127.0.0.1:123
Mar 14 00:13:57.299558 ntpd[1983]: Listen normally on 3 eth0 172.31.26.39:123
Mar 14 00:13:57.299624 ntpd[1983]: Listen normally on 4 lo [::1]:123
Mar 14 00:13:57.299694 ntpd[1983]: bind(21) AF_INET6 fe80::4bd:e8ff:fe55:b799%2#123 flags 0x11 failed: Cannot assign requested address
Mar 14 00:13:57.299734 ntpd[1983]: unable to create socket on eth0 (5) for fe80::4bd:e8ff:fe55:b799%2#123
Mar 14 00:13:57.299768 ntpd[1983]: failed to init interface for address fe80::4bd:e8ff:fe55:b799%2
Mar 14 00:13:57.299824 ntpd[1983]: Listening on routing socket on fd #21 for interface updates
Mar 14 00:13:57.332507 tar[2010]: linux-arm64/LICENSE
Mar 14 00:13:57.332507 tar[2010]: linux-arm64/helm
Mar 14 00:13:57.341572 extend-filesystems[1981]: Resized partition /dev/nvme0n1p9
Mar 14 00:13:57.341840 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 14 00:13:57.341840 ntpd[1983]: 14 Mar 00:13:57 ntpd[1983]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 14 00:13:57.341098 ntpd[1983]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 14 00:13:57.342075 extend-filesystems[2040]: resize2fs 1.47.1 (20-May-2024)
Mar 14 00:13:57.341145 ntpd[1983]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 14 00:13:57.380020 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Mar 14 00:13:57.381621 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 14 00:13:57.405112 coreos-metadata[1978]: Mar 14 00:13:57.405 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 14 00:13:57.420791 coreos-metadata[1978]: Mar 14 00:13:57.420 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 14 00:13:57.429231 coreos-metadata[1978]: Mar 14 00:13:57.429 INFO Fetch successful
Mar 14 00:13:57.429231 coreos-metadata[1978]: Mar 14 00:13:57.429 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 14 00:13:57.432671 coreos-metadata[1978]: Mar 14 00:13:57.432 INFO Fetch successful
Mar 14 00:13:57.432671 coreos-metadata[1978]: Mar 14 00:13:57.432 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 14 00:13:57.435940 coreos-metadata[1978]: Mar 14 00:13:57.435 INFO Fetch successful
Mar 14 00:13:57.435940 coreos-metadata[1978]: Mar 14 00:13:57.435 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 14 00:13:57.441170 coreos-metadata[1978]: Mar 14 00:13:57.441 INFO Fetch successful
Mar 14 00:13:57.441170 coreos-metadata[1978]: Mar 14 00:13:57.441 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 14 00:13:57.443992 coreos-metadata[1978]: Mar 14 00:13:57.443 INFO Fetch failed with 404: resource not found
Mar 14 00:13:57.443992 coreos-metadata[1978]: Mar 14 00:13:57.443 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 14 00:13:57.446310 coreos-metadata[1978]: Mar 14 00:13:57.446 INFO Fetch successful
Mar 14 00:13:57.446310 coreos-metadata[1978]: Mar 14 00:13:57.446 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 14 00:13:57.450971 coreos-metadata[1978]: Mar 14 00:13:57.450 INFO Fetch successful
Mar 14 00:13:57.450971 coreos-metadata[1978]: Mar 14 00:13:57.450 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 14 00:13:57.454159 coreos-metadata[1978]: Mar 14 00:13:57.454 INFO Fetch successful
Mar 14 00:13:57.454159 coreos-metadata[1978]: Mar 14 00:13:57.454 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 14 00:13:57.457978 coreos-metadata[1978]: Mar 14 00:13:57.456 INFO Fetch successful
Mar 14 00:13:57.457978 coreos-metadata[1978]: Mar 14 00:13:57.456 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 14 00:13:57.479494 coreos-metadata[1978]: Mar 14 00:13:57.458 INFO Fetch successful
Mar 14 00:13:57.501733 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 14 00:13:57.520527 bash[2049]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:13:57.523981 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1759)
Mar 14 00:13:57.535886 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 14 00:13:57.555191 systemd[1]: Starting sshkeys.service...
Mar 14 00:13:57.578429 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Mar 14 00:13:57.602398 extend-filesystems[2040]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 14 00:13:57.602398 extend-filesystems[2040]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 14 00:13:57.602398 extend-filesystems[2040]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Mar 14 00:13:57.597932 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 14 00:13:57.611081 extend-filesystems[1981]: Resized filesystem in /dev/nvme0n1p9
Mar 14 00:13:57.641677 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 14 00:13:57.645532 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 14 00:13:57.647406 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 14 00:13:57.659904 systemd-logind[1988]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 14 00:13:57.664484 systemd-logind[1988]: Watching system buttons on /dev/input/event1 (Sleep Button)
Mar 14 00:13:57.672393 systemd-logind[1988]: New seat seat0.
Mar 14 00:13:57.676100 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 14 00:13:57.708002 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 14 00:13:57.712186 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 14 00:13:57.915336 coreos-metadata[2069]: Mar 14 00:13:57.914 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 14 00:13:57.917914 coreos-metadata[2069]: Mar 14 00:13:57.917 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 14 00:13:57.922307 coreos-metadata[2069]: Mar 14 00:13:57.922 INFO Fetch successful
Mar 14 00:13:57.922307 coreos-metadata[2069]: Mar 14 00:13:57.922 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 14 00:13:57.923285 coreos-metadata[2069]: Mar 14 00:13:57.923 INFO Fetch successful
Mar 14 00:13:57.927490 dbus-daemon[1979]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 14 00:13:57.927783 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 14 00:13:57.930303 unknown[2069]: wrote ssh authorized keys file for user: core
Mar 14 00:13:57.940387 dbus-daemon[1979]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2023 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 14 00:13:57.967622 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 14 00:13:58.060099 update-ssh-keys[2133]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:13:58.061657 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 14 00:13:58.098911 systemd[1]: Finished sshkeys.service.
Mar 14 00:13:58.127369 polkitd[2134]: Started polkitd version 121
Mar 14 00:13:58.148214 systemd-networkd[1929]: eth0: Gained IPv6LL
Mar 14 00:13:58.162596 locksmithd[2024]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 14 00:13:58.175866 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 14 00:13:58.179845 systemd[1]: Reached target network-online.target - Network is Online.
Mar 14 00:13:58.196226 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 14 00:13:58.211100 polkitd[2134]: Loading rules from directory /etc/polkit-1/rules.d
Mar 14 00:13:58.211592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:13:58.211244 polkitd[2134]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 14 00:13:58.222510 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 14 00:13:58.234348 polkitd[2134]: Finished loading, compiling and executing 2 rules
Mar 14 00:13:58.249766 dbus-daemon[1979]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 14 00:13:58.258171 systemd[1]: Started polkit.service - Authorization Manager.
Mar 14 00:13:58.264073 polkitd[2134]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 14 00:13:58.374037 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 14 00:13:58.392252 systemd-hostnamed[2023]: Hostname set to (transient)
Mar 14 00:13:58.396004 containerd[2011]: time="2026-03-14T00:13:58.394669937Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 14 00:13:58.395010 systemd-resolved[1930]: System hostname changed to 'ip-172-31-26-39'.
Mar 14 00:13:58.442370 amazon-ssm-agent[2171]: Initializing new seelog logger
Mar 14 00:13:58.444114 amazon-ssm-agent[2171]: New Seelog Logger Creation Complete
Mar 14 00:13:58.444364 amazon-ssm-agent[2171]: 2026/03/14 00:13:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:13:58.444443 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:13:58.445535 amazon-ssm-agent[2171]: 2026/03/14 00:13:58 processing appconfig overrides
Mar 14 00:13:58.450051 amazon-ssm-agent[2171]: 2026/03/14 00:13:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:13:58.450051 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:13:58.450051 amazon-ssm-agent[2171]: 2026/03/14 00:13:58 processing appconfig overrides
Mar 14 00:13:58.450051 amazon-ssm-agent[2171]: 2026/03/14 00:13:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:13:58.450051 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:13:58.450051 amazon-ssm-agent[2171]: 2026/03/14 00:13:58 processing appconfig overrides
Mar 14 00:13:58.451106 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO Proxy environment variables:
Mar 14 00:13:58.458767 amazon-ssm-agent[2171]: 2026/03/14 00:13:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:13:58.458767 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:13:58.458767 amazon-ssm-agent[2171]: 2026/03/14 00:13:58 processing appconfig overrides
Mar 14 00:13:58.557079 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO http_proxy:
Mar 14 00:13:58.565375 containerd[2011]: time="2026-03-14T00:13:58.565300398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:13:58.574746 containerd[2011]: time="2026-03-14T00:13:58.574674786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:13:58.577424 containerd[2011]: time="2026-03-14T00:13:58.575432514Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 14 00:13:58.577424 containerd[2011]: time="2026-03-14T00:13:58.575487198Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 14 00:13:58.577424 containerd[2011]: time="2026-03-14T00:13:58.575827746Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 14 00:13:58.577424 containerd[2011]: time="2026-03-14T00:13:58.575863506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 14 00:13:58.577424 containerd[2011]: time="2026-03-14T00:13:58.576011442Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:13:58.577424 containerd[2011]: time="2026-03-14T00:13:58.576041658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:13:58.577424 containerd[2011]: time="2026-03-14T00:13:58.576362274Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:13:58.577424 containerd[2011]: time="2026-03-14T00:13:58.576419250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 14 00:13:58.577424 containerd[2011]: time="2026-03-14T00:13:58.576451590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:13:58.577424 containerd[2011]: time="2026-03-14T00:13:58.576476574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 14 00:13:58.577424 containerd[2011]: time="2026-03-14T00:13:58.576631794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:13:58.583971 containerd[2011]: time="2026-03-14T00:13:58.582694578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:13:58.584328 containerd[2011]: time="2026-03-14T00:13:58.584275926Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:13:58.584527 containerd[2011]: time="2026-03-14T00:13:58.584496354Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 14 00:13:58.584839 containerd[2011]: time="2026-03-14T00:13:58.584810166Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 14 00:13:58.588903 containerd[2011]: time="2026-03-14T00:13:58.587724894Z" level=info msg="metadata content store policy set" policy=shared
Mar 14 00:13:58.599151 containerd[2011]: time="2026-03-14T00:13:58.599083086Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 14 00:13:58.599270 containerd[2011]: time="2026-03-14T00:13:58.599183454Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 14 00:13:58.599270 containerd[2011]: time="2026-03-14T00:13:58.599221422Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 14 00:13:58.599270 containerd[2011]: time="2026-03-14T00:13:58.599260554Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 14 00:13:58.599436 containerd[2011]: time="2026-03-14T00:13:58.599322666Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 14 00:13:58.600129 containerd[2011]: time="2026-03-14T00:13:58.599578506Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 14 00:13:58.601481 containerd[2011]: time="2026-03-14T00:13:58.601421658Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 14 00:13:58.601755 containerd[2011]: time="2026-03-14T00:13:58.601709238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 14 00:13:58.601821 containerd[2011]: time="2026-03-14T00:13:58.601757694Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 14 00:13:58.601821 containerd[2011]: time="2026-03-14T00:13:58.601807962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 14 00:13:58.601911 containerd[2011]: time="2026-03-14T00:13:58.601855050Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 14 00:13:58.601911 containerd[2011]: time="2026-03-14T00:13:58.601892286Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 14 00:13:58.602076 containerd[2011]: time="2026-03-14T00:13:58.601923570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.602252178Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.602582274Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.602622594Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.602669862Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.602699562Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.602750490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.602782542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.602813790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.602868786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.602908302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.602940090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.603002586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.603034302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.606986 containerd[2011]: time="2026-03-14T00:13:58.603064950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.607641 containerd[2011]: time="2026-03-14T00:13:58.603099282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.607641 containerd[2011]: time="2026-03-14T00:13:58.603127242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.607641 containerd[2011]: time="2026-03-14T00:13:58.603170022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.607641 containerd[2011]: time="2026-03-14T00:13:58.603201222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.607641 containerd[2011]: time="2026-03-14T00:13:58.603243690Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 14 00:13:58.607641 containerd[2011]: time="2026-03-14T00:13:58.603293574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.607641 containerd[2011]: time="2026-03-14T00:13:58.603330834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.607641 containerd[2011]: time="2026-03-14T00:13:58.603364518Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 14 00:13:58.607641 containerd[2011]: time="2026-03-14T00:13:58.604739490Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 14 00:13:58.607641 containerd[2011]: time="2026-03-14T00:13:58.605616954Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 14 00:13:58.607641 containerd[2011]: time="2026-03-14T00:13:58.605653314Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 14 00:13:58.607641 containerd[2011]: time="2026-03-14T00:13:58.605686290Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 14 00:13:58.607641 containerd[2011]: time="2026-03-14T00:13:58.605710782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.608244 containerd[2011]: time="2026-03-14T00:13:58.605741346Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 14 00:13:58.608244 containerd[2011]: time="2026-03-14T00:13:58.605765406Z" level=info msg="NRI interface is disabled by configuration."
Mar 14 00:13:58.608244 containerd[2011]: time="2026-03-14T00:13:58.605790690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 14 00:13:58.608375 containerd[2011]: time="2026-03-14T00:13:58.607159602Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 14 00:13:58.608375 containerd[2011]: time="2026-03-14T00:13:58.607282974Z" level=info msg="Connect containerd service"
Mar 14 00:13:58.608375 containerd[2011]: time="2026-03-14T00:13:58.607343958Z" level=info msg="using legacy CRI server"
Mar 14 00:13:58.608375 containerd[2011]: time="2026-03-14T00:13:58.607362162Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 14 00:13:58.608375 containerd[2011]: time="2026-03-14T00:13:58.607510422Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 14 00:13:58.613076 containerd[2011]: time="2026-03-14T00:13:58.610493886Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:13:58.613076 containerd[2011]: time="2026-03-14T00:13:58.612867774Z" level=info msg="Start subscribing containerd event"
Mar 14 00:13:58.613076 containerd[2011]: time="2026-03-14T00:13:58.612991086Z" level=info msg="Start recovering state"
Mar 14 00:13:58.613348 containerd[2011]: time="2026-03-14T00:13:58.613120998Z" level=info msg="Start event monitor"
Mar 14 00:13:58.613348 containerd[2011]: time="2026-03-14T00:13:58.613148586Z" level=info msg="Start snapshots syncer"
Mar 14 00:13:58.613348 containerd[2011]: time="2026-03-14T00:13:58.613182354Z" level=info msg="Start cni network conf syncer for default"
Mar 14 00:13:58.613348 containerd[2011]: time="2026-03-14T00:13:58.613203018Z" level=info msg="Start streaming server"
Mar 14 00:13:58.625066 containerd[2011]: time="2026-03-14T00:13:58.617462346Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 14 00:13:58.625066 containerd[2011]: time="2026-03-14T00:13:58.617641698Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 14 00:13:58.625066 containerd[2011]: time="2026-03-14T00:13:58.623211822Z" level=info msg="containerd successfully booted in 0.238106s"
Mar 14 00:13:58.618043 systemd[1]: Started containerd.service - containerd container runtime.
Mar 14 00:13:58.657972 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO no_proxy:
Mar 14 00:13:58.756279 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO https_proxy:
Mar 14 00:13:58.855487 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO Checking if agent identity type OnPrem can be assumed
Mar 14 00:13:58.954285 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO Checking if agent identity type EC2 can be assumed
Mar 14 00:13:59.055031 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO Agent will take identity from EC2
Mar 14 00:13:59.078433 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 14 00:13:59.078433 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 14 00:13:59.078433 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 14 00:13:59.078433 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Mar 14 00:13:59.078433 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Mar 14 00:13:59.078433 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO [amazon-ssm-agent] Starting Core Agent
Mar 14 00:13:59.078433 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Mar 14 00:13:59.078433 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO [Registrar] Starting registrar module
Mar 14 00:13:59.078433 amazon-ssm-agent[2171]: 2026-03-14 00:13:58 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Mar 14 00:13:59.078433 amazon-ssm-agent[2171]: 2026-03-14 00:13:59 INFO [EC2Identity] EC2 registration was successful.
Mar 14 00:13:59.078433 amazon-ssm-agent[2171]: 2026-03-14 00:13:59 INFO [CredentialRefresher] credentialRefresher has started
Mar 14 00:13:59.078433 amazon-ssm-agent[2171]: 2026-03-14 00:13:59 INFO [CredentialRefresher] Starting credentials refresher loop
Mar 14 00:13:59.078433 amazon-ssm-agent[2171]: 2026-03-14 00:13:59 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Mar 14 00:13:59.154665 amazon-ssm-agent[2171]: 2026-03-14 00:13:59 INFO [CredentialRefresher] Next credential rotation will be in 31.7499901676 minutes
Mar 14 00:13:59.326046 tar[2010]: linux-arm64/README.md
Mar 14 00:13:59.356856 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 14 00:14:00.108989 amazon-ssm-agent[2171]: 2026-03-14 00:14:00 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Mar 14 00:14:00.218986 amazon-ssm-agent[2171]: 2026-03-14 00:14:00 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2206) started
Mar 14 00:14:00.288793 ntpd[1983]: Listen normally on 6 eth0 [fe80::4bd:e8ff:fe55:b799%2]:123
Mar 14 00:14:00.290731 ntpd[1983]: 14 Mar 00:14:00 ntpd[1983]: Listen normally on 6 eth0 [fe80::4bd:e8ff:fe55:b799%2]:123
Mar 14 00:14:00.317894 amazon-ssm-agent[2171]: 2026-03-14 00:14:00 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Mar 14 00:14:00.611330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:00.626720 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:14:01.550366 kubelet[2221]: E0314 00:14:01.550300 2221 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:14:01.555454 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:14:01.555782 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:14:01.556924 systemd[1]: kubelet.service: Consumed 1.286s CPU time.
Mar 14 00:14:03.098778 sshd_keygen[2029]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 14 00:14:03.141099 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 14 00:14:03.157229 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 14 00:14:03.163474 systemd[1]: Started sshd@0-172.31.26.39:22-68.220.241.50:41530.service - OpenSSH per-connection server daemon (68.220.241.50:41530).
Mar 14 00:14:03.179350 systemd[1]: issuegen.service: Deactivated successfully.
Mar 14 00:14:03.181068 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 14 00:14:03.195404 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 14 00:14:03.221044 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 14 00:14:03.232209 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 14 00:14:03.250525 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 14 00:14:03.253698 systemd[1]: Reached target getty.target - Login Prompts.
Mar 14 00:14:03.256791 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 14 00:14:03.263087 systemd[1]: Startup finished in 1.200s (kernel) + 15.845s (initrd) + 11.652s (userspace) = 28.698s.
Mar 14 00:14:03.734238 sshd[2238]: Accepted publickey for core from 68.220.241.50 port 41530 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:03.739041 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:03.761233 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 14 00:14:03.772609 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 14 00:14:03.778830 systemd-logind[1988]: New session 1 of user core.
Mar 14 00:14:03.804425 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 14 00:14:03.819525 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 14 00:14:03.832548 (systemd)[2254]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 14 00:14:04.061339 systemd[2254]: Queued start job for default target default.target.
Mar 14 00:14:04.073372 systemd[2254]: Created slice app.slice - User Application Slice.
Mar 14 00:14:04.073593 systemd[2254]: Reached target paths.target - Paths.
Mar 14 00:14:04.073728 systemd[2254]: Reached target timers.target - Timers.
Mar 14 00:14:04.076459 systemd[2254]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 14 00:14:04.097985 systemd[2254]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 14 00:14:04.098242 systemd[2254]: Reached target sockets.target - Sockets.
Mar 14 00:14:04.098276 systemd[2254]: Reached target basic.target - Basic System.
Mar 14 00:14:04.098362 systemd[2254]: Reached target default.target - Main User Target.
Mar 14 00:14:04.098425 systemd[2254]: Startup finished in 254ms.
Mar 14 00:14:04.099144 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 14 00:14:04.111222 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 14 00:14:04.604298 systemd-resolved[1930]: Clock change detected. Flushing caches.
Mar 14 00:14:04.820471 systemd[1]: Started sshd@1-172.31.26.39:22-68.220.241.50:56658.service - OpenSSH per-connection server daemon (68.220.241.50:56658).
Mar 14 00:14:05.311965 sshd[2265]: Accepted publickey for core from 68.220.241.50 port 56658 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:05.314715 sshd[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:05.323235 systemd-logind[1988]: New session 2 of user core.
Mar 14 00:14:05.330123 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 14 00:14:05.666605 sshd[2265]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:05.673114 systemd[1]: sshd@1-172.31.26.39:22-68.220.241.50:56658.service: Deactivated successfully.
Mar 14 00:14:05.677351 systemd[1]: session-2.scope: Deactivated successfully.
Mar 14 00:14:05.679358 systemd-logind[1988]: Session 2 logged out. Waiting for processes to exit.
Mar 14 00:14:05.683514 systemd-logind[1988]: Removed session 2.
Mar 14 00:14:05.777338 systemd[1]: Started sshd@2-172.31.26.39:22-68.220.241.50:56672.service - OpenSSH per-connection server daemon (68.220.241.50:56672).
Mar 14 00:14:06.316180 sshd[2272]: Accepted publickey for core from 68.220.241.50 port 56672 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:06.318770 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:06.327926 systemd-logind[1988]: New session 3 of user core.
Mar 14 00:14:06.334116 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 14 00:14:06.687443 sshd[2272]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:06.694819 systemd[1]: sshd@2-172.31.26.39:22-68.220.241.50:56672.service: Deactivated successfully.
Mar 14 00:14:06.699601 systemd[1]: session-3.scope: Deactivated successfully.
Mar 14 00:14:06.701798 systemd-logind[1988]: Session 3 logged out. Waiting for processes to exit.
Mar 14 00:14:06.704549 systemd-logind[1988]: Removed session 3.
Mar 14 00:14:06.782323 systemd[1]: Started sshd@3-172.31.26.39:22-68.220.241.50:56684.service - OpenSSH per-connection server daemon (68.220.241.50:56684).
Mar 14 00:14:07.271875 sshd[2279]: Accepted publickey for core from 68.220.241.50 port 56684 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:07.273873 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:07.280958 systemd-logind[1988]: New session 4 of user core.
Mar 14 00:14:07.293100 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 14 00:14:07.624277 sshd[2279]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:07.631269 systemd[1]: sshd@3-172.31.26.39:22-68.220.241.50:56684.service: Deactivated successfully.
Mar 14 00:14:07.634723 systemd[1]: session-4.scope: Deactivated successfully.
Mar 14 00:14:07.636077 systemd-logind[1988]: Session 4 logged out. Waiting for processes to exit.
Mar 14 00:14:07.637715 systemd-logind[1988]: Removed session 4.
Mar 14 00:14:07.724481 systemd[1]: Started sshd@4-172.31.26.39:22-68.220.241.50:56694.service - OpenSSH per-connection server daemon (68.220.241.50:56694).
Mar 14 00:14:08.217873 sshd[2286]: Accepted publickey for core from 68.220.241.50 port 56694 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:08.220456 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:08.231150 systemd-logind[1988]: New session 5 of user core.
Mar 14 00:14:08.239157 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 14 00:14:08.542796 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 14 00:14:08.543570 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:08.563404 sudo[2289]: pam_unix(sudo:session): session closed for user root
Mar 14 00:14:08.641147 sshd[2286]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:08.649320 systemd[1]: sshd@4-172.31.26.39:22-68.220.241.50:56694.service: Deactivated successfully.
Mar 14 00:14:08.653147 systemd[1]: session-5.scope: Deactivated successfully.
Mar 14 00:14:08.656287 systemd-logind[1988]: Session 5 logged out. Waiting for processes to exit.
Mar 14 00:14:08.659143 systemd-logind[1988]: Removed session 5.
Mar 14 00:14:08.752446 systemd[1]: Started sshd@5-172.31.26.39:22-68.220.241.50:56710.service - OpenSSH per-connection server daemon (68.220.241.50:56710).
Mar 14 00:14:09.290683 sshd[2294]: Accepted publickey for core from 68.220.241.50 port 56710 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:09.293749 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:09.302294 systemd-logind[1988]: New session 6 of user core.
Mar 14 00:14:09.314113 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 14 00:14:09.591773 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 14 00:14:09.592460 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:09.599649 sudo[2298]: pam_unix(sudo:session): session closed for user root
Mar 14 00:14:09.610135 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 14 00:14:09.610776 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:09.634379 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 14 00:14:09.649673 auditctl[2301]: No rules
Mar 14 00:14:09.650782 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 14 00:14:09.652899 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 14 00:14:09.662663 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:14:09.711733 augenrules[2319]: No rules
Mar 14 00:14:09.714554 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:14:09.717284 sudo[2297]: pam_unix(sudo:session): session closed for user root
Mar 14 00:14:09.802698 sshd[2294]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:09.809401 systemd[1]: sshd@5-172.31.26.39:22-68.220.241.50:56710.service: Deactivated successfully.
Mar 14 00:14:09.813639 systemd[1]: session-6.scope: Deactivated successfully.
Mar 14 00:14:09.815412 systemd-logind[1988]: Session 6 logged out. Waiting for processes to exit.
Mar 14 00:14:09.818806 systemd-logind[1988]: Removed session 6.
Mar 14 00:14:09.899943 systemd[1]: Started sshd@6-172.31.26.39:22-68.220.241.50:56722.service - OpenSSH per-connection server daemon (68.220.241.50:56722).
Mar 14 00:14:10.456953 sshd[2327]: Accepted publickey for core from 68.220.241.50 port 56722 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:10.459593 sshd[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:10.469200 systemd-logind[1988]: New session 7 of user core.
Mar 14 00:14:10.473118 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 14 00:14:10.758627 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 14 00:14:10.759314 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:11.384335 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 14 00:14:11.399400 (dockerd)[2347]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 14 00:14:11.941360 dockerd[2347]: time="2026-03-14T00:14:11.941254788Z" level=info msg="Starting up"
Mar 14 00:14:11.943509 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:14:11.952246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:12.189284 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport780592175-merged.mount: Deactivated successfully.
Mar 14 00:14:12.239543 systemd[1]: var-lib-docker-metacopy\x2dcheck572034144-merged.mount: Deactivated successfully.
Mar 14 00:14:12.266165 dockerd[2347]: time="2026-03-14T00:14:12.266095858Z" level=info msg="Loading containers: start."
Mar 14 00:14:12.457996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:12.472536 (kubelet)[2400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:14:12.535371 kernel: Initializing XFRM netlink socket
Mar 14 00:14:12.557555 kubelet[2400]: E0314 00:14:12.557465 2400 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:14:12.565743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:14:12.566133 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:14:12.596075 (udev-worker)[2373]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:14:12.680662 systemd-networkd[1929]: docker0: Link UP
Mar 14 00:14:12.703352 dockerd[2347]: time="2026-03-14T00:14:12.703272420Z" level=info msg="Loading containers: done."
Mar 14 00:14:12.730628 dockerd[2347]: time="2026-03-14T00:14:12.730539408Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:14:12.730919 dockerd[2347]: time="2026-03-14T00:14:12.730697700Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:14:12.731008 dockerd[2347]: time="2026-03-14T00:14:12.730950864Z" level=info msg="Daemon has completed initialization"
Mar 14 00:14:12.793902 dockerd[2347]: time="2026-03-14T00:14:12.792455460Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:14:12.794738 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:14:13.177605 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1691189153-merged.mount: Deactivated successfully.
Mar 14 00:14:13.786793 containerd[2011]: time="2026-03-14T00:14:13.786369457Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 14 00:14:14.430731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount353029827.mount: Deactivated successfully.
Mar 14 00:14:16.207059 containerd[2011]: time="2026-03-14T00:14:16.206974165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:16.209523 containerd[2011]: time="2026-03-14T00:14:16.209212285Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=24583252"
Mar 14 00:14:16.213221 containerd[2011]: time="2026-03-14T00:14:16.211856773Z" level=info msg="ImageCreate event name:\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:16.218290 containerd[2011]: time="2026-03-14T00:14:16.218222377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:16.220794 containerd[2011]: time="2026-03-14T00:14:16.220725097Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"24579851\" in 2.434289772s"
Mar 14 00:14:16.220946 containerd[2011]: time="2026-03-14T00:14:16.220792477Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\""
Mar 14 00:14:16.221816 containerd[2011]: time="2026-03-14T00:14:16.221754745Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 14 00:14:17.954288 containerd[2011]: time="2026-03-14T00:14:17.954205974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:17.958641 containerd[2011]: time="2026-03-14T00:14:17.958563342Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=19139641"
Mar 14 00:14:17.967895 containerd[2011]: time="2026-03-14T00:14:17.967799790Z" level=info msg="ImageCreate event name:\"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:17.975884 containerd[2011]: time="2026-03-14T00:14:17.975767346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:17.978169 containerd[2011]: time="2026-03-14T00:14:17.978109146Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"20724045\" in 1.756285977s"
Mar 14 00:14:17.978453 containerd[2011]: time="2026-03-14T00:14:17.978310782Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\""
Mar 14 00:14:17.980643 containerd[2011]: time="2026-03-14T00:14:17.980544966Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 14 00:14:19.239697 containerd[2011]: time="2026-03-14T00:14:19.239613088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:19.241963 containerd[2011]: time="2026-03-14T00:14:19.241898164Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=14195544"
Mar 14 00:14:19.242742 containerd[2011]: time="2026-03-14T00:14:19.242685700Z" level=info msg="ImageCreate event name:\"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:19.250058 containerd[2011]: time="2026-03-14T00:14:19.249966940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:19.252899 containerd[2011]: time="2026-03-14T00:14:19.252371596Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"15779966\" in 1.271718186s"
Mar 14 00:14:19.252899 containerd[2011]: time="2026-03-14T00:14:19.252431644Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\""
Mar 14 00:14:19.253118 containerd[2011]: time="2026-03-14T00:14:19.253061572Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 14 00:14:21.646446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248698966.mount: Deactivated successfully.
Mar 14 00:14:22.072002 containerd[2011]: time="2026-03-14T00:14:22.071922102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:22.074552 containerd[2011]: time="2026-03-14T00:14:22.074486562Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=22697088"
Mar 14 00:14:22.076101 containerd[2011]: time="2026-03-14T00:14:22.076034238Z" level=info msg="ImageCreate event name:\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:22.080906 containerd[2011]: time="2026-03-14T00:14:22.080296470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:22.081853 containerd[2011]: time="2026-03-14T00:14:22.081776178Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"22696107\" in 2.828669102s"
Mar 14 00:14:22.081937 containerd[2011]: time="2026-03-14T00:14:22.081860514Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\""
Mar 14 00:14:22.083137 containerd[2011]: time="2026-03-14T00:14:22.083088522Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 14 00:14:22.623603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 14 00:14:22.633407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:22.682261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount274666041.mount: Deactivated successfully.
Mar 14 00:14:23.056098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:23.066786 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:14:23.186793 kubelet[2597]: E0314 00:14:23.186577 2597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:14:23.196251 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:14:23.196585 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:14:24.241932 containerd[2011]: time="2026-03-14T00:14:24.240044793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:24.244492 containerd[2011]: time="2026-03-14T00:14:24.244438209Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Mar 14 00:14:24.247569 containerd[2011]: time="2026-03-14T00:14:24.247289181Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:24.256899 containerd[2011]: time="2026-03-14T00:14:24.256807305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:24.259537 containerd[2011]: time="2026-03-14T00:14:24.259449633Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.176139099s"
Mar 14 00:14:24.260407 containerd[2011]: time="2026-03-14T00:14:24.259664193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Mar 14 00:14:24.261817 containerd[2011]: time="2026-03-14T00:14:24.261727653Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 14 00:14:24.781027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261086670.mount: Deactivated successfully.
Mar 14 00:14:24.796921 containerd[2011]: time="2026-03-14T00:14:24.796211112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:24.798682 containerd[2011]: time="2026-03-14T00:14:24.798256596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Mar 14 00:14:24.800982 containerd[2011]: time="2026-03-14T00:14:24.800914680Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:24.807010 containerd[2011]: time="2026-03-14T00:14:24.806958492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:24.809485 containerd[2011]: time="2026-03-14T00:14:24.808686636Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 546.895995ms"
Mar 14 00:14:24.809485 containerd[2011]: time="2026-03-14T00:14:24.808760724Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Mar 14 00:14:24.809883 containerd[2011]: time="2026-03-14T00:14:24.809598444Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 14 00:14:25.425406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1292953078.mount: Deactivated successfully.
Mar 14 00:14:26.675860 containerd[2011]: time="2026-03-14T00:14:26.675763945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:26.678770 containerd[2011]: time="2026-03-14T00:14:26.678695173Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21125515"
Mar 14 00:14:26.680780 containerd[2011]: time="2026-03-14T00:14:26.680702365Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:26.687231 containerd[2011]: time="2026-03-14T00:14:26.687152113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:26.689679 containerd[2011]: time="2026-03-14T00:14:26.689621965Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.879962117s"
Mar 14 00:14:26.690007 containerd[2011]: time="2026-03-14T00:14:26.689859193Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
Mar 14 00:14:28.744990 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 14 00:14:33.257260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 14 00:14:33.268387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:33.620217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:33.631700 (kubelet)[2743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:33.709482 kubelet[2743]: E0314 00:14:33.709389 2743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:33.714353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:33.715246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:35.662731 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:35.675310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:35.744271 systemd[1]: Reloading requested from client PID 2757 ('systemctl') (unit session-7.scope)... Mar 14 00:14:35.744477 systemd[1]: Reloading... Mar 14 00:14:35.944877 zram_generator::config[2797]: No configuration found. Mar 14 00:14:36.224955 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:14:36.399479 systemd[1]: Reloading finished in 654 ms. Mar 14 00:14:36.486235 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 14 00:14:36.486668 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 14 00:14:36.487387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:36.494454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:36.838151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:14:36.841648 (kubelet)[2859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:14:36.915864 kubelet[2859]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:14:36.915864 kubelet[2859]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:14:36.915864 kubelet[2859]: I0314 00:14:36.913802 2859 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:14:38.032516 kubelet[2859]: I0314 00:14:38.032447 2859 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 14 00:14:38.032516 kubelet[2859]: I0314 00:14:38.032515 2859 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:14:38.033231 kubelet[2859]: I0314 00:14:38.032567 2859 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:14:38.033231 kubelet[2859]: I0314 00:14:38.032581 2859 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:14:38.033231 kubelet[2859]: I0314 00:14:38.033008 2859 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:14:38.042688 kubelet[2859]: E0314 00:14:38.042634 2859 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.26.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:14:38.047883 kubelet[2859]: I0314 00:14:38.046687 2859 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:14:38.054248 kubelet[2859]: E0314 00:14:38.054200 2859 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:14:38.054577 kubelet[2859]: I0314 00:14:38.054522 2859 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 14 00:14:38.059498 kubelet[2859]: I0314 00:14:38.059462 2859 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 14 00:14:38.060257 kubelet[2859]: I0314 00:14:38.060212 2859 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:14:38.060615 kubelet[2859]: I0314 00:14:38.060363 2859 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-39","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:14:38.060800 kubelet[2859]: I0314 00:14:38.060780 2859 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 
00:14:38.060968 kubelet[2859]: I0314 00:14:38.060948 2859 container_manager_linux.go:306] "Creating device plugin manager" Mar 14 00:14:38.061217 kubelet[2859]: I0314 00:14:38.061195 2859 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 14 00:14:38.063386 kubelet[2859]: I0314 00:14:38.063359 2859 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:14:38.065810 kubelet[2859]: I0314 00:14:38.065780 2859 kubelet.go:475] "Attempting to sync node with API server" Mar 14 00:14:38.066005 kubelet[2859]: I0314 00:14:38.065984 2859 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:14:38.066146 kubelet[2859]: I0314 00:14:38.066127 2859 kubelet.go:387] "Adding apiserver pod source" Mar 14 00:14:38.066258 kubelet[2859]: I0314 00:14:38.066238 2859 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:14:38.068543 kubelet[2859]: E0314 00:14:38.068485 2859 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-39&limit=500&resourceVersion=0\": dial tcp 172.31.26.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:14:38.069921 kubelet[2859]: I0314 00:14:38.069229 2859 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:14:38.070443 kubelet[2859]: I0314 00:14:38.070416 2859 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:14:38.070596 kubelet[2859]: I0314 00:14:38.070576 2859 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:14:38.070760 kubelet[2859]: W0314 
00:14:38.070740 2859 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 14 00:14:38.075675 kubelet[2859]: I0314 00:14:38.075643 2859 server.go:1262] "Started kubelet" Mar 14 00:14:38.076583 kubelet[2859]: E0314 00:14:38.076255 2859 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:14:38.079283 kubelet[2859]: I0314 00:14:38.078790 2859 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:14:38.080415 kubelet[2859]: I0314 00:14:38.080367 2859 server.go:310] "Adding debug handlers to kubelet server" Mar 14 00:14:38.083017 kubelet[2859]: I0314 00:14:38.082937 2859 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:14:38.083254 kubelet[2859]: I0314 00:14:38.083226 2859 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:14:38.084555 kubelet[2859]: I0314 00:14:38.083820 2859 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:14:38.086263 kubelet[2859]: E0314 00:14:38.084092 2859 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.39:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-39.189c8ce73d944ee6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-39,UID:ip-172-31-26-39,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-39,},FirstTimestamp:2026-03-14 00:14:38.075596518 +0000 UTC m=+1.225818259,LastTimestamp:2026-03-14 00:14:38.075596518 +0000 UTC m=+1.225818259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-39,}" Mar 14 00:14:38.089709 kubelet[2859]: I0314 00:14:38.089570 2859 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:14:38.092579 kubelet[2859]: I0314 00:14:38.090196 2859 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:14:38.098917 kubelet[2859]: E0314 00:14:38.098879 2859 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-26-39\" not found" Mar 14 00:14:38.100281 kubelet[2859]: I0314 00:14:38.100246 2859 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 14 00:14:38.101045 kubelet[2859]: I0314 00:14:38.101011 2859 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:14:38.102173 kubelet[2859]: I0314 00:14:38.102142 2859 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:14:38.103853 kubelet[2859]: E0314 00:14:38.103345 2859 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:14:38.103853 kubelet[2859]: E0314 00:14:38.103754 2859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-39?timeout=10s\": dial tcp 172.31.26.39:6443: connect: connection refused" interval="200ms" Mar 14 
00:14:38.107507 kubelet[2859]: I0314 00:14:38.107462 2859 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:14:38.108764 kubelet[2859]: E0314 00:14:38.108722 2859 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:14:38.109766 kubelet[2859]: I0314 00:14:38.109056 2859 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:14:38.112631 kubelet[2859]: I0314 00:14:38.112590 2859 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:14:38.114544 kubelet[2859]: I0314 00:14:38.114303 2859 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 14 00:14:38.150084 kubelet[2859]: I0314 00:14:38.150048 2859 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:14:38.150608 kubelet[2859]: I0314 00:14:38.150255 2859 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:14:38.150608 kubelet[2859]: I0314 00:14:38.150292 2859 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:14:38.153486 kubelet[2859]: I0314 00:14:38.153088 2859 policy_none.go:49] "None policy: Start" Mar 14 00:14:38.153486 kubelet[2859]: I0314 00:14:38.153126 2859 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:14:38.153486 kubelet[2859]: I0314 00:14:38.153149 2859 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:14:38.156447 kubelet[2859]: I0314 00:14:38.156412 2859 policy_none.go:47] "Start" Mar 14 00:14:38.168653 kubelet[2859]: I0314 00:14:38.168598 2859 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 14 00:14:38.168653 kubelet[2859]: I0314 00:14:38.168650 2859 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 14 00:14:38.169183 kubelet[2859]: I0314 00:14:38.168690 2859 kubelet.go:2428] "Starting kubelet main sync loop" Mar 14 00:14:38.169183 kubelet[2859]: E0314 00:14:38.168762 2859 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:14:38.172258 kubelet[2859]: E0314 00:14:38.170871 2859 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:14:38.171886 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 14 00:14:38.190729 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 14 00:14:38.200611 kubelet[2859]: E0314 00:14:38.200554 2859 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-26-39\" not found" Mar 14 00:14:38.206273 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 14 00:14:38.208616 kubelet[2859]: E0314 00:14:38.208579 2859 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:14:38.211866 kubelet[2859]: I0314 00:14:38.211733 2859 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:14:38.211866 kubelet[2859]: I0314 00:14:38.211769 2859 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:14:38.212764 kubelet[2859]: I0314 00:14:38.212328 2859 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:14:38.214286 kubelet[2859]: E0314 00:14:38.213816 2859 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:14:38.214286 kubelet[2859]: E0314 00:14:38.213922 2859 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-39\" not found" Mar 14 00:14:38.291267 systemd[1]: Created slice kubepods-burstable-podb5969e81ecd379fc5e46f5e88c14adbe.slice - libcontainer container kubepods-burstable-podb5969e81ecd379fc5e46f5e88c14adbe.slice. 
Mar 14 00:14:38.304779 kubelet[2859]: I0314 00:14:38.303857 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5969e81ecd379fc5e46f5e88c14adbe-ca-certs\") pod \"kube-apiserver-ip-172-31-26-39\" (UID: \"b5969e81ecd379fc5e46f5e88c14adbe\") " pod="kube-system/kube-apiserver-ip-172-31-26-39" Mar 14 00:14:38.304779 kubelet[2859]: I0314 00:14:38.303912 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5969e81ecd379fc5e46f5e88c14adbe-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-39\" (UID: \"b5969e81ecd379fc5e46f5e88c14adbe\") " pod="kube-system/kube-apiserver-ip-172-31-26-39" Mar 14 00:14:38.304779 kubelet[2859]: I0314 00:14:38.303950 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f283a0aeb639f8ba5c7911f2cda0765a-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-39\" (UID: \"f283a0aeb639f8ba5c7911f2cda0765a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-39" Mar 14 00:14:38.304779 kubelet[2859]: I0314 00:14:38.304005 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f283a0aeb639f8ba5c7911f2cda0765a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-39\" (UID: \"f283a0aeb639f8ba5c7911f2cda0765a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-39" Mar 14 00:14:38.304779 kubelet[2859]: I0314 00:14:38.304047 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f283a0aeb639f8ba5c7911f2cda0765a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-39\" (UID: \"f283a0aeb639f8ba5c7911f2cda0765a\") " 
pod="kube-system/kube-controller-manager-ip-172-31-26-39" Mar 14 00:14:38.305165 kubelet[2859]: I0314 00:14:38.304080 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f283a0aeb639f8ba5c7911f2cda0765a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-39\" (UID: \"f283a0aeb639f8ba5c7911f2cda0765a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-39" Mar 14 00:14:38.305165 kubelet[2859]: I0314 00:14:38.304116 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f283a0aeb639f8ba5c7911f2cda0765a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-39\" (UID: \"f283a0aeb639f8ba5c7911f2cda0765a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-39" Mar 14 00:14:38.305165 kubelet[2859]: I0314 00:14:38.304157 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5969e81ecd379fc5e46f5e88c14adbe-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-39\" (UID: \"b5969e81ecd379fc5e46f5e88c14adbe\") " pod="kube-system/kube-apiserver-ip-172-31-26-39" Mar 14 00:14:38.305165 kubelet[2859]: I0314 00:14:38.304193 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f06136cc18dd4034320bab8809e947b5-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-39\" (UID: \"f06136cc18dd4034320bab8809e947b5\") " pod="kube-system/kube-scheduler-ip-172-31-26-39" Mar 14 00:14:38.305165 kubelet[2859]: E0314 00:14:38.304721 2859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-39?timeout=10s\": dial 
tcp 172.31.26.39:6443: connect: connection refused" interval="400ms" Mar 14 00:14:38.308951 kubelet[2859]: E0314 00:14:38.308889 2859 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-39\" not found" node="ip-172-31-26-39" Mar 14 00:14:38.316679 kubelet[2859]: I0314 00:14:38.315850 2859 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-39" Mar 14 00:14:38.316679 kubelet[2859]: E0314 00:14:38.316501 2859 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.39:6443/api/v1/nodes\": dial tcp 172.31.26.39:6443: connect: connection refused" node="ip-172-31-26-39" Mar 14 00:14:38.317537 systemd[1]: Created slice kubepods-burstable-podf283a0aeb639f8ba5c7911f2cda0765a.slice - libcontainer container kubepods-burstable-podf283a0aeb639f8ba5c7911f2cda0765a.slice. Mar 14 00:14:38.323083 kubelet[2859]: E0314 00:14:38.323049 2859 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-39\" not found" node="ip-172-31-26-39" Mar 14 00:14:38.335415 systemd[1]: Created slice kubepods-burstable-podf06136cc18dd4034320bab8809e947b5.slice - libcontainer container kubepods-burstable-podf06136cc18dd4034320bab8809e947b5.slice. 
Mar 14 00:14:38.339960 kubelet[2859]: E0314 00:14:38.339906 2859 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-39\" not found" node="ip-172-31-26-39" Mar 14 00:14:38.519045 kubelet[2859]: I0314 00:14:38.518990 2859 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-39" Mar 14 00:14:38.519500 kubelet[2859]: E0314 00:14:38.519441 2859 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.39:6443/api/v1/nodes\": dial tcp 172.31.26.39:6443: connect: connection refused" node="ip-172-31-26-39" Mar 14 00:14:38.613763 containerd[2011]: time="2026-03-14T00:14:38.613596792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-39,Uid:b5969e81ecd379fc5e46f5e88c14adbe,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:38.627480 containerd[2011]: time="2026-03-14T00:14:38.627387073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-39,Uid:f283a0aeb639f8ba5c7911f2cda0765a,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:38.643177 containerd[2011]: time="2026-03-14T00:14:38.643109077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-39,Uid:f06136cc18dd4034320bab8809e947b5,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:38.707625 kubelet[2859]: E0314 00:14:38.706052 2859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-39?timeout=10s\": dial tcp 172.31.26.39:6443: connect: connection refused" interval="800ms" Mar 14 00:14:38.923008 kubelet[2859]: I0314 00:14:38.922315 2859 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-39" Mar 14 00:14:38.923008 kubelet[2859]: E0314 00:14:38.922763 2859 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://172.31.26.39:6443/api/v1/nodes\": dial tcp 172.31.26.39:6443: connect: connection refused" node="ip-172-31-26-39" Mar 14 00:14:39.084230 kubelet[2859]: E0314 00:14:39.083923 2859 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:14:39.112593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2023123704.mount: Deactivated successfully. Mar 14 00:14:39.117817 containerd[2011]: time="2026-03-14T00:14:39.117730835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:39.120463 containerd[2011]: time="2026-03-14T00:14:39.120395243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 14 00:14:39.124030 containerd[2011]: time="2026-03-14T00:14:39.123960527Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:39.128311 containerd[2011]: time="2026-03-14T00:14:39.128231375Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:14:39.128952 containerd[2011]: time="2026-03-14T00:14:39.128915195Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:39.130956 containerd[2011]: time="2026-03-14T00:14:39.130724279Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:39.131862 containerd[2011]: time="2026-03-14T00:14:39.131748491Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:14:39.135352 containerd[2011]: time="2026-03-14T00:14:39.135276107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:39.141870 containerd[2011]: time="2026-03-14T00:14:39.140654159Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 497.428634ms" Mar 14 00:14:39.145565 containerd[2011]: time="2026-03-14T00:14:39.145489751Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 517.93973ms" Mar 14 00:14:39.146040 containerd[2011]: time="2026-03-14T00:14:39.146000135Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 532.285455ms" Mar 14 00:14:39.334400 containerd[2011]: time="2026-03-14T00:14:39.333885060Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:39.334400 containerd[2011]: time="2026-03-14T00:14:39.334014000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:39.334400 containerd[2011]: time="2026-03-14T00:14:39.334074936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:39.334400 containerd[2011]: time="2026-03-14T00:14:39.334271484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:39.338461 containerd[2011]: time="2026-03-14T00:14:39.338290008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:39.338461 containerd[2011]: time="2026-03-14T00:14:39.338397852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:39.339585 containerd[2011]: time="2026-03-14T00:14:39.339485640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:39.340253 containerd[2011]: time="2026-03-14T00:14:39.340149672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:39.346603 containerd[2011]: time="2026-03-14T00:14:39.345616800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:39.346603 containerd[2011]: time="2026-03-14T00:14:39.345710028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:39.346603 containerd[2011]: time="2026-03-14T00:14:39.345736152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:39.346603 containerd[2011]: time="2026-03-14T00:14:39.345927288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:39.397172 systemd[1]: Started cri-containerd-30d514da4094fbb7be2717eda97d6003ddd8db84d4edb428635896628d619d29.scope - libcontainer container 30d514da4094fbb7be2717eda97d6003ddd8db84d4edb428635896628d619d29. Mar 14 00:14:39.401134 systemd[1]: Started cri-containerd-3698f47381b6873df810b8895352917d0a04b4fef6bcbc4d6c5d21683e7ffdbe.scope - libcontainer container 3698f47381b6873df810b8895352917d0a04b4fef6bcbc4d6c5d21683e7ffdbe. Mar 14 00:14:39.421418 systemd[1]: Started cri-containerd-f885cdd5a96bc151846713c1e949adaae22e3830b4738a26fa1beb3613642468.scope - libcontainer container f885cdd5a96bc151846713c1e949adaae22e3830b4738a26fa1beb3613642468. 
Mar 14 00:14:39.509379 kubelet[2859]: E0314 00:14:39.509110 2859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-39?timeout=10s\": dial tcp 172.31.26.39:6443: connect: connection refused" interval="1.6s"
Mar 14 00:14:39.522372 containerd[2011]: time="2026-03-14T00:14:39.522150013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-39,Uid:f06136cc18dd4034320bab8809e947b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"30d514da4094fbb7be2717eda97d6003ddd8db84d4edb428635896628d619d29\""
Mar 14 00:14:39.539851 containerd[2011]: time="2026-03-14T00:14:39.539200213Z" level=info msg="CreateContainer within sandbox \"30d514da4094fbb7be2717eda97d6003ddd8db84d4edb428635896628d619d29\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 14 00:14:39.545968 containerd[2011]: time="2026-03-14T00:14:39.545910901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-39,Uid:f283a0aeb639f8ba5c7911f2cda0765a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3698f47381b6873df810b8895352917d0a04b4fef6bcbc4d6c5d21683e7ffdbe\""
Mar 14 00:14:39.554551 containerd[2011]: time="2026-03-14T00:14:39.554359105Z" level=info msg="CreateContainer within sandbox \"3698f47381b6873df810b8895352917d0a04b4fef6bcbc4d6c5d21683e7ffdbe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 14 00:14:39.563251 containerd[2011]: time="2026-03-14T00:14:39.563192077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-39,Uid:b5969e81ecd379fc5e46f5e88c14adbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"f885cdd5a96bc151846713c1e949adaae22e3830b4738a26fa1beb3613642468\""
Mar 14 00:14:39.572667 containerd[2011]: time="2026-03-14T00:14:39.572013097Z" level=info msg="CreateContainer within sandbox \"f885cdd5a96bc151846713c1e949adaae22e3830b4738a26fa1beb3613642468\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 14 00:14:39.575972 containerd[2011]: time="2026-03-14T00:14:39.575891005Z" level=info msg="CreateContainer within sandbox \"30d514da4094fbb7be2717eda97d6003ddd8db84d4edb428635896628d619d29\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e35d4e6564137ae7bd2dfc51b1c53e9c24c8d64cf7a2a994b278746671b25b9\""
Mar 14 00:14:39.577438 containerd[2011]: time="2026-03-14T00:14:39.577369729Z" level=info msg="StartContainer for \"1e35d4e6564137ae7bd2dfc51b1c53e9c24c8d64cf7a2a994b278746671b25b9\""
Mar 14 00:14:39.581684 containerd[2011]: time="2026-03-14T00:14:39.580576513Z" level=info msg="CreateContainer within sandbox \"3698f47381b6873df810b8895352917d0a04b4fef6bcbc4d6c5d21683e7ffdbe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"34c1e0d8c9f5dea538cd329847d41b302487e9371999b8d1b04fa9956b2dadcd\""
Mar 14 00:14:39.582798 containerd[2011]: time="2026-03-14T00:14:39.582740989Z" level=info msg="StartContainer for \"34c1e0d8c9f5dea538cd329847d41b302487e9371999b8d1b04fa9956b2dadcd\""
Mar 14 00:14:39.597872 kubelet[2859]: E0314 00:14:39.597322 2859 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:14:39.600372 containerd[2011]: time="2026-03-14T00:14:39.600291673Z" level=info msg="CreateContainer within sandbox \"f885cdd5a96bc151846713c1e949adaae22e3830b4738a26fa1beb3613642468\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"747cb1dc9da1d0c8faf8b2cbf970e2933862840c2c76853dd1f7f3e8ebd83686\""
Mar 14 00:14:39.601638 containerd[2011]: time="2026-03-14T00:14:39.601595329Z" level=info msg="StartContainer for \"747cb1dc9da1d0c8faf8b2cbf970e2933862840c2c76853dd1f7f3e8ebd83686\""
Mar 14 00:14:39.615623 kubelet[2859]: E0314 00:14:39.615548 2859 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-39&limit=500&resourceVersion=0\": dial tcp 172.31.26.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:14:39.644175 systemd[1]: Started cri-containerd-1e35d4e6564137ae7bd2dfc51b1c53e9c24c8d64cf7a2a994b278746671b25b9.scope - libcontainer container 1e35d4e6564137ae7bd2dfc51b1c53e9c24c8d64cf7a2a994b278746671b25b9.
Mar 14 00:14:39.690219 systemd[1]: Started cri-containerd-34c1e0d8c9f5dea538cd329847d41b302487e9371999b8d1b04fa9956b2dadcd.scope - libcontainer container 34c1e0d8c9f5dea538cd329847d41b302487e9371999b8d1b04fa9956b2dadcd.
Mar 14 00:14:39.693101 systemd[1]: Started cri-containerd-747cb1dc9da1d0c8faf8b2cbf970e2933862840c2c76853dd1f7f3e8ebd83686.scope - libcontainer container 747cb1dc9da1d0c8faf8b2cbf970e2933862840c2c76853dd1f7f3e8ebd83686.
Mar 14 00:14:39.727344 kubelet[2859]: I0314 00:14:39.727240 2859 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-39"
Mar 14 00:14:39.730100 kubelet[2859]: E0314 00:14:39.730036 2859 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.39:6443/api/v1/nodes\": dial tcp 172.31.26.39:6443: connect: connection refused" node="ip-172-31-26-39"
Mar 14 00:14:39.732610 kubelet[2859]: E0314 00:14:39.732535 2859 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 14 00:14:39.785104 containerd[2011]: time="2026-03-14T00:14:39.784000214Z" level=info msg="StartContainer for \"1e35d4e6564137ae7bd2dfc51b1c53e9c24c8d64cf7a2a994b278746671b25b9\" returns successfully"
Mar 14 00:14:39.825258 containerd[2011]: time="2026-03-14T00:14:39.824785766Z" level=info msg="StartContainer for \"747cb1dc9da1d0c8faf8b2cbf970e2933862840c2c76853dd1f7f3e8ebd83686\" returns successfully"
Mar 14 00:14:39.833979 containerd[2011]: time="2026-03-14T00:14:39.833876234Z" level=info msg="StartContainer for \"34c1e0d8c9f5dea538cd329847d41b302487e9371999b8d1b04fa9956b2dadcd\" returns successfully"
Mar 14 00:14:40.198867 kubelet[2859]: E0314 00:14:40.196136 2859 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-39\" not found" node="ip-172-31-26-39"
Mar 14 00:14:40.205854 kubelet[2859]: E0314 00:14:40.205372 2859 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-39\" not found" node="ip-172-31-26-39"
Mar 14 00:14:40.217104 kubelet[2859]: E0314 00:14:40.217050 2859 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-39\" not found" node="ip-172-31-26-39"
Mar 14 00:14:41.215317 kubelet[2859]: E0314 00:14:41.215264 2859 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-39\" not found" node="ip-172-31-26-39"
Mar 14 00:14:41.216207 kubelet[2859]: E0314 00:14:41.216166 2859 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-39\" not found" node="ip-172-31-26-39"
Mar 14 00:14:41.332816 kubelet[2859]: I0314 00:14:41.332764 2859 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-39"
Mar 14 00:14:42.884876 update_engine[1989]: I20260314 00:14:42.882891 1989 update_attempter.cc:509] Updating boot flags...
Mar 14 00:14:43.034985 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3158)
Mar 14 00:14:43.461106 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3159)
Mar 14 00:14:44.214122 kubelet[2859]: I0314 00:14:44.214068 2859 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-39"
Mar 14 00:14:44.217248 kubelet[2859]: E0314 00:14:44.216914 2859 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-26-39\": node \"ip-172-31-26-39\" not found"
Mar 14 00:14:44.304774 kubelet[2859]: I0314 00:14:44.303929 2859 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-39"
Mar 14 00:14:44.314624 kubelet[2859]: E0314 00:14:44.314580 2859 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-39\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-39"
Mar 14 00:14:44.314878 kubelet[2859]: I0314 00:14:44.314855 2859 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-39"
Mar 14 00:14:44.321858 kubelet[2859]: E0314 00:14:44.321295 2859 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-39\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-39"
Mar 14 00:14:44.322070 kubelet[2859]: I0314 00:14:44.322035 2859 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-39"
Mar 14 00:14:44.328029 kubelet[2859]: E0314 00:14:44.327985 2859 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-39\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-26-39"
Mar 14 00:14:45.072516 kubelet[2859]: I0314 00:14:45.072451 2859 apiserver.go:52] "Watching apiserver"
Mar 14 00:14:45.103185 kubelet[2859]: I0314 00:14:45.103099 2859 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 14 00:14:45.112147 kubelet[2859]: I0314 00:14:45.111676 2859 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-39"
Mar 14 00:14:46.207536 kubelet[2859]: I0314 00:14:46.207477 2859 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-39"
Mar 14 00:14:46.339255 systemd[1]: Reloading requested from client PID 3329 ('systemctl') (unit session-7.scope)...
Mar 14 00:14:46.339288 systemd[1]: Reloading...
Mar 14 00:14:46.463770 kubelet[2859]: I0314 00:14:46.462807 2859 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-39"
Mar 14 00:14:46.527031 zram_generator::config[3372]: No configuration found.
Mar 14 00:14:46.830801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:14:47.042400 systemd[1]: Reloading finished in 702 ms.
Mar 14 00:14:47.142738 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:47.159542 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:14:47.160157 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:47.160264 systemd[1]: kubelet.service: Consumed 2.040s CPU time, 123.0M memory peak, 0B memory swap peak.
Mar 14 00:14:47.172210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:47.542193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:47.559348 (kubelet)[3430]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:14:47.667712 kubelet[3430]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:14:47.667712 kubelet[3430]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:14:47.668272 kubelet[3430]: I0314 00:14:47.667706 3430 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:14:47.695025 kubelet[3430]: I0314 00:14:47.694965 3430 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 14 00:14:47.695025 kubelet[3430]: I0314 00:14:47.695013 3430 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:14:47.695243 kubelet[3430]: I0314 00:14:47.695065 3430 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:14:47.695243 kubelet[3430]: I0314 00:14:47.695081 3430 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:14:47.696784 kubelet[3430]: I0314 00:14:47.695460 3430 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:14:47.697949 kubelet[3430]: I0314 00:14:47.697892 3430 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:14:47.702521 kubelet[3430]: I0314 00:14:47.702448 3430 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:14:47.709371 kubelet[3430]: E0314 00:14:47.708144 3430 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:14:47.709371 kubelet[3430]: I0314 00:14:47.708242 3430 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:14:47.722239 kubelet[3430]: I0314 00:14:47.720965 3430 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:14:47.722239 kubelet[3430]: I0314 00:14:47.721345 3430 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:14:47.722239 kubelet[3430]: I0314 00:14:47.721383 3430 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-39","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:14:47.722239 kubelet[3430]: I0314 00:14:47.721697 3430 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:14:47.722879 kubelet[3430]: I0314 00:14:47.721716 3430 container_manager_linux.go:306] "Creating device plugin manager"
Mar 14 00:14:47.722879 kubelet[3430]: I0314 00:14:47.721777 3430 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:14:47.722879 kubelet[3430]: I0314 00:14:47.722149 3430 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:14:47.723401 kubelet[3430]: I0314 00:14:47.723371 3430 kubelet.go:475] "Attempting to sync node with API server"
Mar 14 00:14:47.723546 kubelet[3430]: I0314 00:14:47.723526 3430 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:14:47.723702 kubelet[3430]: I0314 00:14:47.723682 3430 kubelet.go:387] "Adding apiserver pod source"
Mar 14 00:14:47.723843 kubelet[3430]: I0314 00:14:47.723804 3430 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:14:47.740143 kubelet[3430]: I0314 00:14:47.740069 3430 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:14:47.745212 kubelet[3430]: I0314 00:14:47.745165 3430 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:14:47.745880 kubelet[3430]: I0314 00:14:47.745482 3430 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:14:47.764298 kubelet[3430]: I0314 00:14:47.764250 3430 server.go:1262] "Started kubelet"
Mar 14 00:14:47.775855 kubelet[3430]: I0314 00:14:47.772326 3430 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:14:47.776434 kubelet[3430]: I0314 00:14:47.776405 3430 server.go:310] "Adding debug handlers to kubelet server"
Mar 14 00:14:47.786947 kubelet[3430]: I0314 00:14:47.786867 3430 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:14:47.787186 kubelet[3430]: I0314 00:14:47.787160 3430 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:14:47.787540 kubelet[3430]: I0314 00:14:47.787517 3430 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:14:47.794911 kubelet[3430]: I0314 00:14:47.793595 3430 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:14:47.803177 kubelet[3430]: I0314 00:14:47.803140 3430 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:14:47.809590 kubelet[3430]: I0314 00:14:47.804637 3430 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 14 00:14:47.815551 kubelet[3430]: I0314 00:14:47.804679 3430 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:14:47.817429 kubelet[3430]: E0314 00:14:47.805392 3430 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-26-39\" not found"
Mar 14 00:14:47.818762 kubelet[3430]: I0314 00:14:47.818052 3430 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:14:47.828180 kubelet[3430]: I0314 00:14:47.828125 3430 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:14:47.833417 kubelet[3430]: I0314 00:14:47.833324 3430 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:14:47.839095 kubelet[3430]: E0314 00:14:47.838951 3430 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:14:47.843550 kubelet[3430]: I0314 00:14:47.843382 3430 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:14:47.876706 kubelet[3430]: I0314 00:14:47.876604 3430 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:14:47.885790 kubelet[3430]: I0314 00:14:47.885729 3430 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:14:47.885790 kubelet[3430]: I0314 00:14:47.885777 3430 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 14 00:14:47.886032 kubelet[3430]: I0314 00:14:47.885814 3430 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 14 00:14:47.886032 kubelet[3430]: E0314 00:14:47.885925 3430 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:14:47.982327 kubelet[3430]: I0314 00:14:47.982280 3430 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 14 00:14:47.982327 kubelet[3430]: I0314 00:14:47.982314 3430 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 14 00:14:47.982566 kubelet[3430]: I0314 00:14:47.982355 3430 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:14:47.982650 kubelet[3430]: I0314 00:14:47.982585 3430 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 14 00:14:47.982650 kubelet[3430]: I0314 00:14:47.982605 3430 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 14 00:14:47.982650 kubelet[3430]: I0314 00:14:47.982636 3430 policy_none.go:49] "None policy: Start"
Mar 14 00:14:47.982811 kubelet[3430]: I0314 00:14:47.982654 3430 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:14:47.982811 kubelet[3430]: I0314 00:14:47.982673 3430 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:14:47.982980 kubelet[3430]: I0314 00:14:47.982945 3430 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 14 00:14:47.982980 kubelet[3430]: I0314 00:14:47.982965 3430 policy_none.go:47] "Start"
Mar 14 00:14:47.986376 kubelet[3430]: E0314 00:14:47.986142 3430 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 14 00:14:47.997407 kubelet[3430]: E0314 00:14:47.997035 3430 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:14:47.997407 kubelet[3430]: I0314 00:14:47.997321 3430 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 14 00:14:47.997407 kubelet[3430]: I0314 00:14:47.997338 3430 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:14:47.998929 kubelet[3430]: I0314 00:14:47.998290 3430 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 14 00:14:48.001505 kubelet[3430]: E0314 00:14:48.001380 3430 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:14:48.132933 kubelet[3430]: I0314 00:14:48.130465 3430 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-39"
Mar 14 00:14:48.142763 kubelet[3430]: I0314 00:14:48.142726 3430 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-26-39"
Mar 14 00:14:48.143187 kubelet[3430]: I0314 00:14:48.143056 3430 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-39"
Mar 14 00:14:48.188915 kubelet[3430]: I0314 00:14:48.188658 3430 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-39"
Mar 14 00:14:48.189972 kubelet[3430]: I0314 00:14:48.189328 3430 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-39"
Mar 14 00:14:48.191990 kubelet[3430]: I0314 00:14:48.189817 3430 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-39"
Mar 14 00:14:48.200103 kubelet[3430]: E0314 00:14:48.200048 3430 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-39\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-39"
Mar 14 00:14:48.201675 kubelet[3430]: E0314 00:14:48.201625 3430 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-39\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-39"
Mar 14 00:14:48.202960 kubelet[3430]: E0314 00:14:48.202910 3430 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-39\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-26-39"
Mar 14 00:14:48.223351 kubelet[3430]: I0314 00:14:48.223299 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5969e81ecd379fc5e46f5e88c14adbe-ca-certs\") pod \"kube-apiserver-ip-172-31-26-39\" (UID: \"b5969e81ecd379fc5e46f5e88c14adbe\") " pod="kube-system/kube-apiserver-ip-172-31-26-39"
Mar 14 00:14:48.223498 kubelet[3430]: I0314 00:14:48.223390 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5969e81ecd379fc5e46f5e88c14adbe-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-39\" (UID: \"b5969e81ecd379fc5e46f5e88c14adbe\") " pod="kube-system/kube-apiserver-ip-172-31-26-39"
Mar 14 00:14:48.223498 kubelet[3430]: I0314 00:14:48.223438 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5969e81ecd379fc5e46f5e88c14adbe-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-39\" (UID: \"b5969e81ecd379fc5e46f5e88c14adbe\") " pod="kube-system/kube-apiserver-ip-172-31-26-39"
Mar 14 00:14:48.223498 kubelet[3430]: I0314 00:14:48.223474 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f283a0aeb639f8ba5c7911f2cda0765a-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-39\" (UID: \"f283a0aeb639f8ba5c7911f2cda0765a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-39"
Mar 14 00:14:48.223700 kubelet[3430]: I0314 00:14:48.223514 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f283a0aeb639f8ba5c7911f2cda0765a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-39\" (UID: \"f283a0aeb639f8ba5c7911f2cda0765a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-39"
Mar 14 00:14:48.223700 kubelet[3430]: I0314 00:14:48.223551 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f283a0aeb639f8ba5c7911f2cda0765a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-39\" (UID: \"f283a0aeb639f8ba5c7911f2cda0765a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-39"
Mar 14 00:14:48.223700 kubelet[3430]: I0314 00:14:48.223587 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f06136cc18dd4034320bab8809e947b5-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-39\" (UID: \"f06136cc18dd4034320bab8809e947b5\") " pod="kube-system/kube-scheduler-ip-172-31-26-39"
Mar 14 00:14:48.223700 kubelet[3430]: I0314 00:14:48.223621 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f283a0aeb639f8ba5c7911f2cda0765a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-39\" (UID: \"f283a0aeb639f8ba5c7911f2cda0765a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-39"
Mar 14 00:14:48.223700 kubelet[3430]: I0314 00:14:48.223671 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f283a0aeb639f8ba5c7911f2cda0765a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-39\" (UID: \"f283a0aeb639f8ba5c7911f2cda0765a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-39"
Mar 14 00:14:48.732076 kubelet[3430]: I0314 00:14:48.732004 3430 apiserver.go:52] "Watching apiserver"
Mar 14 00:14:48.817781 kubelet[3430]: I0314 00:14:48.817722 3430 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 14 00:14:48.922046 kubelet[3430]: I0314 00:14:48.921921 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-39" podStartSLOduration=2.921898128 podStartE2EDuration="2.921898128s" podCreationTimestamp="2026-03-14 00:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:48.91703604 +0000 UTC m=+1.347935408" watchObservedRunningTime="2026-03-14 00:14:48.921898128 +0000 UTC m=+1.352797484"
Mar 14 00:14:48.922257 kubelet[3430]: I0314 00:14:48.922127 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-39" podStartSLOduration=3.922116636 podStartE2EDuration="3.922116636s" podCreationTimestamp="2026-03-14 00:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:48.895515372 +0000 UTC m=+1.326414752" watchObservedRunningTime="2026-03-14 00:14:48.922116636 +0000 UTC m=+1.353015980"
Mar 14 00:14:48.944901 kubelet[3430]: I0314 00:14:48.943042 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-39" podStartSLOduration=2.943011648 podStartE2EDuration="2.943011648s" podCreationTimestamp="2026-03-14 00:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:48.942728256 +0000 UTC m=+1.373627624" watchObservedRunningTime="2026-03-14 00:14:48.943011648 +0000 UTC m=+1.373911004"
Mar 14 00:14:48.966519 kubelet[3430]: I0314 00:14:48.964641 3430 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-39"
Mar 14 00:14:48.978239 kubelet[3430]: E0314 00:14:48.978149 3430 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-39\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-39"
Mar 14 00:14:51.610461 kubelet[3430]: I0314 00:14:51.610408 3430 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 14 00:14:51.611155 containerd[2011]: time="2026-03-14T00:14:51.610970401Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 14 00:14:51.615017 kubelet[3430]: I0314 00:14:51.611306 3430 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 14 00:14:52.723973 systemd[1]: Created slice kubepods-besteffort-podc085b18b_60da_41af_a323_756a90764f4a.slice - libcontainer container kubepods-besteffort-podc085b18b_60da_41af_a323_756a90764f4a.slice.
Mar 14 00:14:52.753359 kubelet[3430]: I0314 00:14:52.752964 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c085b18b-60da-41af-a323-756a90764f4a-kube-proxy\") pod \"kube-proxy-2lb7m\" (UID: \"c085b18b-60da-41af-a323-756a90764f4a\") " pod="kube-system/kube-proxy-2lb7m"
Mar 14 00:14:52.753359 kubelet[3430]: I0314 00:14:52.753085 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c085b18b-60da-41af-a323-756a90764f4a-xtables-lock\") pod \"kube-proxy-2lb7m\" (UID: \"c085b18b-60da-41af-a323-756a90764f4a\") " pod="kube-system/kube-proxy-2lb7m"
Mar 14 00:14:52.753359 kubelet[3430]: I0314 00:14:52.753212 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c085b18b-60da-41af-a323-756a90764f4a-lib-modules\") pod \"kube-proxy-2lb7m\" (UID: \"c085b18b-60da-41af-a323-756a90764f4a\") " pod="kube-system/kube-proxy-2lb7m"
Mar 14 00:14:52.753359 kubelet[3430]: I0314 00:14:52.753277 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhczv\" (UniqueName: \"kubernetes.io/projected/c085b18b-60da-41af-a323-756a90764f4a-kube-api-access-nhczv\") pod \"kube-proxy-2lb7m\" (UID: \"c085b18b-60da-41af-a323-756a90764f4a\") " pod="kube-system/kube-proxy-2lb7m"
Mar 14 00:14:52.937284 systemd[1]: Created slice kubepods-besteffort-pod77df9720_625d_4800_9a52_abe91363bfc2.slice - libcontainer container kubepods-besteffort-pod77df9720_625d_4800_9a52_abe91363bfc2.slice.
Mar 14 00:14:52.954436 kubelet[3430]: I0314 00:14:52.954382 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77df9720-625d-4800-9a52-abe91363bfc2-var-lib-calico\") pod \"tigera-operator-5588576f44-rg8rq\" (UID: \"77df9720-625d-4800-9a52-abe91363bfc2\") " pod="tigera-operator/tigera-operator-5588576f44-rg8rq"
Mar 14 00:14:52.954785 kubelet[3430]: I0314 00:14:52.954691 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnq2x\" (UniqueName: \"kubernetes.io/projected/77df9720-625d-4800-9a52-abe91363bfc2-kube-api-access-pnq2x\") pod \"tigera-operator-5588576f44-rg8rq\" (UID: \"77df9720-625d-4800-9a52-abe91363bfc2\") " pod="tigera-operator/tigera-operator-5588576f44-rg8rq"
Mar 14 00:14:53.040007 containerd[2011]: time="2026-03-14T00:14:53.039914472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2lb7m,Uid:c085b18b-60da-41af-a323-756a90764f4a,Namespace:kube-system,Attempt:0,}"
Mar 14 00:14:53.095538 containerd[2011]: time="2026-03-14T00:14:53.095029704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:14:53.095538 containerd[2011]: time="2026-03-14T00:14:53.095136300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:14:53.095538 containerd[2011]: time="2026-03-14T00:14:53.095171952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:14:53.095538 containerd[2011]: time="2026-03-14T00:14:53.095345568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:14:53.137200 systemd[1]: Started cri-containerd-ffddb8b4de36ff4c31c51d3eb1e64ce58a56b43239b4806f453aa14d2d278b65.scope - libcontainer container ffddb8b4de36ff4c31c51d3eb1e64ce58a56b43239b4806f453aa14d2d278b65.
Mar 14 00:14:53.190025 containerd[2011]: time="2026-03-14T00:14:53.189967945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2lb7m,Uid:c085b18b-60da-41af-a323-756a90764f4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffddb8b4de36ff4c31c51d3eb1e64ce58a56b43239b4806f453aa14d2d278b65\""
Mar 14 00:14:53.201804 containerd[2011]: time="2026-03-14T00:14:53.201736801Z" level=info msg="CreateContainer within sandbox \"ffddb8b4de36ff4c31c51d3eb1e64ce58a56b43239b4806f453aa14d2d278b65\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 14 00:14:53.219915 containerd[2011]: time="2026-03-14T00:14:53.219804469Z" level=info msg="CreateContainer within sandbox \"ffddb8b4de36ff4c31c51d3eb1e64ce58a56b43239b4806f453aa14d2d278b65\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d015229bd1a7f16537f334d39fdc3c7d0bfdbd070f47d0907fa5a143c72ec3b1\""
Mar 14 00:14:53.220690 containerd[2011]: time="2026-03-14T00:14:53.220634905Z" level=info msg="StartContainer for \"d015229bd1a7f16537f334d39fdc3c7d0bfdbd070f47d0907fa5a143c72ec3b1\""
Mar 14 00:14:53.254653 containerd[2011]: time="2026-03-14T00:14:53.254043433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-rg8rq,Uid:77df9720-625d-4800-9a52-abe91363bfc2,Namespace:tigera-operator,Attempt:0,}"
Mar 14 00:14:53.272120 systemd[1]: Started cri-containerd-d015229bd1a7f16537f334d39fdc3c7d0bfdbd070f47d0907fa5a143c72ec3b1.scope - libcontainer container d015229bd1a7f16537f334d39fdc3c7d0bfdbd070f47d0907fa5a143c72ec3b1.
Mar 14 00:14:53.309099 containerd[2011]: time="2026-03-14T00:14:53.306496105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:14:53.309099 containerd[2011]: time="2026-03-14T00:14:53.306659533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:14:53.309099 containerd[2011]: time="2026-03-14T00:14:53.306698761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:14:53.309099 containerd[2011]: time="2026-03-14T00:14:53.307023409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:14:53.358303 systemd[1]: Started cri-containerd-5d16732c52a5449d80b62a4c983d04b585f139be73a5c56da762c6f5392b5c8a.scope - libcontainer container 5d16732c52a5449d80b62a4c983d04b585f139be73a5c56da762c6f5392b5c8a.
Mar 14 00:14:53.365287 containerd[2011]: time="2026-03-14T00:14:53.365166302Z" level=info msg="StartContainer for \"d015229bd1a7f16537f334d39fdc3c7d0bfdbd070f47d0907fa5a143c72ec3b1\" returns successfully" Mar 14 00:14:53.453583 containerd[2011]: time="2026-03-14T00:14:53.453504638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-rg8rq,Uid:77df9720-625d-4800-9a52-abe91363bfc2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5d16732c52a5449d80b62a4c983d04b585f139be73a5c56da762c6f5392b5c8a\"" Mar 14 00:14:53.456899 containerd[2011]: time="2026-03-14T00:14:53.456536162Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 14 00:14:54.015925 kubelet[3430]: I0314 00:14:54.014418 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2lb7m" podStartSLOduration=2.014395477 podStartE2EDuration="2.014395477s" podCreationTimestamp="2026-03-14 00:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:54.013380841 +0000 UTC m=+6.444280197" watchObservedRunningTime="2026-03-14 00:14:54.014395477 +0000 UTC m=+6.445294821" Mar 14 00:14:54.664309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3275262314.mount: Deactivated successfully. 
Mar 14 00:14:55.720866 containerd[2011]: time="2026-03-14T00:14:55.719272625Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:55.722477 containerd[2011]: time="2026-03-14T00:14:55.722432369Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 14 00:14:55.724021 containerd[2011]: time="2026-03-14T00:14:55.723978749Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:55.727649 containerd[2011]: time="2026-03-14T00:14:55.727587233Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:55.729583 containerd[2011]: time="2026-03-14T00:14:55.729522773Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.272817963s" Mar 14 00:14:55.729698 containerd[2011]: time="2026-03-14T00:14:55.729579497Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 14 00:14:55.739134 containerd[2011]: time="2026-03-14T00:14:55.739077749Z" level=info msg="CreateContainer within sandbox \"5d16732c52a5449d80b62a4c983d04b585f139be73a5c56da762c6f5392b5c8a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 14 00:14:55.760466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2410727184.mount: Deactivated successfully. 
Mar 14 00:14:55.763105 containerd[2011]: time="2026-03-14T00:14:55.763039434Z" level=info msg="CreateContainer within sandbox \"5d16732c52a5449d80b62a4c983d04b585f139be73a5c56da762c6f5392b5c8a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7abc7397f798da1ec9c4cdc9f7341859406905abda1343d94e0e94126c9577c0\"" Mar 14 00:14:55.764267 containerd[2011]: time="2026-03-14T00:14:55.764177982Z" level=info msg="StartContainer for \"7abc7397f798da1ec9c4cdc9f7341859406905abda1343d94e0e94126c9577c0\"" Mar 14 00:14:55.819185 systemd[1]: Started cri-containerd-7abc7397f798da1ec9c4cdc9f7341859406905abda1343d94e0e94126c9577c0.scope - libcontainer container 7abc7397f798da1ec9c4cdc9f7341859406905abda1343d94e0e94126c9577c0. Mar 14 00:14:55.874811 containerd[2011]: time="2026-03-14T00:14:55.874754082Z" level=info msg="StartContainer for \"7abc7397f798da1ec9c4cdc9f7341859406905abda1343d94e0e94126c9577c0\" returns successfully" Mar 14 00:14:57.960870 kubelet[3430]: I0314 00:14:57.960764 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-rg8rq" podStartSLOduration=3.68495301 podStartE2EDuration="5.960744885s" podCreationTimestamp="2026-03-14 00:14:52 +0000 UTC" firstStartedPulling="2026-03-14 00:14:53.455879846 +0000 UTC m=+5.886779190" lastFinishedPulling="2026-03-14 00:14:55.731671733 +0000 UTC m=+8.162571065" observedRunningTime="2026-03-14 00:14:56.040237287 +0000 UTC m=+8.471136643" watchObservedRunningTime="2026-03-14 00:14:57.960744885 +0000 UTC m=+10.391644217" Mar 14 00:15:04.852531 sudo[2330]: pam_unix(sudo:session): session closed for user root Mar 14 00:15:04.941186 sshd[2327]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:04.948539 systemd[1]: sshd@6-172.31.26.39:22-68.220.241.50:56722.service: Deactivated successfully. Mar 14 00:15:04.955427 systemd[1]: session-7.scope: Deactivated successfully. 
Mar 14 00:15:04.957999 systemd[1]: session-7.scope: Consumed 12.810s CPU time, 154.5M memory peak, 0B memory swap peak. Mar 14 00:15:04.963091 systemd-logind[1988]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:15:04.966619 systemd-logind[1988]: Removed session 7. Mar 14 00:15:17.504610 systemd[1]: Created slice kubepods-besteffort-pod346bdc6d_179c_4f97_b2a4_583fce1cc26c.slice - libcontainer container kubepods-besteffort-pod346bdc6d_179c_4f97_b2a4_583fce1cc26c.slice. Mar 14 00:15:17.534956 kubelet[3430]: I0314 00:15:17.534500 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/346bdc6d-179c-4f97-b2a4-583fce1cc26c-tigera-ca-bundle\") pod \"calico-typha-77879b854-g4czt\" (UID: \"346bdc6d-179c-4f97-b2a4-583fce1cc26c\") " pod="calico-system/calico-typha-77879b854-g4czt" Mar 14 00:15:17.534956 kubelet[3430]: I0314 00:15:17.534578 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/346bdc6d-179c-4f97-b2a4-583fce1cc26c-typha-certs\") pod \"calico-typha-77879b854-g4czt\" (UID: \"346bdc6d-179c-4f97-b2a4-583fce1cc26c\") " pod="calico-system/calico-typha-77879b854-g4czt" Mar 14 00:15:17.534956 kubelet[3430]: I0314 00:15:17.534618 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kbrc\" (UniqueName: \"kubernetes.io/projected/346bdc6d-179c-4f97-b2a4-583fce1cc26c-kube-api-access-7kbrc\") pod \"calico-typha-77879b854-g4czt\" (UID: \"346bdc6d-179c-4f97-b2a4-583fce1cc26c\") " pod="calico-system/calico-typha-77879b854-g4czt" Mar 14 00:15:17.735310 systemd[1]: Created slice kubepods-besteffort-pod9050044f_f50d_43f6_98c3_f394ccd86059.slice - libcontainer container kubepods-besteffort-pod9050044f_f50d_43f6_98c3_f394ccd86059.slice. 
Mar 14 00:15:17.825662 containerd[2011]: time="2026-03-14T00:15:17.825346875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77879b854-g4czt,Uid:346bdc6d-179c-4f97-b2a4-583fce1cc26c,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:17.838024 kubelet[3430]: I0314 00:15:17.836780 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/9050044f-f50d-43f6-98c3-f394ccd86059-bpffs\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838024 kubelet[3430]: I0314 00:15:17.836871 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9050044f-f50d-43f6-98c3-f394ccd86059-lib-modules\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838024 kubelet[3430]: I0314 00:15:17.836913 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9050044f-f50d-43f6-98c3-f394ccd86059-var-run-calico\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838024 kubelet[3430]: I0314 00:15:17.836950 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9050044f-f50d-43f6-98c3-f394ccd86059-xtables-lock\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838024 kubelet[3430]: I0314 00:15:17.836993 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/9050044f-f50d-43f6-98c3-f394ccd86059-policysync\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838407 kubelet[3430]: I0314 00:15:17.837027 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9050044f-f50d-43f6-98c3-f394ccd86059-tigera-ca-bundle\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838407 kubelet[3430]: I0314 00:15:17.837067 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm5t4\" (UniqueName: \"kubernetes.io/projected/9050044f-f50d-43f6-98c3-f394ccd86059-kube-api-access-rm5t4\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838407 kubelet[3430]: I0314 00:15:17.837108 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9050044f-f50d-43f6-98c3-f394ccd86059-cni-bin-dir\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838407 kubelet[3430]: I0314 00:15:17.837146 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/9050044f-f50d-43f6-98c3-f394ccd86059-sys-fs\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838407 kubelet[3430]: I0314 00:15:17.837180 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/9050044f-f50d-43f6-98c3-f394ccd86059-var-lib-calico\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838669 kubelet[3430]: I0314 00:15:17.837221 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9050044f-f50d-43f6-98c3-f394ccd86059-cni-net-dir\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838669 kubelet[3430]: I0314 00:15:17.837255 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9050044f-f50d-43f6-98c3-f394ccd86059-cni-log-dir\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838669 kubelet[3430]: I0314 00:15:17.837288 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9050044f-f50d-43f6-98c3-f394ccd86059-node-certs\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838669 kubelet[3430]: I0314 00:15:17.837358 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9050044f-f50d-43f6-98c3-f394ccd86059-flexvol-driver-host\") pod \"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.838669 kubelet[3430]: I0314 00:15:17.837398 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/9050044f-f50d-43f6-98c3-f394ccd86059-nodeproc\") pod 
\"calico-node-wr8b4\" (UID: \"9050044f-f50d-43f6-98c3-f394ccd86059\") " pod="calico-system/calico-node-wr8b4" Mar 14 00:15:17.868466 kubelet[3430]: E0314 00:15:17.868170 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xhrv" podUID="0e308c40-a8ad-497a-822b-a95b9df4915b" Mar 14 00:15:17.903656 containerd[2011]: time="2026-03-14T00:15:17.902996632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:17.903656 containerd[2011]: time="2026-03-14T00:15:17.903197188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:17.904324 containerd[2011]: time="2026-03-14T00:15:17.903481552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:17.907204 containerd[2011]: time="2026-03-14T00:15:17.904390504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:17.938556 kubelet[3430]: I0314 00:15:17.938368 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e308c40-a8ad-497a-822b-a95b9df4915b-kubelet-dir\") pod \"csi-node-driver-4xhrv\" (UID: \"0e308c40-a8ad-497a-822b-a95b9df4915b\") " pod="calico-system/csi-node-driver-4xhrv" Mar 14 00:15:17.938556 kubelet[3430]: I0314 00:15:17.938493 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0e308c40-a8ad-497a-822b-a95b9df4915b-socket-dir\") pod \"csi-node-driver-4xhrv\" (UID: \"0e308c40-a8ad-497a-822b-a95b9df4915b\") " pod="calico-system/csi-node-driver-4xhrv" Mar 14 00:15:17.939463 kubelet[3430]: I0314 00:15:17.939212 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wrx5\" (UniqueName: \"kubernetes.io/projected/0e308c40-a8ad-497a-822b-a95b9df4915b-kube-api-access-7wrx5\") pod \"csi-node-driver-4xhrv\" (UID: \"0e308c40-a8ad-497a-822b-a95b9df4915b\") " pod="calico-system/csi-node-driver-4xhrv" Mar 14 00:15:17.939702 kubelet[3430]: I0314 00:15:17.939561 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0e308c40-a8ad-497a-822b-a95b9df4915b-varrun\") pod \"csi-node-driver-4xhrv\" (UID: \"0e308c40-a8ad-497a-822b-a95b9df4915b\") " pod="calico-system/csi-node-driver-4xhrv" Mar 14 00:15:17.940335 kubelet[3430]: I0314 00:15:17.940018 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0e308c40-a8ad-497a-822b-a95b9df4915b-registration-dir\") pod \"csi-node-driver-4xhrv\" (UID: \"0e308c40-a8ad-497a-822b-a95b9df4915b\") " 
pod="calico-system/csi-node-driver-4xhrv" Mar 14 00:15:17.962978 kubelet[3430]: E0314 00:15:17.962380 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:17.962978 kubelet[3430]: W0314 00:15:17.962423 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:17.962978 kubelet[3430]: E0314 00:15:17.962474 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:17.970224 kubelet[3430]: E0314 00:15:17.970121 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:17.970224 kubelet[3430]: W0314 00:15:17.970162 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:17.970224 kubelet[3430]: E0314 00:15:17.970201 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:17.984070 kubelet[3430]: E0314 00:15:17.984031 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:17.985350 kubelet[3430]: W0314 00:15:17.984953 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:17.985350 kubelet[3430]: E0314 00:15:17.985009 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:17.990239 systemd[1]: Started cri-containerd-05fdefcd5835d57af858b94fe2a4f5a8f66eb7279fff21602899bd7cdbdcdb84.scope - libcontainer container 05fdefcd5835d57af858b94fe2a4f5a8f66eb7279fff21602899bd7cdbdcdb84. Mar 14 00:15:18.041662 kubelet[3430]: E0314 00:15:18.041607 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.041662 kubelet[3430]: W0314 00:15:18.041648 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.041946 kubelet[3430]: E0314 00:15:18.041706 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.042330 kubelet[3430]: E0314 00:15:18.042262 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.042330 kubelet[3430]: W0314 00:15:18.042309 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.043142 kubelet[3430]: E0314 00:15:18.042337 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.043142 kubelet[3430]: E0314 00:15:18.042810 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.043142 kubelet[3430]: W0314 00:15:18.043046 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.043142 kubelet[3430]: E0314 00:15:18.043084 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.044277 kubelet[3430]: E0314 00:15:18.044217 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.044277 kubelet[3430]: W0314 00:15:18.044279 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.044484 kubelet[3430]: E0314 00:15:18.044314 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.046262 kubelet[3430]: E0314 00:15:18.046205 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.046262 kubelet[3430]: W0314 00:15:18.046247 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.048632 kubelet[3430]: E0314 00:15:18.046283 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.048632 kubelet[3430]: E0314 00:15:18.047034 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.048632 kubelet[3430]: W0314 00:15:18.047062 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.048632 kubelet[3430]: E0314 00:15:18.047092 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.048632 kubelet[3430]: E0314 00:15:18.047867 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.048632 kubelet[3430]: W0314 00:15:18.047895 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.048632 kubelet[3430]: E0314 00:15:18.047927 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.049626 kubelet[3430]: E0314 00:15:18.049101 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.049626 kubelet[3430]: W0314 00:15:18.049145 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.049626 kubelet[3430]: E0314 00:15:18.049180 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.050973 kubelet[3430]: E0314 00:15:18.050922 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.051089 kubelet[3430]: W0314 00:15:18.050974 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.051089 kubelet[3430]: E0314 00:15:18.051009 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.053190 containerd[2011]: time="2026-03-14T00:15:18.053104440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wr8b4,Uid:9050044f-f50d-43f6-98c3-f394ccd86059,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:18.053930 kubelet[3430]: E0314 00:15:18.053878 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.053930 kubelet[3430]: W0314 00:15:18.053921 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.054137 kubelet[3430]: E0314 00:15:18.053957 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.058039 kubelet[3430]: E0314 00:15:18.057952 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.058039 kubelet[3430]: W0314 00:15:18.058014 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.058273 kubelet[3430]: E0314 00:15:18.058051 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.058732 kubelet[3430]: E0314 00:15:18.058680 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.059002 kubelet[3430]: W0314 00:15:18.058946 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.059071 kubelet[3430]: E0314 00:15:18.058995 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.060393 kubelet[3430]: E0314 00:15:18.060329 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.061212 kubelet[3430]: W0314 00:15:18.060374 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.062738 kubelet[3430]: E0314 00:15:18.061221 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.063245 kubelet[3430]: E0314 00:15:18.063182 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.063245 kubelet[3430]: W0314 00:15:18.063224 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.063387 kubelet[3430]: E0314 00:15:18.063264 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.063877 kubelet[3430]: E0314 00:15:18.063803 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.063877 kubelet[3430]: W0314 00:15:18.063851 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.064199 kubelet[3430]: E0314 00:15:18.063887 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.068440 kubelet[3430]: E0314 00:15:18.067997 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.068440 kubelet[3430]: W0314 00:15:18.068034 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.068440 kubelet[3430]: E0314 00:15:18.068068 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.069594 kubelet[3430]: E0314 00:15:18.069507 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.069594 kubelet[3430]: W0314 00:15:18.069534 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.069594 kubelet[3430]: E0314 00:15:18.069566 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.070653 kubelet[3430]: E0314 00:15:18.070487 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.070653 kubelet[3430]: W0314 00:15:18.070527 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.070653 kubelet[3430]: E0314 00:15:18.070559 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.072669 kubelet[3430]: E0314 00:15:18.072597 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.072801 kubelet[3430]: W0314 00:15:18.072641 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.072801 kubelet[3430]: E0314 00:15:18.072746 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.076078 kubelet[3430]: E0314 00:15:18.075022 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.076078 kubelet[3430]: W0314 00:15:18.075067 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.076078 kubelet[3430]: E0314 00:15:18.075102 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.076078 kubelet[3430]: E0314 00:15:18.075585 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.076078 kubelet[3430]: W0314 00:15:18.075607 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.076078 kubelet[3430]: E0314 00:15:18.075629 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.076461 kubelet[3430]: E0314 00:15:18.076136 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.076461 kubelet[3430]: W0314 00:15:18.076158 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.076461 kubelet[3430]: E0314 00:15:18.076182 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.080286 kubelet[3430]: E0314 00:15:18.078911 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.080286 kubelet[3430]: W0314 00:15:18.078952 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.080286 kubelet[3430]: E0314 00:15:18.078988 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.084135 kubelet[3430]: E0314 00:15:18.084077 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.084135 kubelet[3430]: W0314 00:15:18.084115 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.084336 kubelet[3430]: E0314 00:15:18.084150 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.090361 kubelet[3430]: E0314 00:15:18.088181 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.090361 kubelet[3430]: W0314 00:15:18.088223 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.090361 kubelet[3430]: E0314 00:15:18.088265 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.135049 containerd[2011]: time="2026-03-14T00:15:18.133902229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:18.135049 containerd[2011]: time="2026-03-14T00:15:18.134007997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:18.135049 containerd[2011]: time="2026-03-14T00:15:18.134044561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:18.135049 containerd[2011]: time="2026-03-14T00:15:18.134246365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:18.162657 kubelet[3430]: E0314 00:15:18.162513 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.162657 kubelet[3430]: W0314 00:15:18.162555 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.162657 kubelet[3430]: E0314 00:15:18.162589 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.187200 systemd[1]: Started cri-containerd-22fd1d1849e8fe1b640e59e8290d089e418e8391e1827f85e277ecf7c65d4cb8.scope - libcontainer container 22fd1d1849e8fe1b640e59e8290d089e418e8391e1827f85e277ecf7c65d4cb8. 
Mar 14 00:15:18.267999 containerd[2011]: time="2026-03-14T00:15:18.267731101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77879b854-g4czt,Uid:346bdc6d-179c-4f97-b2a4-583fce1cc26c,Namespace:calico-system,Attempt:0,} returns sandbox id \"05fdefcd5835d57af858b94fe2a4f5a8f66eb7279fff21602899bd7cdbdcdb84\"" Mar 14 00:15:18.273891 containerd[2011]: time="2026-03-14T00:15:18.273640069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 14 00:15:18.341543 containerd[2011]: time="2026-03-14T00:15:18.341215118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wr8b4,Uid:9050044f-f50d-43f6-98c3-f394ccd86059,Namespace:calico-system,Attempt:0,} returns sandbox id \"22fd1d1849e8fe1b640e59e8290d089e418e8391e1827f85e277ecf7c65d4cb8\"" Mar 14 00:15:18.663390 systemd[1]: run-containerd-runc-k8s.io-05fdefcd5835d57af858b94fe2a4f5a8f66eb7279fff21602899bd7cdbdcdb84-runc.aUA1P2.mount: Deactivated successfully. Mar 14 00:15:19.638521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3870134725.mount: Deactivated successfully. 
Mar 14 00:15:19.887693 kubelet[3430]: E0314 00:15:19.887622 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xhrv" podUID="0e308c40-a8ad-497a-822b-a95b9df4915b" Mar 14 00:15:20.758865 containerd[2011]: time="2026-03-14T00:15:20.758087118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:20.760291 containerd[2011]: time="2026-03-14T00:15:20.760243530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 14 00:15:20.761416 containerd[2011]: time="2026-03-14T00:15:20.761371686Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:20.765862 containerd[2011]: time="2026-03-14T00:15:20.765604242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:20.767566 containerd[2011]: time="2026-03-14T00:15:20.767389350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.493685069s" Mar 14 00:15:20.767566 containerd[2011]: time="2026-03-14T00:15:20.767443038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 14 00:15:20.770696 containerd[2011]: time="2026-03-14T00:15:20.770470722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 14 00:15:20.804902 containerd[2011]: time="2026-03-14T00:15:20.802858290Z" level=info msg="CreateContainer within sandbox \"05fdefcd5835d57af858b94fe2a4f5a8f66eb7279fff21602899bd7cdbdcdb84\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 14 00:15:20.852740 containerd[2011]: time="2026-03-14T00:15:20.852679014Z" level=info msg="CreateContainer within sandbox \"05fdefcd5835d57af858b94fe2a4f5a8f66eb7279fff21602899bd7cdbdcdb84\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1ac9e13c31f8a4c6fcae19c25121e412ffa3378d024443792e5ed870dd7b6f28\"" Mar 14 00:15:20.858632 containerd[2011]: time="2026-03-14T00:15:20.854954934Z" level=info msg="StartContainer for \"1ac9e13c31f8a4c6fcae19c25121e412ffa3378d024443792e5ed870dd7b6f28\"" Mar 14 00:15:20.926172 systemd[1]: Started cri-containerd-1ac9e13c31f8a4c6fcae19c25121e412ffa3378d024443792e5ed870dd7b6f28.scope - libcontainer container 1ac9e13c31f8a4c6fcae19c25121e412ffa3378d024443792e5ed870dd7b6f28. 
Mar 14 00:15:20.990998 containerd[2011]: time="2026-03-14T00:15:20.990717259Z" level=info msg="StartContainer for \"1ac9e13c31f8a4c6fcae19c25121e412ffa3378d024443792e5ed870dd7b6f28\" returns successfully" Mar 14 00:15:21.128976 kubelet[3430]: E0314 00:15:21.127694 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.128976 kubelet[3430]: W0314 00:15:21.127796 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.128976 kubelet[3430]: E0314 00:15:21.127862 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.135416 kubelet[3430]: E0314 00:15:21.133423 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.135416 kubelet[3430]: W0314 00:15:21.133461 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.135416 kubelet[3430]: E0314 00:15:21.133546 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.137350 kubelet[3430]: E0314 00:15:21.136164 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.137350 kubelet[3430]: W0314 00:15:21.136312 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.137350 kubelet[3430]: E0314 00:15:21.136349 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.141049 kubelet[3430]: E0314 00:15:21.137973 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.141049 kubelet[3430]: W0314 00:15:21.138004 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.141049 kubelet[3430]: E0314 00:15:21.138034 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.142685 kubelet[3430]: E0314 00:15:21.142026 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.142685 kubelet[3430]: W0314 00:15:21.142061 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.142685 kubelet[3430]: E0314 00:15:21.142095 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.143981 kubelet[3430]: E0314 00:15:21.143942 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.146214 kubelet[3430]: W0314 00:15:21.145676 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.146214 kubelet[3430]: E0314 00:15:21.145739 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.148040 kubelet[3430]: E0314 00:15:21.147996 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.148276 kubelet[3430]: W0314 00:15:21.148246 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.148580 kubelet[3430]: E0314 00:15:21.148366 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.149360 kubelet[3430]: E0314 00:15:21.149328 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.149762 kubelet[3430]: W0314 00:15:21.149732 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.151099 kubelet[3430]: E0314 00:15:21.150106 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.152575 kubelet[3430]: E0314 00:15:21.152293 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.152575 kubelet[3430]: W0314 00:15:21.152341 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.152575 kubelet[3430]: E0314 00:15:21.152376 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.153332 kubelet[3430]: E0314 00:15:21.153181 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.153332 kubelet[3430]: W0314 00:15:21.153212 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.153332 kubelet[3430]: E0314 00:15:21.153242 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.154291 kubelet[3430]: E0314 00:15:21.154042 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.154291 kubelet[3430]: W0314 00:15:21.154070 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.154291 kubelet[3430]: E0314 00:15:21.154098 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.154617 kubelet[3430]: E0314 00:15:21.154593 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.154746 kubelet[3430]: W0314 00:15:21.154721 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.155050 kubelet[3430]: E0314 00:15:21.154861 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.155457 kubelet[3430]: E0314 00:15:21.155426 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.156492 kubelet[3430]: W0314 00:15:21.155591 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.156492 kubelet[3430]: E0314 00:15:21.155628 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.157933 kubelet[3430]: E0314 00:15:21.157111 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.157933 kubelet[3430]: W0314 00:15:21.157146 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.157933 kubelet[3430]: E0314 00:15:21.157179 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.158756 kubelet[3430]: E0314 00:15:21.158585 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.158756 kubelet[3430]: W0314 00:15:21.158617 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.158756 kubelet[3430]: E0314 00:15:21.158653 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.188521 kubelet[3430]: E0314 00:15:21.188424 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.188521 kubelet[3430]: W0314 00:15:21.188467 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.188521 kubelet[3430]: E0314 00:15:21.188504 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.190309 kubelet[3430]: E0314 00:15:21.190249 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.190309 kubelet[3430]: W0314 00:15:21.190296 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.190309 kubelet[3430]: E0314 00:15:21.190330 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.191906 kubelet[3430]: E0314 00:15:21.191818 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.191906 kubelet[3430]: W0314 00:15:21.191896 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.192124 kubelet[3430]: E0314 00:15:21.191931 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.193304 kubelet[3430]: E0314 00:15:21.193233 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.193304 kubelet[3430]: W0314 00:15:21.193299 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.193533 kubelet[3430]: E0314 00:15:21.193338 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.194892 kubelet[3430]: E0314 00:15:21.194811 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.195108 kubelet[3430]: W0314 00:15:21.194899 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.195108 kubelet[3430]: E0314 00:15:21.195013 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.196331 kubelet[3430]: E0314 00:15:21.196270 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.196331 kubelet[3430]: W0314 00:15:21.196310 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.197957 kubelet[3430]: E0314 00:15:21.196346 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.197957 kubelet[3430]: E0314 00:15:21.196907 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.197957 kubelet[3430]: W0314 00:15:21.196931 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.197957 kubelet[3430]: E0314 00:15:21.196959 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.197957 kubelet[3430]: E0314 00:15:21.197393 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.197957 kubelet[3430]: W0314 00:15:21.197416 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.197957 kubelet[3430]: E0314 00:15:21.197441 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.199111 kubelet[3430]: E0314 00:15:21.199044 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.199111 kubelet[3430]: W0314 00:15:21.199086 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.199346 kubelet[3430]: E0314 00:15:21.199120 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.199652 kubelet[3430]: E0314 00:15:21.199610 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.199652 kubelet[3430]: W0314 00:15:21.199645 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.199796 kubelet[3430]: E0314 00:15:21.199675 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.200571 kubelet[3430]: E0314 00:15:21.200193 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.200571 kubelet[3430]: W0314 00:15:21.200218 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.200571 kubelet[3430]: E0314 00:15:21.200244 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.200786 kubelet[3430]: E0314 00:15:21.200624 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.200786 kubelet[3430]: W0314 00:15:21.200644 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.200786 kubelet[3430]: E0314 00:15:21.200668 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.201375 kubelet[3430]: E0314 00:15:21.201158 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.201375 kubelet[3430]: W0314 00:15:21.201193 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.201375 kubelet[3430]: E0314 00:15:21.201221 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.202795 kubelet[3430]: E0314 00:15:21.202670 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.202795 kubelet[3430]: W0314 00:15:21.202710 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.202795 kubelet[3430]: E0314 00:15:21.202743 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.203556 kubelet[3430]: E0314 00:15:21.203482 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.203556 kubelet[3430]: W0314 00:15:21.203521 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.203556 kubelet[3430]: E0314 00:15:21.203549 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.205312 kubelet[3430]: E0314 00:15:21.205254 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.205312 kubelet[3430]: W0314 00:15:21.205308 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.205470 kubelet[3430]: E0314 00:15:21.205344 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.207543 kubelet[3430]: E0314 00:15:21.207480 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.207543 kubelet[3430]: W0314 00:15:21.207526 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.208103 kubelet[3430]: E0314 00:15:21.207562 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:21.208441 kubelet[3430]: E0314 00:15:21.208399 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:21.208441 kubelet[3430]: W0314 00:15:21.208436 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:21.208713 kubelet[3430]: E0314 00:15:21.208468 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:21.783497 systemd[1]: run-containerd-runc-k8s.io-1ac9e13c31f8a4c6fcae19c25121e412ffa3378d024443792e5ed870dd7b6f28-runc.6dYasL.mount: Deactivated successfully. Mar 14 00:15:21.889303 kubelet[3430]: E0314 00:15:21.889213 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xhrv" podUID="0e308c40-a8ad-497a-822b-a95b9df4915b" Mar 14 00:15:22.010622 containerd[2011]: time="2026-03-14T00:15:22.010529764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:22.012660 containerd[2011]: time="2026-03-14T00:15:22.012310432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 14 00:15:22.015022 containerd[2011]: time="2026-03-14T00:15:22.014756152Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:22.020208 
containerd[2011]: time="2026-03-14T00:15:22.019782448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:22.021887 containerd[2011]: time="2026-03-14T00:15:22.021538936Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.250999238s" Mar 14 00:15:22.021887 containerd[2011]: time="2026-03-14T00:15:22.021596908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 14 00:15:22.031724 containerd[2011]: time="2026-03-14T00:15:22.031530616Z" level=info msg="CreateContainer within sandbox \"22fd1d1849e8fe1b640e59e8290d089e418e8391e1827f85e277ecf7c65d4cb8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 14 00:15:22.057780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3635388275.mount: Deactivated successfully. 
Mar 14 00:15:22.065055 containerd[2011]: time="2026-03-14T00:15:22.064982860Z" level=info msg="CreateContainer within sandbox \"22fd1d1849e8fe1b640e59e8290d089e418e8391e1827f85e277ecf7c65d4cb8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e6f439b74baaba4187138682a14c209405c3d43039d69f48a68c91c4f94e3d52\"" Mar 14 00:15:22.067291 containerd[2011]: time="2026-03-14T00:15:22.067136344Z" level=info msg="StartContainer for \"e6f439b74baaba4187138682a14c209405c3d43039d69f48a68c91c4f94e3d52\"" Mar 14 00:15:22.133668 kubelet[3430]: I0314 00:15:22.132189 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77879b854-g4czt" podStartSLOduration=2.63535216 podStartE2EDuration="5.132166085s" podCreationTimestamp="2026-03-14 00:15:17 +0000 UTC" firstStartedPulling="2026-03-14 00:15:18.272389357 +0000 UTC m=+30.703288689" lastFinishedPulling="2026-03-14 00:15:20.769203282 +0000 UTC m=+33.200102614" observedRunningTime="2026-03-14 00:15:21.124540624 +0000 UTC m=+33.555439992" watchObservedRunningTime="2026-03-14 00:15:22.132166085 +0000 UTC m=+34.563065429" Mar 14 00:15:22.145136 systemd[1]: Started cri-containerd-e6f439b74baaba4187138682a14c209405c3d43039d69f48a68c91c4f94e3d52.scope - libcontainer container e6f439b74baaba4187138682a14c209405c3d43039d69f48a68c91c4f94e3d52. 
Mar 14 00:15:22.170700 kubelet[3430]: E0314 00:15:22.170413 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.170700 kubelet[3430]: W0314 00:15:22.170441 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.170700 kubelet[3430]: E0314 00:15:22.170490 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.172188 kubelet[3430]: E0314 00:15:22.172068 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.172188 kubelet[3430]: W0314 00:15:22.172128 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.172471 kubelet[3430]: E0314 00:15:22.172293 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.174461 kubelet[3430]: E0314 00:15:22.174226 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.174461 kubelet[3430]: W0314 00:15:22.174260 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.174461 kubelet[3430]: E0314 00:15:22.174319 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.176399 kubelet[3430]: E0314 00:15:22.175187 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.176399 kubelet[3430]: W0314 00:15:22.175213 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.176399 kubelet[3430]: E0314 00:15:22.175266 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.177758 kubelet[3430]: E0314 00:15:22.177700 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.178098 kubelet[3430]: W0314 00:15:22.177968 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.178364 kubelet[3430]: E0314 00:15:22.178215 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.182190 kubelet[3430]: E0314 00:15:22.181567 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.182190 kubelet[3430]: W0314 00:15:22.181604 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.182190 kubelet[3430]: E0314 00:15:22.181637 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.183533 kubelet[3430]: E0314 00:15:22.183021 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.183533 kubelet[3430]: W0314 00:15:22.183052 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.183533 kubelet[3430]: E0314 00:15:22.183083 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.184708 kubelet[3430]: E0314 00:15:22.184423 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.184708 kubelet[3430]: W0314 00:15:22.184454 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.185134 kubelet[3430]: E0314 00:15:22.184485 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.185780 kubelet[3430]: E0314 00:15:22.185617 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.185780 kubelet[3430]: W0314 00:15:22.185649 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.185780 kubelet[3430]: E0314 00:15:22.185684 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.186548 kubelet[3430]: E0314 00:15:22.186416 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.186548 kubelet[3430]: W0314 00:15:22.186443 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.186548 kubelet[3430]: E0314 00:15:22.186470 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.187306 kubelet[3430]: E0314 00:15:22.187093 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.187306 kubelet[3430]: W0314 00:15:22.187119 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.187306 kubelet[3430]: E0314 00:15:22.187144 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.188216 kubelet[3430]: E0314 00:15:22.187700 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.188216 kubelet[3430]: W0314 00:15:22.187724 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.188216 kubelet[3430]: E0314 00:15:22.187749 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.189613 kubelet[3430]: E0314 00:15:22.189576 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.191076 kubelet[3430]: W0314 00:15:22.190894 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.191076 kubelet[3430]: E0314 00:15:22.190948 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.192140 kubelet[3430]: E0314 00:15:22.192068 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.193085 kubelet[3430]: W0314 00:15:22.192102 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.193085 kubelet[3430]: E0314 00:15:22.192798 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.194985 kubelet[3430]: E0314 00:15:22.194643 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.194985 kubelet[3430]: W0314 00:15:22.194679 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.194985 kubelet[3430]: E0314 00:15:22.194758 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.199863 kubelet[3430]: E0314 00:15:22.198994 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.200294 kubelet[3430]: W0314 00:15:22.200085 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.200294 kubelet[3430]: E0314 00:15:22.200157 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.201933 kubelet[3430]: E0314 00:15:22.201197 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.201933 kubelet[3430]: W0314 00:15:22.201226 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.201933 kubelet[3430]: E0314 00:15:22.201255 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.204019 kubelet[3430]: E0314 00:15:22.203931 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.204019 kubelet[3430]: W0314 00:15:22.204064 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.204019 kubelet[3430]: E0314 00:15:22.204097 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.204782 kubelet[3430]: E0314 00:15:22.204758 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.205088 kubelet[3430]: W0314 00:15:22.204993 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.205088 kubelet[3430]: E0314 00:15:22.205060 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.207177 kubelet[3430]: E0314 00:15:22.206713 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.207177 kubelet[3430]: W0314 00:15:22.206767 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.207177 kubelet[3430]: E0314 00:15:22.206802 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.210207 kubelet[3430]: E0314 00:15:22.209880 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.210207 kubelet[3430]: W0314 00:15:22.209915 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.210207 kubelet[3430]: E0314 00:15:22.209974 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.210958 kubelet[3430]: E0314 00:15:22.210891 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.211753 kubelet[3430]: W0314 00:15:22.211122 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.211753 kubelet[3430]: E0314 00:15:22.211565 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.213268 kubelet[3430]: E0314 00:15:22.212923 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.213268 kubelet[3430]: W0314 00:15:22.212961 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.213268 kubelet[3430]: E0314 00:15:22.212993 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.214387 kubelet[3430]: E0314 00:15:22.214024 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.214387 kubelet[3430]: W0314 00:15:22.214056 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.214387 kubelet[3430]: E0314 00:15:22.214088 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.216422 kubelet[3430]: E0314 00:15:22.215905 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.216422 kubelet[3430]: W0314 00:15:22.215935 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.216422 kubelet[3430]: E0314 00:15:22.215967 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.216979 kubelet[3430]: E0314 00:15:22.216736 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.216979 kubelet[3430]: W0314 00:15:22.216765 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.216979 kubelet[3430]: E0314 00:15:22.216792 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.218698 kubelet[3430]: E0314 00:15:22.218368 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.218698 kubelet[3430]: W0314 00:15:22.218429 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.218698 kubelet[3430]: E0314 00:15:22.218462 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.221495 kubelet[3430]: E0314 00:15:22.220290 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.221495 kubelet[3430]: W0314 00:15:22.220337 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.221495 kubelet[3430]: E0314 00:15:22.220371 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.222787 kubelet[3430]: E0314 00:15:22.222679 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.223622 kubelet[3430]: W0314 00:15:22.223142 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.223622 kubelet[3430]: E0314 00:15:22.223214 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.225533 kubelet[3430]: E0314 00:15:22.225225 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.225533 kubelet[3430]: W0314 00:15:22.225287 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.225533 kubelet[3430]: E0314 00:15:22.225322 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.227759 kubelet[3430]: E0314 00:15:22.227495 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.227759 kubelet[3430]: W0314 00:15:22.227531 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.227759 kubelet[3430]: E0314 00:15:22.227565 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:22.230212 kubelet[3430]: E0314 00:15:22.229959 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.230212 kubelet[3430]: W0314 00:15:22.230009 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.230212 kubelet[3430]: E0314 00:15:22.230041 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.230976 kubelet[3430]: E0314 00:15:22.230765 3430 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:22.230976 kubelet[3430]: W0314 00:15:22.230793 3430 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:22.230976 kubelet[3430]: E0314 00:15:22.230819 3430 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:22.244978 containerd[2011]: time="2026-03-14T00:15:22.244736513Z" level=info msg="StartContainer for \"e6f439b74baaba4187138682a14c209405c3d43039d69f48a68c91c4f94e3d52\" returns successfully" Mar 14 00:15:22.274519 systemd[1]: cri-containerd-e6f439b74baaba4187138682a14c209405c3d43039d69f48a68c91c4f94e3d52.scope: Deactivated successfully. 
Mar 14 00:15:22.769474 containerd[2011]: time="2026-03-14T00:15:22.769371116Z" level=info msg="shim disconnected" id=e6f439b74baaba4187138682a14c209405c3d43039d69f48a68c91c4f94e3d52 namespace=k8s.io Mar 14 00:15:22.769474 containerd[2011]: time="2026-03-14T00:15:22.769453004Z" level=warning msg="cleaning up after shim disconnected" id=e6f439b74baaba4187138682a14c209405c3d43039d69f48a68c91c4f94e3d52 namespace=k8s.io Mar 14 00:15:22.769474 containerd[2011]: time="2026-03-14T00:15:22.769475600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:22.783273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6f439b74baaba4187138682a14c209405c3d43039d69f48a68c91c4f94e3d52-rootfs.mount: Deactivated successfully. Mar 14 00:15:23.112431 containerd[2011]: time="2026-03-14T00:15:23.110965277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 14 00:15:23.888361 kubelet[3430]: E0314 00:15:23.887798 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xhrv" podUID="0e308c40-a8ad-497a-822b-a95b9df4915b" Mar 14 00:15:25.886976 kubelet[3430]: E0314 00:15:25.886873 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xhrv" podUID="0e308c40-a8ad-497a-822b-a95b9df4915b" Mar 14 00:15:27.888879 kubelet[3430]: E0314 00:15:27.888602 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xhrv" 
podUID="0e308c40-a8ad-497a-822b-a95b9df4915b" Mar 14 00:15:29.203781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021642861.mount: Deactivated successfully. Mar 14 00:15:29.264876 containerd[2011]: time="2026-03-14T00:15:29.264779652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:29.268120 containerd[2011]: time="2026-03-14T00:15:29.268062432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 14 00:15:29.271851 containerd[2011]: time="2026-03-14T00:15:29.270979224Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:29.275664 containerd[2011]: time="2026-03-14T00:15:29.275599764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:29.280506 containerd[2011]: time="2026-03-14T00:15:29.280435800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.169383991s" Mar 14 00:15:29.280744 containerd[2011]: time="2026-03-14T00:15:29.280709772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 14 00:15:29.294659 containerd[2011]: time="2026-03-14T00:15:29.294602424Z" level=info msg="CreateContainer within sandbox 
\"22fd1d1849e8fe1b640e59e8290d089e418e8391e1827f85e277ecf7c65d4cb8\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 14 00:15:29.327766 containerd[2011]: time="2026-03-14T00:15:29.327708192Z" level=info msg="CreateContainer within sandbox \"22fd1d1849e8fe1b640e59e8290d089e418e8391e1827f85e277ecf7c65d4cb8\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"a90a0d594d04e0fae1a98f9b482e5c583f700cf4c1ccfbbb8bcd9b7d1b9c180f\"" Mar 14 00:15:29.329467 containerd[2011]: time="2026-03-14T00:15:29.329170452Z" level=info msg="StartContainer for \"a90a0d594d04e0fae1a98f9b482e5c583f700cf4c1ccfbbb8bcd9b7d1b9c180f\"" Mar 14 00:15:29.392158 systemd[1]: Started cri-containerd-a90a0d594d04e0fae1a98f9b482e5c583f700cf4c1ccfbbb8bcd9b7d1b9c180f.scope - libcontainer container a90a0d594d04e0fae1a98f9b482e5c583f700cf4c1ccfbbb8bcd9b7d1b9c180f. Mar 14 00:15:29.447421 containerd[2011]: time="2026-03-14T00:15:29.447258925Z" level=info msg="StartContainer for \"a90a0d594d04e0fae1a98f9b482e5c583f700cf4c1ccfbbb8bcd9b7d1b9c180f\" returns successfully" Mar 14 00:15:29.631733 systemd[1]: cri-containerd-a90a0d594d04e0fae1a98f9b482e5c583f700cf4c1ccfbbb8bcd9b7d1b9c180f.scope: Deactivated successfully. 
Mar 14 00:15:29.888948 kubelet[3430]: E0314 00:15:29.888695 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xhrv" podUID="0e308c40-a8ad-497a-822b-a95b9df4915b" Mar 14 00:15:30.111192 containerd[2011]: time="2026-03-14T00:15:30.111056448Z" level=info msg="shim disconnected" id=a90a0d594d04e0fae1a98f9b482e5c583f700cf4c1ccfbbb8bcd9b7d1b9c180f namespace=k8s.io Mar 14 00:15:30.111192 containerd[2011]: time="2026-03-14T00:15:30.111159024Z" level=warning msg="cleaning up after shim disconnected" id=a90a0d594d04e0fae1a98f9b482e5c583f700cf4c1ccfbbb8bcd9b7d1b9c180f namespace=k8s.io Mar 14 00:15:30.111192 containerd[2011]: time="2026-03-14T00:15:30.111182352Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:30.146325 containerd[2011]: time="2026-03-14T00:15:30.145909164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 14 00:15:30.204044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a90a0d594d04e0fae1a98f9b482e5c583f700cf4c1ccfbbb8bcd9b7d1b9c180f-rootfs.mount: Deactivated successfully. 
Mar 14 00:15:31.887351 kubelet[3430]: E0314 00:15:31.887237 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xhrv" podUID="0e308c40-a8ad-497a-822b-a95b9df4915b" Mar 14 00:15:33.127654 containerd[2011]: time="2026-03-14T00:15:33.127589559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:33.129588 containerd[2011]: time="2026-03-14T00:15:33.129530379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 14 00:15:33.130742 containerd[2011]: time="2026-03-14T00:15:33.130073187Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:33.134913 containerd[2011]: time="2026-03-14T00:15:33.134796987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:33.136876 containerd[2011]: time="2026-03-14T00:15:33.136791699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.990816175s" Mar 14 00:15:33.137028 containerd[2011]: time="2026-03-14T00:15:33.136874115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 14 00:15:33.146509 containerd[2011]: time="2026-03-14T00:15:33.146458983Z" level=info msg="CreateContainer within sandbox \"22fd1d1849e8fe1b640e59e8290d089e418e8391e1827f85e277ecf7c65d4cb8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 14 00:15:33.168087 containerd[2011]: time="2026-03-14T00:15:33.168028683Z" level=info msg="CreateContainer within sandbox \"22fd1d1849e8fe1b640e59e8290d089e418e8391e1827f85e277ecf7c65d4cb8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"544dddebad4551d4a7f028b1e9546a1d398ccdbd6eef6db32c8a712e4710d971\"" Mar 14 00:15:33.170995 containerd[2011]: time="2026-03-14T00:15:33.170940231Z" level=info msg="StartContainer for \"544dddebad4551d4a7f028b1e9546a1d398ccdbd6eef6db32c8a712e4710d971\"" Mar 14 00:15:33.247177 systemd[1]: Started cri-containerd-544dddebad4551d4a7f028b1e9546a1d398ccdbd6eef6db32c8a712e4710d971.scope - libcontainer container 544dddebad4551d4a7f028b1e9546a1d398ccdbd6eef6db32c8a712e4710d971. 
Mar 14 00:15:33.298070 containerd[2011]: time="2026-03-14T00:15:33.297877084Z" level=info msg="StartContainer for \"544dddebad4551d4a7f028b1e9546a1d398ccdbd6eef6db32c8a712e4710d971\" returns successfully" Mar 14 00:15:33.888290 kubelet[3430]: E0314 00:15:33.886741 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xhrv" podUID="0e308c40-a8ad-497a-822b-a95b9df4915b" Mar 14 00:15:35.179597 containerd[2011]: time="2026-03-14T00:15:35.179102249Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:15:35.184313 systemd[1]: cri-containerd-544dddebad4551d4a7f028b1e9546a1d398ccdbd6eef6db32c8a712e4710d971.scope: Deactivated successfully. Mar 14 00:15:35.187029 systemd[1]: cri-containerd-544dddebad4551d4a7f028b1e9546a1d398ccdbd6eef6db32c8a712e4710d971.scope: Consumed 1.049s CPU time. Mar 14 00:15:35.230528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-544dddebad4551d4a7f028b1e9546a1d398ccdbd6eef6db32c8a712e4710d971-rootfs.mount: Deactivated successfully. 
Mar 14 00:15:35.233116 containerd[2011]: time="2026-03-14T00:15:35.232977018Z" level=info msg="shim disconnected" id=544dddebad4551d4a7f028b1e9546a1d398ccdbd6eef6db32c8a712e4710d971 namespace=k8s.io Mar 14 00:15:35.233116 containerd[2011]: time="2026-03-14T00:15:35.233081238Z" level=warning msg="cleaning up after shim disconnected" id=544dddebad4551d4a7f028b1e9546a1d398ccdbd6eef6db32c8a712e4710d971 namespace=k8s.io Mar 14 00:15:35.233116 containerd[2011]: time="2026-03-14T00:15:35.233106930Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:35.242098 kubelet[3430]: I0314 00:15:35.242027 3430 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 14 00:15:35.330449 systemd[1]: Created slice kubepods-burstable-pode2fe8cef_b474_4b32_815f_59d01d17b696.slice - libcontainer container kubepods-burstable-pode2fe8cef_b474_4b32_815f_59d01d17b696.slice. Mar 14 00:15:35.357553 systemd[1]: Created slice kubepods-besteffort-pod219e5ba8_ebab_482b_96b4_7af0503f271c.slice - libcontainer container kubepods-besteffort-pod219e5ba8_ebab_482b_96b4_7af0503f271c.slice. Mar 14 00:15:35.377859 systemd[1]: Created slice kubepods-burstable-pod3826a231_0f38_4b81_9031_95274d5b9189.slice - libcontainer container kubepods-burstable-pod3826a231_0f38_4b81_9031_95274d5b9189.slice. Mar 14 00:15:35.394255 systemd[1]: Created slice kubepods-besteffort-pod70f1e795_c100_4dc4_b6f8_c1179e68e9a9.slice - libcontainer container kubepods-besteffort-pod70f1e795_c100_4dc4_b6f8_c1179e68e9a9.slice. 
Mar 14 00:15:35.417988 kubelet[3430]: I0314 00:15:35.417909 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2fe8cef-b474-4b32-815f-59d01d17b696-config-volume\") pod \"coredns-66bc5c9577-rccd6\" (UID: \"e2fe8cef-b474-4b32-815f-59d01d17b696\") " pod="kube-system/coredns-66bc5c9577-rccd6" Mar 14 00:15:35.417988 kubelet[3430]: I0314 00:15:35.417974 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mml4c\" (UniqueName: \"kubernetes.io/projected/e2fe8cef-b474-4b32-815f-59d01d17b696-kube-api-access-mml4c\") pod \"coredns-66bc5c9577-rccd6\" (UID: \"e2fe8cef-b474-4b32-815f-59d01d17b696\") " pod="kube-system/coredns-66bc5c9577-rccd6" Mar 14 00:15:35.418197 kubelet[3430]: I0314 00:15:35.418016 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/219e5ba8-ebab-482b-96b4-7af0503f271c-tigera-ca-bundle\") pod \"calico-kube-controllers-68785684b4-j7pqm\" (UID: \"219e5ba8-ebab-482b-96b4-7af0503f271c\") " pod="calico-system/calico-kube-controllers-68785684b4-j7pqm" Mar 14 00:15:35.418197 kubelet[3430]: I0314 00:15:35.418054 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5w2\" (UniqueName: \"kubernetes.io/projected/219e5ba8-ebab-482b-96b4-7af0503f271c-kube-api-access-zq5w2\") pod \"calico-kube-controllers-68785684b4-j7pqm\" (UID: \"219e5ba8-ebab-482b-96b4-7af0503f271c\") " pod="calico-system/calico-kube-controllers-68785684b4-j7pqm" Mar 14 00:15:35.418197 kubelet[3430]: I0314 00:15:35.418092 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-nginx-config\") pod \"whisker-656d55fdc8-g69c4\" 
(UID: \"70f1e795-c100-4dc4-b6f8-c1179e68e9a9\") " pod="calico-system/whisker-656d55fdc8-g69c4" Mar 14 00:15:35.418197 kubelet[3430]: I0314 00:15:35.418128 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-whisker-backend-key-pair\") pod \"whisker-656d55fdc8-g69c4\" (UID: \"70f1e795-c100-4dc4-b6f8-c1179e68e9a9\") " pod="calico-system/whisker-656d55fdc8-g69c4" Mar 14 00:15:35.418197 kubelet[3430]: I0314 00:15:35.418173 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95ad4bc9-cde4-4484-81ed-f6e09950a754-config\") pod \"goldmane-cccfbd5cf-pbgth\" (UID: \"95ad4bc9-cde4-4484-81ed-f6e09950a754\") " pod="calico-system/goldmane-cccfbd5cf-pbgth" Mar 14 00:15:35.418489 kubelet[3430]: I0314 00:15:35.418211 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xws8\" (UniqueName: \"kubernetes.io/projected/3826a231-0f38-4b81-9031-95274d5b9189-kube-api-access-2xws8\") pod \"coredns-66bc5c9577-whvxh\" (UID: \"3826a231-0f38-4b81-9031-95274d5b9189\") " pod="kube-system/coredns-66bc5c9577-whvxh" Mar 14 00:15:35.418489 kubelet[3430]: I0314 00:15:35.418250 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-whisker-ca-bundle\") pod \"whisker-656d55fdc8-g69c4\" (UID: \"70f1e795-c100-4dc4-b6f8-c1179e68e9a9\") " pod="calico-system/whisker-656d55fdc8-g69c4" Mar 14 00:15:35.418489 kubelet[3430]: I0314 00:15:35.418291 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd4tw\" (UniqueName: 
\"kubernetes.io/projected/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-kube-api-access-qd4tw\") pod \"whisker-656d55fdc8-g69c4\" (UID: \"70f1e795-c100-4dc4-b6f8-c1179e68e9a9\") " pod="calico-system/whisker-656d55fdc8-g69c4" Mar 14 00:15:35.418489 kubelet[3430]: I0314 00:15:35.418328 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95ad4bc9-cde4-4484-81ed-f6e09950a754-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-pbgth\" (UID: \"95ad4bc9-cde4-4484-81ed-f6e09950a754\") " pod="calico-system/goldmane-cccfbd5cf-pbgth" Mar 14 00:15:35.418489 kubelet[3430]: I0314 00:15:35.418365 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/95ad4bc9-cde4-4484-81ed-f6e09950a754-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-pbgth\" (UID: \"95ad4bc9-cde4-4484-81ed-f6e09950a754\") " pod="calico-system/goldmane-cccfbd5cf-pbgth" Mar 14 00:15:35.420075 kubelet[3430]: I0314 00:15:35.418403 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dfe8f338-e4ed-49e1-8e8d-196f56df8d36-calico-apiserver-certs\") pod \"calico-apiserver-65b4c4f55c-x5m6q\" (UID: \"dfe8f338-e4ed-49e1-8e8d-196f56df8d36\") " pod="calico-system/calico-apiserver-65b4c4f55c-x5m6q" Mar 14 00:15:35.420075 kubelet[3430]: I0314 00:15:35.418444 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9vlt\" (UniqueName: \"kubernetes.io/projected/ef193aa7-7886-492e-84a7-367d7c11360a-kube-api-access-z9vlt\") pod \"calico-apiserver-65b4c4f55c-t5bf4\" (UID: \"ef193aa7-7886-492e-84a7-367d7c11360a\") " pod="calico-system/calico-apiserver-65b4c4f55c-t5bf4" Mar 14 00:15:35.420075 kubelet[3430]: I0314 00:15:35.418482 3430 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ef193aa7-7886-492e-84a7-367d7c11360a-calico-apiserver-certs\") pod \"calico-apiserver-65b4c4f55c-t5bf4\" (UID: \"ef193aa7-7886-492e-84a7-367d7c11360a\") " pod="calico-system/calico-apiserver-65b4c4f55c-t5bf4" Mar 14 00:15:35.420075 kubelet[3430]: I0314 00:15:35.418518 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd8l5\" (UniqueName: \"kubernetes.io/projected/95ad4bc9-cde4-4484-81ed-f6e09950a754-kube-api-access-rd8l5\") pod \"goldmane-cccfbd5cf-pbgth\" (UID: \"95ad4bc9-cde4-4484-81ed-f6e09950a754\") " pod="calico-system/goldmane-cccfbd5cf-pbgth" Mar 14 00:15:35.420075 kubelet[3430]: I0314 00:15:35.418554 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3826a231-0f38-4b81-9031-95274d5b9189-config-volume\") pod \"coredns-66bc5c9577-whvxh\" (UID: \"3826a231-0f38-4b81-9031-95274d5b9189\") " pod="kube-system/coredns-66bc5c9577-whvxh" Mar 14 00:15:35.419815 systemd[1]: Created slice kubepods-besteffort-poddfe8f338_e4ed_49e1_8e8d_196f56df8d36.slice - libcontainer container kubepods-besteffort-poddfe8f338_e4ed_49e1_8e8d_196f56df8d36.slice. Mar 14 00:15:35.422737 kubelet[3430]: I0314 00:15:35.418594 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45w98\" (UniqueName: \"kubernetes.io/projected/dfe8f338-e4ed-49e1-8e8d-196f56df8d36-kube-api-access-45w98\") pod \"calico-apiserver-65b4c4f55c-x5m6q\" (UID: \"dfe8f338-e4ed-49e1-8e8d-196f56df8d36\") " pod="calico-system/calico-apiserver-65b4c4f55c-x5m6q" Mar 14 00:15:35.437766 systemd[1]: Created slice kubepods-besteffort-podef193aa7_7886_492e_84a7_367d7c11360a.slice - libcontainer container kubepods-besteffort-podef193aa7_7886_492e_84a7_367d7c11360a.slice. 
Mar 14 00:15:35.456638 systemd[1]: Created slice kubepods-besteffort-pod95ad4bc9_cde4_4484_81ed_f6e09950a754.slice - libcontainer container kubepods-besteffort-pod95ad4bc9_cde4_4484_81ed_f6e09950a754.slice. Mar 14 00:15:35.649861 containerd[2011]: time="2026-03-14T00:15:35.649775408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rccd6,Uid:e2fe8cef-b474-4b32-815f-59d01d17b696,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:35.678110 containerd[2011]: time="2026-03-14T00:15:35.677322572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68785684b4-j7pqm,Uid:219e5ba8-ebab-482b-96b4-7af0503f271c,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:35.689079 containerd[2011]: time="2026-03-14T00:15:35.688926416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-whvxh,Uid:3826a231-0f38-4b81-9031-95274d5b9189,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:35.712133 containerd[2011]: time="2026-03-14T00:15:35.712036616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-656d55fdc8-g69c4,Uid:70f1e795-c100-4dc4-b6f8-c1179e68e9a9,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:35.733184 containerd[2011]: time="2026-03-14T00:15:35.732638840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b4c4f55c-x5m6q,Uid:dfe8f338-e4ed-49e1-8e8d-196f56df8d36,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:35.751586 containerd[2011]: time="2026-03-14T00:15:35.751531424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b4c4f55c-t5bf4,Uid:ef193aa7-7886-492e-84a7-367d7c11360a,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:35.769227 containerd[2011]: time="2026-03-14T00:15:35.769144244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-pbgth,Uid:95ad4bc9-cde4-4484-81ed-f6e09950a754,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:35.912790 systemd[1]: Created slice 
kubepods-besteffort-pod0e308c40_a8ad_497a_822b_a95b9df4915b.slice - libcontainer container kubepods-besteffort-pod0e308c40_a8ad_497a_822b_a95b9df4915b.slice. Mar 14 00:15:35.926180 containerd[2011]: time="2026-03-14T00:15:35.926122317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xhrv,Uid:0e308c40-a8ad-497a-822b-a95b9df4915b,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:36.329300 containerd[2011]: time="2026-03-14T00:15:36.329229475Z" level=info msg="CreateContainer within sandbox \"22fd1d1849e8fe1b640e59e8290d089e418e8391e1827f85e277ecf7c65d4cb8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 14 00:15:36.388859 containerd[2011]: time="2026-03-14T00:15:36.386620699Z" level=error msg="Failed to destroy network for sandbox \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.394893 containerd[2011]: time="2026-03-14T00:15:36.393711667Z" level=error msg="encountered an error cleaning up failed sandbox \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.394893 containerd[2011]: time="2026-03-14T00:15:36.393818323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xhrv,Uid:0e308c40-a8ad-497a-822b-a95b9df4915b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Mar 14 00:15:36.395594 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8-shm.mount: Deactivated successfully. Mar 14 00:15:36.408355 kubelet[3430]: E0314 00:15:36.404521 3430 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.408355 kubelet[3430]: E0314 00:15:36.404621 3430 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4xhrv" Mar 14 00:15:36.408355 kubelet[3430]: E0314 00:15:36.404653 3430 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4xhrv" Mar 14 00:15:36.410393 kubelet[3430]: E0314 00:15:36.404729 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4xhrv_calico-system(0e308c40-a8ad-497a-822b-a95b9df4915b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-4xhrv_calico-system(0e308c40-a8ad-497a-822b-a95b9df4915b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4xhrv" podUID="0e308c40-a8ad-497a-822b-a95b9df4915b" Mar 14 00:15:36.414931 containerd[2011]: time="2026-03-14T00:15:36.413982644Z" level=error msg="Failed to destroy network for sandbox \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.419516 containerd[2011]: time="2026-03-14T00:15:36.419435204Z" level=error msg="encountered an error cleaning up failed sandbox \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.419694 containerd[2011]: time="2026-03-14T00:15:36.419552768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rccd6,Uid:e2fe8cef-b474-4b32-815f-59d01d17b696,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.421890 kubelet[3430]: E0314 00:15:36.420053 3430 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.421890 kubelet[3430]: E0314 00:15:36.420129 3430 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rccd6" Mar 14 00:15:36.421890 kubelet[3430]: E0314 00:15:36.420162 3430 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rccd6" Mar 14 00:15:36.422197 kubelet[3430]: E0314 00:15:36.420239 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-rccd6_kube-system(e2fe8cef-b474-4b32-815f-59d01d17b696)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-rccd6_kube-system(e2fe8cef-b474-4b32-815f-59d01d17b696)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rccd6" podUID="e2fe8cef-b474-4b32-815f-59d01d17b696" Mar 14 00:15:36.426975 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa-shm.mount: Deactivated successfully. Mar 14 00:15:36.459089 containerd[2011]: time="2026-03-14T00:15:36.459015476Z" level=error msg="Failed to destroy network for sandbox \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.459695 containerd[2011]: time="2026-03-14T00:15:36.459630668Z" level=error msg="encountered an error cleaning up failed sandbox \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.459974 containerd[2011]: time="2026-03-14T00:15:36.459724604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b4c4f55c-x5m6q,Uid:dfe8f338-e4ed-49e1-8e8d-196f56df8d36,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.460891 kubelet[3430]: E0314 00:15:36.460326 3430 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.460891 kubelet[3430]: E0314 00:15:36.460405 3430 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-65b4c4f55c-x5m6q" Mar 14 00:15:36.460891 kubelet[3430]: E0314 00:15:36.460438 3430 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-65b4c4f55c-x5m6q" Mar 14 00:15:36.462654 kubelet[3430]: E0314 00:15:36.460984 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65b4c4f55c-x5m6q_calico-system(dfe8f338-e4ed-49e1-8e8d-196f56df8d36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65b4c4f55c-x5m6q_calico-system(dfe8f338-e4ed-49e1-8e8d-196f56df8d36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-65b4c4f55c-x5m6q" podUID="dfe8f338-e4ed-49e1-8e8d-196f56df8d36" Mar 14 00:15:36.475005 
containerd[2011]: time="2026-03-14T00:15:36.474931772Z" level=info msg="CreateContainer within sandbox \"22fd1d1849e8fe1b640e59e8290d089e418e8391e1827f85e277ecf7c65d4cb8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b0d0c4f0071f3de37976c289649851f2142547545e07e65cc718e6828f02839a\"" Mar 14 00:15:36.477551 containerd[2011]: time="2026-03-14T00:15:36.477343928Z" level=info msg="StartContainer for \"b0d0c4f0071f3de37976c289649851f2142547545e07e65cc718e6828f02839a\"" Mar 14 00:15:36.486228 containerd[2011]: time="2026-03-14T00:15:36.486158348Z" level=error msg="Failed to destroy network for sandbox \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.489406 containerd[2011]: time="2026-03-14T00:15:36.489346592Z" level=error msg="Failed to destroy network for sandbox \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.489893 containerd[2011]: time="2026-03-14T00:15:36.489777356Z" level=error msg="encountered an error cleaning up failed sandbox \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.490091 containerd[2011]: time="2026-03-14T00:15:36.489957176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68785684b4-j7pqm,Uid:219e5ba8-ebab-482b-96b4-7af0503f271c,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.490481 kubelet[3430]: E0314 00:15:36.490244 3430 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.490481 kubelet[3430]: E0314 00:15:36.490330 3430 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68785684b4-j7pqm" Mar 14 00:15:36.490481 kubelet[3430]: E0314 00:15:36.490366 3430 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68785684b4-j7pqm" Mar 14 00:15:36.491359 containerd[2011]: time="2026-03-14T00:15:36.489794312Z" level=error msg="Failed to destroy network for sandbox \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.491727 kubelet[3430]: E0314 00:15:36.490452 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68785684b4-j7pqm_calico-system(219e5ba8-ebab-482b-96b4-7af0503f271c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68785684b4-j7pqm_calico-system(219e5ba8-ebab-482b-96b4-7af0503f271c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68785684b4-j7pqm" podUID="219e5ba8-ebab-482b-96b4-7af0503f271c" Mar 14 00:15:36.492816 containerd[2011]: time="2026-03-14T00:15:36.491818508Z" level=error msg="Failed to destroy network for sandbox \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.494179 containerd[2011]: time="2026-03-14T00:15:36.494010296Z" level=error msg="encountered an error cleaning up failed sandbox \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.494448 containerd[2011]: time="2026-03-14T00:15:36.494133596Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-whvxh,Uid:3826a231-0f38-4b81-9031-95274d5b9189,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.495526 containerd[2011]: time="2026-03-14T00:15:36.495304160Z" level=error msg="encountered an error cleaning up failed sandbox \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.495526 containerd[2011]: time="2026-03-14T00:15:36.495401300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b4c4f55c-t5bf4,Uid:ef193aa7-7886-492e-84a7-367d7c11360a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.496581 kubelet[3430]: E0314 00:15:36.496082 3430 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.496581 kubelet[3430]: E0314 00:15:36.496324 3430 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-65b4c4f55c-t5bf4" Mar 14 00:15:36.496581 kubelet[3430]: E0314 00:15:36.496387 3430 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-65b4c4f55c-t5bf4" Mar 14 00:15:36.496581 kubelet[3430]: E0314 00:15:36.496466 3430 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.496953 kubelet[3430]: E0314 00:15:36.496518 3430 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-whvxh" Mar 14 00:15:36.496953 kubelet[3430]: E0314 00:15:36.496548 3430 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-whvxh" Mar 14 00:15:36.496953 kubelet[3430]: E0314 00:15:36.496605 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-whvxh_kube-system(3826a231-0f38-4b81-9031-95274d5b9189)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-whvxh_kube-system(3826a231-0f38-4b81-9031-95274d5b9189)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-whvxh" podUID="3826a231-0f38-4b81-9031-95274d5b9189" Mar 14 00:15:36.497610 kubelet[3430]: E0314 00:15:36.496499 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65b4c4f55c-t5bf4_calico-system(ef193aa7-7886-492e-84a7-367d7c11360a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65b4c4f55c-t5bf4_calico-system(ef193aa7-7886-492e-84a7-367d7c11360a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-65b4c4f55c-t5bf4" podUID="ef193aa7-7886-492e-84a7-367d7c11360a" Mar 14 00:15:36.499258 containerd[2011]: 
time="2026-03-14T00:15:36.498558056Z" level=error msg="encountered an error cleaning up failed sandbox \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.500068 containerd[2011]: time="2026-03-14T00:15:36.499368776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-pbgth,Uid:95ad4bc9-cde4-4484-81ed-f6e09950a754,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.503240 kubelet[3430]: E0314 00:15:36.503176 3430 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.503810 kubelet[3430]: E0314 00:15:36.503489 3430 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-pbgth" Mar 14 00:15:36.503810 kubelet[3430]: E0314 00:15:36.503574 3430 kuberuntime_manager.go:1343] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-pbgth" Mar 14 00:15:36.503810 kubelet[3430]: E0314 00:15:36.503748 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-pbgth_calico-system(95ad4bc9-cde4-4484-81ed-f6e09950a754)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-pbgth_calico-system(95ad4bc9-cde4-4484-81ed-f6e09950a754)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-pbgth" podUID="95ad4bc9-cde4-4484-81ed-f6e09950a754" Mar 14 00:15:36.510904 containerd[2011]: time="2026-03-14T00:15:36.510613544Z" level=error msg="Failed to destroy network for sandbox \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.511623 containerd[2011]: time="2026-03-14T00:15:36.511344944Z" level=error msg="encountered an error cleaning up failed sandbox \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 14 00:15:36.511623 containerd[2011]: time="2026-03-14T00:15:36.511445048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-656d55fdc8-g69c4,Uid:70f1e795-c100-4dc4-b6f8-c1179e68e9a9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.512876 kubelet[3430]: E0314 00:15:36.512532 3430 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:36.512876 kubelet[3430]: E0314 00:15:36.512650 3430 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-656d55fdc8-g69c4" Mar 14 00:15:36.512876 kubelet[3430]: E0314 00:15:36.512733 3430 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-656d55fdc8-g69c4" Mar 14 
00:15:36.513158 kubelet[3430]: E0314 00:15:36.512985 3430 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-656d55fdc8-g69c4_calico-system(70f1e795-c100-4dc4-b6f8-c1179e68e9a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-656d55fdc8-g69c4_calico-system(70f1e795-c100-4dc4-b6f8-c1179e68e9a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-656d55fdc8-g69c4" podUID="70f1e795-c100-4dc4-b6f8-c1179e68e9a9" Mar 14 00:15:36.558141 systemd[1]: Started cri-containerd-b0d0c4f0071f3de37976c289649851f2142547545e07e65cc718e6828f02839a.scope - libcontainer container b0d0c4f0071f3de37976c289649851f2142547545e07e65cc718e6828f02839a. 
Mar 14 00:15:36.650377 containerd[2011]: time="2026-03-14T00:15:36.650127645Z" level=info msg="StartContainer for \"b0d0c4f0071f3de37976c289649851f2142547545e07e65cc718e6828f02839a\" returns successfully" Mar 14 00:15:37.218140 containerd[2011]: time="2026-03-14T00:15:37.218057948Z" level=info msg="StopPodSandbox for \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\"" Mar 14 00:15:37.218480 containerd[2011]: time="2026-03-14T00:15:37.218403068Z" level=info msg="Ensure that sandbox 36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e in task-service has been cleanup successfully" Mar 14 00:15:37.219027 kubelet[3430]: I0314 00:15:37.218972 3430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Mar 14 00:15:37.230221 kubelet[3430]: I0314 00:15:37.230107 3430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Mar 14 00:15:37.235311 containerd[2011]: time="2026-03-14T00:15:37.234418844Z" level=info msg="StopPodSandbox for \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\"" Mar 14 00:15:37.235311 containerd[2011]: time="2026-03-14T00:15:37.234735596Z" level=info msg="Ensure that sandbox b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259 in task-service has been cleanup successfully" Mar 14 00:15:37.236714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95494911.mount: Deactivated successfully. Mar 14 00:15:37.236930 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db-shm.mount: Deactivated successfully. Mar 14 00:15:37.237075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539-shm.mount: Deactivated successfully. 
Mar 14 00:15:37.237229 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c-shm.mount: Deactivated successfully. Mar 14 00:15:37.237368 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e-shm.mount: Deactivated successfully. Mar 14 00:15:37.237515 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665-shm.mount: Deactivated successfully. Mar 14 00:15:37.238399 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259-shm.mount: Deactivated successfully. Mar 14 00:15:37.283058 kubelet[3430]: I0314 00:15:37.282619 3430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Mar 14 00:15:37.290628 containerd[2011]: time="2026-03-14T00:15:37.289123412Z" level=info msg="StopPodSandbox for \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\"" Mar 14 00:15:37.290628 containerd[2011]: time="2026-03-14T00:15:37.289426460Z" level=info msg="Ensure that sandbox d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539 in task-service has been cleanup successfully" Mar 14 00:15:37.296173 kubelet[3430]: I0314 00:15:37.296110 3430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Mar 14 00:15:37.297589 containerd[2011]: time="2026-03-14T00:15:37.297522728Z" level=info msg="StopPodSandbox for \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\"" Mar 14 00:15:37.298937 containerd[2011]: time="2026-03-14T00:15:37.298756304Z" level=info msg="Ensure that sandbox ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c in task-service has been cleanup 
successfully" Mar 14 00:15:37.301446 kubelet[3430]: I0314 00:15:37.301256 3430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Mar 14 00:15:37.304943 containerd[2011]: time="2026-03-14T00:15:37.304614500Z" level=info msg="StopPodSandbox for \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\"" Mar 14 00:15:37.305098 containerd[2011]: time="2026-03-14T00:15:37.304952972Z" level=info msg="Ensure that sandbox 476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665 in task-service has been cleanup successfully" Mar 14 00:15:37.330892 kubelet[3430]: I0314 00:15:37.330023 3430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Mar 14 00:15:37.333038 containerd[2011]: time="2026-03-14T00:15:37.332879252Z" level=info msg="StopPodSandbox for \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\"" Mar 14 00:15:37.333575 containerd[2011]: time="2026-03-14T00:15:37.333192884Z" level=info msg="Ensure that sandbox 069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8 in task-service has been cleanup successfully" Mar 14 00:15:37.362913 kubelet[3430]: I0314 00:15:37.362621 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wr8b4" podStartSLOduration=5.568135891 podStartE2EDuration="20.362598548s" podCreationTimestamp="2026-03-14 00:15:17 +0000 UTC" firstStartedPulling="2026-03-14 00:15:18.344645918 +0000 UTC m=+30.775545262" lastFinishedPulling="2026-03-14 00:15:33.139108575 +0000 UTC m=+45.570007919" observedRunningTime="2026-03-14 00:15:37.361773788 +0000 UTC m=+49.792673156" watchObservedRunningTime="2026-03-14 00:15:37.362598548 +0000 UTC m=+49.793497892" Mar 14 00:15:37.371568 kubelet[3430]: I0314 00:15:37.371399 3430 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Mar 14 00:15:37.379663 containerd[2011]: time="2026-03-14T00:15:37.379577492Z" level=info msg="StopPodSandbox for \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\"" Mar 14 00:15:37.380211 containerd[2011]: time="2026-03-14T00:15:37.379917104Z" level=info msg="Ensure that sandbox 1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db in task-service has been cleanup successfully" Mar 14 00:15:37.386281 kubelet[3430]: I0314 00:15:37.386051 3430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Mar 14 00:15:37.392797 containerd[2011]: time="2026-03-14T00:15:37.392582936Z" level=info msg="StopPodSandbox for \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\"" Mar 14 00:15:37.396226 containerd[2011]: time="2026-03-14T00:15:37.396147200Z" level=info msg="Ensure that sandbox 336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa in task-service has been cleanup successfully" Mar 14 00:15:38.344525 containerd[2011]: 2026-03-14 00:15:38.063 [INFO][4645] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Mar 14 00:15:38.344525 containerd[2011]: 2026-03-14 00:15:38.063 [INFO][4645] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" iface="eth0" netns="/var/run/netns/cni-19c0785e-dd54-a875-3b77-afa6fbabf8ed" Mar 14 00:15:38.344525 containerd[2011]: 2026-03-14 00:15:38.064 [INFO][4645] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" iface="eth0" netns="/var/run/netns/cni-19c0785e-dd54-a875-3b77-afa6fbabf8ed" Mar 14 00:15:38.344525 containerd[2011]: 2026-03-14 00:15:38.065 [INFO][4645] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" iface="eth0" netns="/var/run/netns/cni-19c0785e-dd54-a875-3b77-afa6fbabf8ed" Mar 14 00:15:38.344525 containerd[2011]: 2026-03-14 00:15:38.065 [INFO][4645] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Mar 14 00:15:38.344525 containerd[2011]: 2026-03-14 00:15:38.065 [INFO][4645] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Mar 14 00:15:38.344525 containerd[2011]: 2026-03-14 00:15:38.275 [INFO][4737] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" HandleID="k8s-pod-network.069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Workload="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:38.344525 containerd[2011]: 2026-03-14 00:15:38.275 [INFO][4737] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:38.344525 containerd[2011]: 2026-03-14 00:15:38.304 [INFO][4737] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:38.344525 containerd[2011]: 2026-03-14 00:15:38.324 [WARNING][4737] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" HandleID="k8s-pod-network.069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Workload="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:38.344525 containerd[2011]: 2026-03-14 00:15:38.324 [INFO][4737] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" HandleID="k8s-pod-network.069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Workload="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:38.344525 containerd[2011]: 2026-03-14 00:15:38.329 [INFO][4737] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:38.344525 containerd[2011]: 2026-03-14 00:15:38.338 [INFO][4645] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Mar 14 00:15:38.349617 systemd[1]: run-netns-cni\x2d19c0785e\x2ddd54\x2da875\x2d3b77\x2dafa6fbabf8ed.mount: Deactivated successfully. 
Mar 14 00:15:38.357706 containerd[2011]: time="2026-03-14T00:15:38.350586573Z" level=info msg="TearDown network for sandbox \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\" successfully" Mar 14 00:15:38.357706 containerd[2011]: time="2026-03-14T00:15:38.350632785Z" level=info msg="StopPodSandbox for \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\" returns successfully" Mar 14 00:15:38.359384 containerd[2011]: time="2026-03-14T00:15:38.359212797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xhrv,Uid:0e308c40-a8ad-497a-822b-a95b9df4915b,Namespace:calico-system,Attempt:1,}" Mar 14 00:15:38.363987 containerd[2011]: 2026-03-14 00:15:37.916 [INFO][4577] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Mar 14 00:15:38.363987 containerd[2011]: 2026-03-14 00:15:37.918 [INFO][4577] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" iface="eth0" netns="/var/run/netns/cni-2eceb5a3-6ff7-9722-f56d-efcabb8ec06d" Mar 14 00:15:38.363987 containerd[2011]: 2026-03-14 00:15:37.919 [INFO][4577] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" iface="eth0" netns="/var/run/netns/cni-2eceb5a3-6ff7-9722-f56d-efcabb8ec06d" Mar 14 00:15:38.363987 containerd[2011]: 2026-03-14 00:15:37.921 [INFO][4577] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" iface="eth0" netns="/var/run/netns/cni-2eceb5a3-6ff7-9722-f56d-efcabb8ec06d" Mar 14 00:15:38.363987 containerd[2011]: 2026-03-14 00:15:37.922 [INFO][4577] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Mar 14 00:15:38.363987 containerd[2011]: 2026-03-14 00:15:37.922 [INFO][4577] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Mar 14 00:15:38.363987 containerd[2011]: 2026-03-14 00:15:38.263 [INFO][4712] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" HandleID="k8s-pod-network.b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Workload="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:38.363987 containerd[2011]: 2026-03-14 00:15:38.263 [INFO][4712] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:38.363987 containerd[2011]: 2026-03-14 00:15:38.263 [INFO][4712] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:38.363987 containerd[2011]: 2026-03-14 00:15:38.294 [WARNING][4712] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" HandleID="k8s-pod-network.b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Workload="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:38.363987 containerd[2011]: 2026-03-14 00:15:38.295 [INFO][4712] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" HandleID="k8s-pod-network.b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Workload="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:38.363987 containerd[2011]: 2026-03-14 00:15:38.304 [INFO][4712] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:38.363987 containerd[2011]: 2026-03-14 00:15:38.340 [INFO][4577] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Mar 14 00:15:38.370855 containerd[2011]: time="2026-03-14T00:15:38.369819585Z" level=info msg="TearDown network for sandbox \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\" successfully" Mar 14 00:15:38.370855 containerd[2011]: time="2026-03-14T00:15:38.369924897Z" level=info msg="StopPodSandbox for \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\" returns successfully" Mar 14 00:15:38.373719 systemd[1]: run-netns-cni\x2d2eceb5a3\x2d6ff7\x2d9722\x2df56d\x2defcabb8ec06d.mount: Deactivated successfully. 
Mar 14 00:15:38.382655 containerd[2011]: time="2026-03-14T00:15:38.381389961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68785684b4-j7pqm,Uid:219e5ba8-ebab-482b-96b4-7af0503f271c,Namespace:calico-system,Attempt:1,}" Mar 14 00:15:38.411665 containerd[2011]: 2026-03-14 00:15:37.941 [INFO][4576] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Mar 14 00:15:38.411665 containerd[2011]: 2026-03-14 00:15:37.942 [INFO][4576] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" iface="eth0" netns="/var/run/netns/cni-3be9e201-cfd5-a61d-3114-437f7f6cd4f1" Mar 14 00:15:38.411665 containerd[2011]: 2026-03-14 00:15:37.943 [INFO][4576] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" iface="eth0" netns="/var/run/netns/cni-3be9e201-cfd5-a61d-3114-437f7f6cd4f1" Mar 14 00:15:38.411665 containerd[2011]: 2026-03-14 00:15:37.944 [INFO][4576] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" iface="eth0" netns="/var/run/netns/cni-3be9e201-cfd5-a61d-3114-437f7f6cd4f1" Mar 14 00:15:38.411665 containerd[2011]: 2026-03-14 00:15:37.944 [INFO][4576] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Mar 14 00:15:38.411665 containerd[2011]: 2026-03-14 00:15:37.945 [INFO][4576] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Mar 14 00:15:38.411665 containerd[2011]: 2026-03-14 00:15:38.264 [INFO][4718] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" HandleID="k8s-pod-network.36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Workload="ip--172--31--26--39-k8s-whisker--656d55fdc8--g69c4-eth0" Mar 14 00:15:38.411665 containerd[2011]: 2026-03-14 00:15:38.275 [INFO][4718] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:38.411665 containerd[2011]: 2026-03-14 00:15:38.332 [INFO][4718] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:38.411665 containerd[2011]: 2026-03-14 00:15:38.379 [WARNING][4718] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" HandleID="k8s-pod-network.36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Workload="ip--172--31--26--39-k8s-whisker--656d55fdc8--g69c4-eth0" Mar 14 00:15:38.411665 containerd[2011]: 2026-03-14 00:15:38.379 [INFO][4718] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" HandleID="k8s-pod-network.36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Workload="ip--172--31--26--39-k8s-whisker--656d55fdc8--g69c4-eth0" Mar 14 00:15:38.411665 containerd[2011]: 2026-03-14 00:15:38.387 [INFO][4718] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:38.411665 containerd[2011]: 2026-03-14 00:15:38.400 [INFO][4576] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Mar 14 00:15:38.413799 containerd[2011]: time="2026-03-14T00:15:38.413443941Z" level=info msg="TearDown network for sandbox \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\" successfully" Mar 14 00:15:38.415738 containerd[2011]: time="2026-03-14T00:15:38.415687317Z" level=info msg="StopPodSandbox for \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\" returns successfully" Mar 14 00:15:38.422114 systemd[1]: run-netns-cni\x2d3be9e201\x2dcfd5\x2da61d\x2d3114\x2d437f7f6cd4f1.mount: Deactivated successfully. Mar 14 00:15:38.508022 containerd[2011]: 2026-03-14 00:15:37.958 [INFO][4611] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Mar 14 00:15:38.508022 containerd[2011]: 2026-03-14 00:15:37.958 [INFO][4611] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" iface="eth0" netns="/var/run/netns/cni-a96663dd-0da5-e9fb-f5f2-cc64203b5c2a" Mar 14 00:15:38.508022 containerd[2011]: 2026-03-14 00:15:37.960 [INFO][4611] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" iface="eth0" netns="/var/run/netns/cni-a96663dd-0da5-e9fb-f5f2-cc64203b5c2a" Mar 14 00:15:38.508022 containerd[2011]: 2026-03-14 00:15:37.966 [INFO][4611] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" iface="eth0" netns="/var/run/netns/cni-a96663dd-0da5-e9fb-f5f2-cc64203b5c2a" Mar 14 00:15:38.508022 containerd[2011]: 2026-03-14 00:15:37.969 [INFO][4611] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Mar 14 00:15:38.508022 containerd[2011]: 2026-03-14 00:15:37.970 [INFO][4611] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Mar 14 00:15:38.508022 containerd[2011]: 2026-03-14 00:15:38.297 [INFO][4723] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" HandleID="k8s-pod-network.d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:38.508022 containerd[2011]: 2026-03-14 00:15:38.300 [INFO][4723] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:38.508022 containerd[2011]: 2026-03-14 00:15:38.390 [INFO][4723] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:38.508022 containerd[2011]: 2026-03-14 00:15:38.436 [WARNING][4723] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" HandleID="k8s-pod-network.d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:38.508022 containerd[2011]: 2026-03-14 00:15:38.439 [INFO][4723] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" HandleID="k8s-pod-network.d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:38.508022 containerd[2011]: 2026-03-14 00:15:38.452 [INFO][4723] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:38.508022 containerd[2011]: 2026-03-14 00:15:38.488 [INFO][4611] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Mar 14 00:15:38.510909 containerd[2011]: time="2026-03-14T00:15:38.509695822Z" level=info msg="TearDown network for sandbox \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\" successfully" Mar 14 00:15:38.511149 containerd[2011]: time="2026-03-14T00:15:38.509746954Z" level=info msg="StopPodSandbox for \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\" returns successfully" Mar 14 00:15:38.517233 containerd[2011]: time="2026-03-14T00:15:38.517153150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b4c4f55c-t5bf4,Uid:ef193aa7-7886-492e-84a7-367d7c11360a,Namespace:calico-system,Attempt:1,}" Mar 14 00:15:38.557595 containerd[2011]: 2026-03-14 00:15:38.084 [INFO][4664] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Mar 14 00:15:38.557595 containerd[2011]: 2026-03-14 00:15:38.084 [INFO][4664] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" iface="eth0" netns="/var/run/netns/cni-f1946640-e616-f06c-c462-50c34837a591" Mar 14 00:15:38.557595 containerd[2011]: 2026-03-14 00:15:38.084 [INFO][4664] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" iface="eth0" netns="/var/run/netns/cni-f1946640-e616-f06c-c462-50c34837a591" Mar 14 00:15:38.557595 containerd[2011]: 2026-03-14 00:15:38.093 [INFO][4664] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" iface="eth0" netns="/var/run/netns/cni-f1946640-e616-f06c-c462-50c34837a591" Mar 14 00:15:38.557595 containerd[2011]: 2026-03-14 00:15:38.094 [INFO][4664] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Mar 14 00:15:38.557595 containerd[2011]: 2026-03-14 00:15:38.094 [INFO][4664] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Mar 14 00:15:38.557595 containerd[2011]: 2026-03-14 00:15:38.335 [INFO][4740] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" HandleID="k8s-pod-network.1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Workload="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:38.557595 containerd[2011]: 2026-03-14 00:15:38.337 [INFO][4740] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:38.557595 containerd[2011]: 2026-03-14 00:15:38.453 [INFO][4740] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:38.557595 containerd[2011]: 2026-03-14 00:15:38.517 [WARNING][4740] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" HandleID="k8s-pod-network.1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Workload="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:38.557595 containerd[2011]: 2026-03-14 00:15:38.517 [INFO][4740] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" HandleID="k8s-pod-network.1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Workload="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:38.557595 containerd[2011]: 2026-03-14 00:15:38.525 [INFO][4740] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:38.557595 containerd[2011]: 2026-03-14 00:15:38.541 [INFO][4664] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Mar 14 00:15:38.559801 containerd[2011]: time="2026-03-14T00:15:38.559258834Z" level=info msg="TearDown network for sandbox \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\" successfully" Mar 14 00:15:38.559801 containerd[2011]: time="2026-03-14T00:15:38.559304410Z" level=info msg="StopPodSandbox for \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\" returns successfully" Mar 14 00:15:38.563751 kubelet[3430]: I0314 00:15:38.563437 3430 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-whisker-backend-key-pair\") pod \"70f1e795-c100-4dc4-b6f8-c1179e68e9a9\" (UID: \"70f1e795-c100-4dc4-b6f8-c1179e68e9a9\") " Mar 14 00:15:38.563751 kubelet[3430]: I0314 00:15:38.563518 3430 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-nginx-config\") 
pod \"70f1e795-c100-4dc4-b6f8-c1179e68e9a9\" (UID: \"70f1e795-c100-4dc4-b6f8-c1179e68e9a9\") " Mar 14 00:15:38.563751 kubelet[3430]: I0314 00:15:38.563581 3430 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-whisker-ca-bundle\") pod \"70f1e795-c100-4dc4-b6f8-c1179e68e9a9\" (UID: \"70f1e795-c100-4dc4-b6f8-c1179e68e9a9\") " Mar 14 00:15:38.563751 kubelet[3430]: I0314 00:15:38.563621 3430 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd4tw\" (UniqueName: \"kubernetes.io/projected/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-kube-api-access-qd4tw\") pod \"70f1e795-c100-4dc4-b6f8-c1179e68e9a9\" (UID: \"70f1e795-c100-4dc4-b6f8-c1179e68e9a9\") " Mar 14 00:15:38.567369 containerd[2011]: time="2026-03-14T00:15:38.566211226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-pbgth,Uid:95ad4bc9-cde4-4484-81ed-f6e09950a754,Namespace:calico-system,Attempt:1,}" Mar 14 00:15:38.583904 kubelet[3430]: I0314 00:15:38.583507 3430 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "70f1e795-c100-4dc4-b6f8-c1179e68e9a9" (UID: "70f1e795-c100-4dc4-b6f8-c1179e68e9a9"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:15:38.586851 kubelet[3430]: I0314 00:15:38.586759 3430 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "70f1e795-c100-4dc4-b6f8-c1179e68e9a9" (UID: "70f1e795-c100-4dc4-b6f8-c1179e68e9a9"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:15:38.590249 kubelet[3430]: I0314 00:15:38.590179 3430 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-kube-api-access-qd4tw" (OuterVolumeSpecName: "kube-api-access-qd4tw") pod "70f1e795-c100-4dc4-b6f8-c1179e68e9a9" (UID: "70f1e795-c100-4dc4-b6f8-c1179e68e9a9"). InnerVolumeSpecName "kube-api-access-qd4tw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:15:38.615965 kubelet[3430]: I0314 00:15:38.615694 3430 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "70f1e795-c100-4dc4-b6f8-c1179e68e9a9" (UID: "70f1e795-c100-4dc4-b6f8-c1179e68e9a9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:15:38.631874 containerd[2011]: 2026-03-14 00:15:37.984 [INFO][4623] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Mar 14 00:15:38.631874 containerd[2011]: 2026-03-14 00:15:37.984 [INFO][4623] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" iface="eth0" netns="/var/run/netns/cni-210b0169-288c-522d-7561-effeebccbb6d" Mar 14 00:15:38.631874 containerd[2011]: 2026-03-14 00:15:37.985 [INFO][4623] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" iface="eth0" netns="/var/run/netns/cni-210b0169-288c-522d-7561-effeebccbb6d" Mar 14 00:15:38.631874 containerd[2011]: 2026-03-14 00:15:37.989 [INFO][4623] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" iface="eth0" netns="/var/run/netns/cni-210b0169-288c-522d-7561-effeebccbb6d" Mar 14 00:15:38.631874 containerd[2011]: 2026-03-14 00:15:37.992 [INFO][4623] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Mar 14 00:15:38.631874 containerd[2011]: 2026-03-14 00:15:37.992 [INFO][4623] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Mar 14 00:15:38.631874 containerd[2011]: 2026-03-14 00:15:38.363 [INFO][4725] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" HandleID="k8s-pod-network.476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:38.631874 containerd[2011]: 2026-03-14 00:15:38.363 [INFO][4725] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:38.631874 containerd[2011]: 2026-03-14 00:15:38.529 [INFO][4725] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:38.631874 containerd[2011]: 2026-03-14 00:15:38.568 [WARNING][4725] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" HandleID="k8s-pod-network.476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:38.631874 containerd[2011]: 2026-03-14 00:15:38.569 [INFO][4725] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" HandleID="k8s-pod-network.476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:38.631874 containerd[2011]: 2026-03-14 00:15:38.582 [INFO][4725] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:38.631874 containerd[2011]: 2026-03-14 00:15:38.603 [INFO][4623] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Mar 14 00:15:38.633952 containerd[2011]: time="2026-03-14T00:15:38.633564503Z" level=info msg="TearDown network for sandbox \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\" successfully" Mar 14 00:15:38.633952 containerd[2011]: time="2026-03-14T00:15:38.633624755Z" level=info msg="StopPodSandbox for \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\" returns successfully" Mar 14 00:15:38.638995 containerd[2011]: time="2026-03-14T00:15:38.638583263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-whvxh,Uid:3826a231-0f38-4b81-9031-95274d5b9189,Namespace:kube-system,Attempt:1,}" Mar 14 00:15:38.666014 kubelet[3430]: I0314 00:15:38.664285 3430 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-whisker-ca-bundle\") on node \"ip-172-31-26-39\" DevicePath \"\"" Mar 14 00:15:38.666014 kubelet[3430]: I0314 00:15:38.664343 3430 reconciler_common.go:299] "Volume 
detached for volume \"kube-api-access-qd4tw\" (UniqueName: \"kubernetes.io/projected/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-kube-api-access-qd4tw\") on node \"ip-172-31-26-39\" DevicePath \"\"" Mar 14 00:15:38.666014 kubelet[3430]: I0314 00:15:38.664367 3430 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-whisker-backend-key-pair\") on node \"ip-172-31-26-39\" DevicePath \"\"" Mar 14 00:15:38.666014 kubelet[3430]: I0314 00:15:38.664393 3430 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/70f1e795-c100-4dc4-b6f8-c1179e68e9a9-nginx-config\") on node \"ip-172-31-26-39\" DevicePath \"\"" Mar 14 00:15:38.676864 containerd[2011]: 2026-03-14 00:15:38.046 [INFO][4663] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Mar 14 00:15:38.676864 containerd[2011]: 2026-03-14 00:15:38.052 [INFO][4663] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" iface="eth0" netns="/var/run/netns/cni-9864430c-409c-5c9e-bd85-e2b53f90ff18" Mar 14 00:15:38.676864 containerd[2011]: 2026-03-14 00:15:38.056 [INFO][4663] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" iface="eth0" netns="/var/run/netns/cni-9864430c-409c-5c9e-bd85-e2b53f90ff18" Mar 14 00:15:38.676864 containerd[2011]: 2026-03-14 00:15:38.060 [INFO][4663] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" iface="eth0" netns="/var/run/netns/cni-9864430c-409c-5c9e-bd85-e2b53f90ff18" Mar 14 00:15:38.676864 containerd[2011]: 2026-03-14 00:15:38.060 [INFO][4663] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Mar 14 00:15:38.676864 containerd[2011]: 2026-03-14 00:15:38.060 [INFO][4663] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Mar 14 00:15:38.676864 containerd[2011]: 2026-03-14 00:15:38.398 [INFO][4735] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" HandleID="k8s-pod-network.336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:38.676864 containerd[2011]: 2026-03-14 00:15:38.398 [INFO][4735] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:38.676864 containerd[2011]: 2026-03-14 00:15:38.585 [INFO][4735] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:38.676864 containerd[2011]: 2026-03-14 00:15:38.630 [WARNING][4735] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" HandleID="k8s-pod-network.336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:38.676864 containerd[2011]: 2026-03-14 00:15:38.630 [INFO][4735] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" HandleID="k8s-pod-network.336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:38.676864 containerd[2011]: 2026-03-14 00:15:38.636 [INFO][4735] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:38.676864 containerd[2011]: 2026-03-14 00:15:38.658 [INFO][4663] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Mar 14 00:15:38.676864 containerd[2011]: time="2026-03-14T00:15:38.675191795Z" level=info msg="TearDown network for sandbox \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\" successfully" Mar 14 00:15:38.676864 containerd[2011]: time="2026-03-14T00:15:38.675233291Z" level=info msg="StopPodSandbox for \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\" returns successfully" Mar 14 00:15:38.688292 containerd[2011]: time="2026-03-14T00:15:38.686386487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rccd6,Uid:e2fe8cef-b474-4b32-815f-59d01d17b696,Namespace:kube-system,Attempt:1,}" Mar 14 00:15:38.790382 containerd[2011]: 2026-03-14 00:15:38.045 [INFO][4635] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Mar 14 00:15:38.790382 containerd[2011]: 2026-03-14 00:15:38.055 [INFO][4635] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" iface="eth0" netns="/var/run/netns/cni-b5d2b75f-5d0e-5427-ba09-8b27678c2329" Mar 14 00:15:38.790382 containerd[2011]: 2026-03-14 00:15:38.055 [INFO][4635] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" iface="eth0" netns="/var/run/netns/cni-b5d2b75f-5d0e-5427-ba09-8b27678c2329" Mar 14 00:15:38.790382 containerd[2011]: 2026-03-14 00:15:38.062 [INFO][4635] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" iface="eth0" netns="/var/run/netns/cni-b5d2b75f-5d0e-5427-ba09-8b27678c2329" Mar 14 00:15:38.790382 containerd[2011]: 2026-03-14 00:15:38.062 [INFO][4635] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Mar 14 00:15:38.790382 containerd[2011]: 2026-03-14 00:15:38.062 [INFO][4635] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Mar 14 00:15:38.790382 containerd[2011]: 2026-03-14 00:15:38.472 [INFO][4734] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" HandleID="k8s-pod-network.ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:38.790382 containerd[2011]: 2026-03-14 00:15:38.473 [INFO][4734] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:38.790382 containerd[2011]: 2026-03-14 00:15:38.639 [INFO][4734] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:38.790382 containerd[2011]: 2026-03-14 00:15:38.700 [WARNING][4734] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" HandleID="k8s-pod-network.ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:38.790382 containerd[2011]: 2026-03-14 00:15:38.701 [INFO][4734] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" HandleID="k8s-pod-network.ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:38.790382 containerd[2011]: 2026-03-14 00:15:38.709 [INFO][4734] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:38.790382 containerd[2011]: 2026-03-14 00:15:38.743 [INFO][4635] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Mar 14 00:15:38.811176 containerd[2011]: time="2026-03-14T00:15:38.810328487Z" level=info msg="TearDown network for sandbox \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\" successfully" Mar 14 00:15:38.811176 containerd[2011]: time="2026-03-14T00:15:38.810394403Z" level=info msg="StopPodSandbox for \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\" returns successfully" Mar 14 00:15:38.819480 containerd[2011]: time="2026-03-14T00:15:38.819269927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b4c4f55c-x5m6q,Uid:dfe8f338-e4ed-49e1-8e8d-196f56df8d36,Namespace:calico-system,Attempt:1,}" Mar 14 00:15:39.414146 systemd[1]: run-netns-cni\x2df1946640\x2de616\x2df06c\x2dc462\x2d50c34837a591.mount: Deactivated successfully. 
Mar 14 00:15:39.414354 systemd[1]: run-netns-cni\x2da96663dd\x2d0da5\x2de9fb\x2df5f2\x2dcc64203b5c2a.mount: Deactivated successfully. Mar 14 00:15:39.414484 systemd[1]: run-netns-cni\x2db5d2b75f\x2d5d0e\x2d5427\x2dba09\x2d8b27678c2329.mount: Deactivated successfully. Mar 14 00:15:39.414608 systemd[1]: run-netns-cni\x2d210b0169\x2d288c\x2d522d\x2d7561\x2deffeebccbb6d.mount: Deactivated successfully. Mar 14 00:15:39.414728 systemd[1]: run-netns-cni\x2d9864430c\x2d409c\x2d5c9e\x2dbd85\x2de2b53f90ff18.mount: Deactivated successfully. Mar 14 00:15:39.414877 systemd[1]: var-lib-kubelet-pods-70f1e795\x2dc100\x2d4dc4\x2db6f8\x2dc1179e68e9a9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqd4tw.mount: Deactivated successfully. Mar 14 00:15:39.415465 systemd[1]: var-lib-kubelet-pods-70f1e795\x2dc100\x2d4dc4\x2db6f8\x2dc1179e68e9a9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 14 00:15:39.471683 systemd[1]: Removed slice kubepods-besteffort-pod70f1e795_c100_4dc4_b6f8_c1179e68e9a9.slice - libcontainer container kubepods-besteffort-pod70f1e795_c100_4dc4_b6f8_c1179e68e9a9.slice. Mar 14 00:15:39.746918 (udev-worker)[5002]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:15:39.758733 systemd-networkd[1929]: cali338be02951d: Link UP Mar 14 00:15:39.760789 systemd-networkd[1929]: cali338be02951d: Gained carrier Mar 14 00:15:39.813161 systemd[1]: Created slice kubepods-besteffort-pod46dc9080_52ae_4f6b_9a08_239d7ba3b05b.slice - libcontainer container kubepods-besteffort-pod46dc9080_52ae_4f6b_9a08_239d7ba3b05b.slice. 
Mar 14 00:15:39.880876 kubelet[3430]: I0314 00:15:39.879715 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46dc9080-52ae-4f6b-9a08-239d7ba3b05b-whisker-ca-bundle\") pod \"whisker-7b9d4484f9-xvjs4\" (UID: \"46dc9080-52ae-4f6b-9a08-239d7ba3b05b\") " pod="calico-system/whisker-7b9d4484f9-xvjs4" Mar 14 00:15:39.880876 kubelet[3430]: I0314 00:15:39.879988 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhqxn\" (UniqueName: \"kubernetes.io/projected/46dc9080-52ae-4f6b-9a08-239d7ba3b05b-kube-api-access-zhqxn\") pod \"whisker-7b9d4484f9-xvjs4\" (UID: \"46dc9080-52ae-4f6b-9a08-239d7ba3b05b\") " pod="calico-system/whisker-7b9d4484f9-xvjs4" Mar 14 00:15:39.880876 kubelet[3430]: I0314 00:15:39.880073 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/46dc9080-52ae-4f6b-9a08-239d7ba3b05b-whisker-backend-key-pair\") pod \"whisker-7b9d4484f9-xvjs4\" (UID: \"46dc9080-52ae-4f6b-9a08-239d7ba3b05b\") " pod="calico-system/whisker-7b9d4484f9-xvjs4" Mar 14 00:15:39.880876 kubelet[3430]: I0314 00:15:39.880129 3430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/46dc9080-52ae-4f6b-9a08-239d7ba3b05b-nginx-config\") pod \"whisker-7b9d4484f9-xvjs4\" (UID: \"46dc9080-52ae-4f6b-9a08-239d7ba3b05b\") " pod="calico-system/whisker-7b9d4484f9-xvjs4" Mar 14 00:15:39.905778 kubelet[3430]: I0314 00:15:39.905706 3430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f1e795-c100-4dc4-b6f8-c1179e68e9a9" path="/var/lib/kubelet/pods/70f1e795-c100-4dc4-b6f8-c1179e68e9a9/volumes" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:38.720 [ERROR][4770] cni-plugin/utils.go 116: File does 
not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:38.794 [INFO][4770] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0 csi-node-driver- calico-system 0e308c40-a8ad-497a-822b-a95b9df4915b 952 0 2026-03-14 00:15:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-26-39 csi-node-driver-4xhrv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali338be02951d [] [] }} ContainerID="0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" Namespace="calico-system" Pod="csi-node-driver-4xhrv" WorkloadEndpoint="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:38.794 [INFO][4770] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" Namespace="calico-system" Pod="csi-node-driver-4xhrv" WorkloadEndpoint="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.215 [INFO][4849] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" HandleID="k8s-pod-network.0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" Workload="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.316 [INFO][4849] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" HandleID="k8s-pod-network.0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" Workload="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400040cb90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-39", "pod":"csi-node-driver-4xhrv", "timestamp":"2026-03-14 00:15:39.215662245 +0000 UTC"}, Hostname:"ip-172-31-26-39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000166160)} Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.319 [INFO][4849] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.329 [INFO][4849] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.329 [INFO][4849] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-39' Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.360 [INFO][4849] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" host="ip-172-31-26-39" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.490 [INFO][4849] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-39" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.544 [INFO][4849] ipam/ipam.go 526: Trying affinity for 192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.557 [INFO][4849] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.571 [INFO][4849] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.571 [INFO][4849] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" host="ip-172-31-26-39" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.580 [INFO][4849] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397 Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.634 [INFO][4849] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" host="ip-172-31-26-39" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.681 [INFO][4849] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.129/26] block=192.168.64.128/26 
handle="k8s-pod-network.0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" host="ip-172-31-26-39" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.681 [INFO][4849] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.129/26] handle="k8s-pod-network.0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" host="ip-172-31-26-39" Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.681 [INFO][4849] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:39.922166 containerd[2011]: 2026-03-14 00:15:39.681 [INFO][4849] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.129/26] IPv6=[] ContainerID="0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" HandleID="k8s-pod-network.0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" Workload="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:39.925924 containerd[2011]: 2026-03-14 00:15:39.696 [INFO][4770] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" Namespace="calico-system" Pod="csi-node-driver-4xhrv" WorkloadEndpoint="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e308c40-a8ad-497a-822b-a95b9df4915b", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"", Pod:"csi-node-driver-4xhrv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali338be02951d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:39.925924 containerd[2011]: 2026-03-14 00:15:39.696 [INFO][4770] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.129/32] ContainerID="0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" Namespace="calico-system" Pod="csi-node-driver-4xhrv" WorkloadEndpoint="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:39.925924 containerd[2011]: 2026-03-14 00:15:39.696 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali338be02951d ContainerID="0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" Namespace="calico-system" Pod="csi-node-driver-4xhrv" WorkloadEndpoint="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:39.925924 containerd[2011]: 2026-03-14 00:15:39.783 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" Namespace="calico-system" Pod="csi-node-driver-4xhrv" WorkloadEndpoint="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:39.925924 containerd[2011]: 2026-03-14 00:15:39.787 [INFO][4770] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" Namespace="calico-system" Pod="csi-node-driver-4xhrv" WorkloadEndpoint="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e308c40-a8ad-497a-822b-a95b9df4915b", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397", Pod:"csi-node-driver-4xhrv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali338be02951d", MAC:"1e:77:9a:a2:7c:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:39.925924 containerd[2011]: 2026-03-14 00:15:39.909 [INFO][4770] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397" 
Namespace="calico-system" Pod="csi-node-driver-4xhrv" WorkloadEndpoint="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:40.013110 containerd[2011]: time="2026-03-14T00:15:40.010577373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:40.013110 containerd[2011]: time="2026-03-14T00:15:40.010859133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:40.013110 containerd[2011]: time="2026-03-14T00:15:40.010914009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.013110 containerd[2011]: time="2026-03-14T00:15:40.011083041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.039655 (udev-worker)[5001]: Network interface NamePolicy= disabled on kernel command line. 
Mar 14 00:15:40.053344 systemd-networkd[1929]: cali832e8274aea: Link UP Mar 14 00:15:40.061028 systemd-networkd[1929]: cali832e8274aea: Gained carrier Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:38.703 [ERROR][4785] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:38.812 [INFO][4785] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0 calico-kube-controllers-68785684b4- calico-system 219e5ba8-ebab-482b-96b4-7af0503f271c 948 0 2026-03-14 00:15:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68785684b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-39 calico-kube-controllers-68785684b4-j7pqm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali832e8274aea [] [] }} ContainerID="7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" Namespace="calico-system" Pod="calico-kube-controllers-68785684b4-j7pqm" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-" Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:38.818 [INFO][4785] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" Namespace="calico-system" Pod="calico-kube-controllers-68785684b4-j7pqm" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:39.408 [INFO][4854] ipam/ipam_plugin.go 235: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" HandleID="k8s-pod-network.7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" Workload="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:39.540 [INFO][4854] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" HandleID="k8s-pod-network.7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" Workload="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000120d40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-39", "pod":"calico-kube-controllers-68785684b4-j7pqm", "timestamp":"2026-03-14 00:15:39.40873627 +0000 UTC"}, Hostname:"ip-172-31-26-39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002d8c60)} Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:39.541 [INFO][4854] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:39.683 [INFO][4854] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:39.683 [INFO][4854] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-39' Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:39.741 [INFO][4854] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" host="ip-172-31-26-39" Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:39.920 [INFO][4854] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-39" Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:39.943 [INFO][4854] ipam/ipam.go 526: Trying affinity for 192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:39.951 [INFO][4854] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:39.956 [INFO][4854] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:39.956 [INFO][4854] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" host="ip-172-31-26-39" Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:39.964 [INFO][4854] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648 Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:39.976 [INFO][4854] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" host="ip-172-31-26-39" Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:40.017 [INFO][4854] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.130/26] block=192.168.64.128/26 
handle="k8s-pod-network.7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" host="ip-172-31-26-39" Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:40.017 [INFO][4854] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.130/26] handle="k8s-pod-network.7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" host="ip-172-31-26-39" Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:40.017 [INFO][4854] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:40.134744 containerd[2011]: 2026-03-14 00:15:40.017 [INFO][4854] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.130/26] IPv6=[] ContainerID="7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" HandleID="k8s-pod-network.7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" Workload="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:40.140184 containerd[2011]: 2026-03-14 00:15:40.027 [INFO][4785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" Namespace="calico-system" Pod="calico-kube-controllers-68785684b4-j7pqm" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0", GenerateName:"calico-kube-controllers-68785684b4-", Namespace:"calico-system", SelfLink:"", UID:"219e5ba8-ebab-482b-96b4-7af0503f271c", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68785684b4", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"", Pod:"calico-kube-controllers-68785684b4-j7pqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali832e8274aea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.140184 containerd[2011]: 2026-03-14 00:15:40.027 [INFO][4785] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.130/32] ContainerID="7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" Namespace="calico-system" Pod="calico-kube-controllers-68785684b4-j7pqm" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:40.140184 containerd[2011]: 2026-03-14 00:15:40.029 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali832e8274aea ContainerID="7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" Namespace="calico-system" Pod="calico-kube-controllers-68785684b4-j7pqm" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:40.140184 containerd[2011]: 2026-03-14 00:15:40.072 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" Namespace="calico-system" Pod="calico-kube-controllers-68785684b4-j7pqm" 
WorkloadEndpoint="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:40.140184 containerd[2011]: 2026-03-14 00:15:40.081 [INFO][4785] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" Namespace="calico-system" Pod="calico-kube-controllers-68785684b4-j7pqm" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0", GenerateName:"calico-kube-controllers-68785684b4-", Namespace:"calico-system", SelfLink:"", UID:"219e5ba8-ebab-482b-96b4-7af0503f271c", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68785684b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648", Pod:"calico-kube-controllers-68785684b4-j7pqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali832e8274aea", MAC:"2e:93:a5:6e:43:27", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.140184 containerd[2011]: 2026-03-14 00:15:40.121 [INFO][4785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648" Namespace="calico-system" Pod="calico-kube-controllers-68785684b4-j7pqm" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:40.145527 containerd[2011]: time="2026-03-14T00:15:40.141099094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9d4484f9-xvjs4,Uid:46dc9080-52ae-4f6b-9a08-239d7ba3b05b,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:40.172709 systemd[1]: Started cri-containerd-0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397.scope - libcontainer container 0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397. Mar 14 00:15:40.289241 systemd-networkd[1929]: cali37c1525e69f: Link UP Mar 14 00:15:40.292138 systemd-networkd[1929]: cali37c1525e69f: Gained carrier Mar 14 00:15:40.342514 containerd[2011]: time="2026-03-14T00:15:40.342231671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:40.342514 containerd[2011]: time="2026-03-14T00:15:40.342367295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:40.342514 containerd[2011]: time="2026-03-14T00:15:40.342414011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.343003 containerd[2011]: time="2026-03-14T00:15:40.342611555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.372355 systemd[1]: run-containerd-runc-k8s.io-0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397-runc.hqDNym.mount: Deactivated successfully. Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:38.968 [ERROR][4825] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:39.047 [INFO][4825] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0 goldmane-cccfbd5cf- calico-system 95ad4bc9-cde4-4484-81ed-f6e09950a754 956 0 2026-03-14 00:15:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-26-39 goldmane-cccfbd5cf-pbgth eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali37c1525e69f [] [] }} ContainerID="1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgth" WorkloadEndpoint="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-" Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:39.047 [INFO][4825] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgth" WorkloadEndpoint="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:39.591 [INFO][4914] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" 
HandleID="k8s-pod-network.1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" Workload="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:39.678 [INFO][4914] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" HandleID="k8s-pod-network.1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" Workload="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000381cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-39", "pod":"goldmane-cccfbd5cf-pbgth", "timestamp":"2026-03-14 00:15:39.591151151 +0000 UTC"}, Hostname:"ip-172-31-26-39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186dc0)} Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:39.678 [INFO][4914] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.018 [INFO][4914] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.018 [INFO][4914] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-39' Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.046 [INFO][4914] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" host="ip-172-31-26-39" Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.104 [INFO][4914] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-39" Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.124 [INFO][4914] ipam/ipam.go 526: Trying affinity for 192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.145 [INFO][4914] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.158 [INFO][4914] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.159 [INFO][4914] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" host="ip-172-31-26-39" Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.171 [INFO][4914] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.197 [INFO][4914] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" host="ip-172-31-26-39" Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.219 [INFO][4914] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.131/26] block=192.168.64.128/26 
handle="k8s-pod-network.1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" host="ip-172-31-26-39" Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.219 [INFO][4914] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.131/26] handle="k8s-pod-network.1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" host="ip-172-31-26-39" Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.220 [INFO][4914] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:40.421423 containerd[2011]: 2026-03-14 00:15:40.223 [INFO][4914] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.131/26] IPv6=[] ContainerID="1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" HandleID="k8s-pod-network.1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" Workload="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:40.422800 containerd[2011]: 2026-03-14 00:15:40.252 [INFO][4825] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgth" WorkloadEndpoint="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"95ad4bc9-cde4-4484-81ed-f6e09950a754", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"", Pod:"goldmane-cccfbd5cf-pbgth", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali37c1525e69f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.422800 containerd[2011]: 2026-03-14 00:15:40.254 [INFO][4825] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.131/32] ContainerID="1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgth" WorkloadEndpoint="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:40.422800 containerd[2011]: 2026-03-14 00:15:40.254 [INFO][4825] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37c1525e69f ContainerID="1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgth" WorkloadEndpoint="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:40.422800 containerd[2011]: 2026-03-14 00:15:40.315 [INFO][4825] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgth" WorkloadEndpoint="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:40.422800 containerd[2011]: 2026-03-14 00:15:40.322 [INFO][4825] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" 
Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgth" WorkloadEndpoint="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"95ad4bc9-cde4-4484-81ed-f6e09950a754", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b", Pod:"goldmane-cccfbd5cf-pbgth", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali37c1525e69f", MAC:"92:9e:b8:4b:89:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.422800 containerd[2011]: 2026-03-14 00:15:40.402 [INFO][4825] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgth" WorkloadEndpoint="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:40.472494 
systemd-networkd[1929]: cali6f4491b4956: Link UP Mar 14 00:15:40.477992 systemd-networkd[1929]: cali6f4491b4956: Gained carrier Mar 14 00:15:40.497310 systemd[1]: Started cri-containerd-7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648.scope - libcontainer container 7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648. Mar 14 00:15:40.565997 containerd[2011]: time="2026-03-14T00:15:40.563802540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:40.565997 containerd[2011]: time="2026-03-14T00:15:40.563951532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:40.565997 containerd[2011]: time="2026-03-14T00:15:40.563981304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.565997 containerd[2011]: time="2026-03-14T00:15:40.564170208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:38.915 [ERROR][4808] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:39.004 [INFO][4808] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0 calico-apiserver-65b4c4f55c- calico-system ef193aa7-7886-492e-84a7-367d7c11360a 949 0 2026-03-14 00:15:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65b4c4f55c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-39 calico-apiserver-65b4c4f55c-t5bf4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali6f4491b4956 [] [] }} ContainerID="ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-t5bf4" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-" Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:39.014 [INFO][4808] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-t5bf4" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:39.578 [INFO][4901] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" 
HandleID="k8s-pod-network.ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:39.714 [INFO][4901] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" HandleID="k8s-pod-network.ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004caa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-39", "pod":"calico-apiserver-65b4c4f55c-t5bf4", "timestamp":"2026-03-14 00:15:39.578312771 +0000 UTC"}, Hostname:"ip-172-31-26-39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000636160)} Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:39.765 [INFO][4901] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.222 [INFO][4901] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.224 [INFO][4901] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-39' Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.242 [INFO][4901] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" host="ip-172-31-26-39" Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.270 [INFO][4901] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-39" Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.302 [INFO][4901] ipam/ipam.go 526: Trying affinity for 192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.324 [INFO][4901] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.333 [INFO][4901] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.333 [INFO][4901] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" host="ip-172-31-26-39" Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.349 [INFO][4901] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.365 [INFO][4901] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" host="ip-172-31-26-39" Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.415 [INFO][4901] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.132/26] block=192.168.64.128/26 
handle="k8s-pod-network.ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" host="ip-172-31-26-39" Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.415 [INFO][4901] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.132/26] handle="k8s-pod-network.ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" host="ip-172-31-26-39" Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.415 [INFO][4901] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:40.570903 containerd[2011]: 2026-03-14 00:15:40.415 [INFO][4901] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.132/26] IPv6=[] ContainerID="ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" HandleID="k8s-pod-network.ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:40.576492 containerd[2011]: 2026-03-14 00:15:40.450 [INFO][4808] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-t5bf4" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0", GenerateName:"calico-apiserver-65b4c4f55c-", Namespace:"calico-system", SelfLink:"", UID:"ef193aa7-7886-492e-84a7-367d7c11360a", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b4c4f55c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"", Pod:"calico-apiserver-65b4c4f55c-t5bf4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6f4491b4956", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.576492 containerd[2011]: 2026-03-14 00:15:40.452 [INFO][4808] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.132/32] ContainerID="ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-t5bf4" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:40.576492 containerd[2011]: 2026-03-14 00:15:40.452 [INFO][4808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f4491b4956 ContainerID="ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-t5bf4" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:40.576492 containerd[2011]: 2026-03-14 00:15:40.487 [INFO][4808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-t5bf4" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:40.576492 containerd[2011]: 2026-03-14 00:15:40.495 [INFO][4808] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-t5bf4" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0", GenerateName:"calico-apiserver-65b4c4f55c-", Namespace:"calico-system", SelfLink:"", UID:"ef193aa7-7886-492e-84a7-367d7c11360a", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b4c4f55c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a", Pod:"calico-apiserver-65b4c4f55c-t5bf4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6f4491b4956", MAC:"9a:e3:44:4d:22:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.576492 containerd[2011]: 2026-03-14 00:15:40.549 [INFO][4808] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-t5bf4" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:40.647914 containerd[2011]: time="2026-03-14T00:15:40.647298157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xhrv,Uid:0e308c40-a8ad-497a-822b-a95b9df4915b,Namespace:calico-system,Attempt:1,} returns sandbox id \"0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397\"" Mar 14 00:15:40.653936 containerd[2011]: time="2026-03-14T00:15:40.653491381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:15:40.669277 systemd[1]: Started cri-containerd-1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b.scope - libcontainer container 1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b. Mar 14 00:15:40.712696 systemd-networkd[1929]: calib901f2abe28: Link UP Mar 14 00:15:40.729878 systemd-networkd[1929]: calib901f2abe28: Gained carrier Mar 14 00:15:40.809865 containerd[2011]: time="2026-03-14T00:15:40.809223001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:40.809865 containerd[2011]: time="2026-03-14T00:15:40.809339917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:40.810151 containerd[2011]: time="2026-03-14T00:15:40.809367013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.810151 containerd[2011]: time="2026-03-14T00:15:40.809522701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:38.996 [ERROR][4834] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:39.104 [INFO][4834] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0 coredns-66bc5c9577- kube-system 3826a231-0f38-4b81-9031-95274d5b9189 951 0 2026-03-14 00:14:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-39 coredns-66bc5c9577-whvxh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib901f2abe28 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" Namespace="kube-system" Pod="coredns-66bc5c9577-whvxh" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-" Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:39.106 [INFO][4834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" Namespace="kube-system" Pod="coredns-66bc5c9577-whvxh" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:39.634 [INFO][4928] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" HandleID="k8s-pod-network.e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" 
Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:39.729 [INFO][4928] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" HandleID="k8s-pod-network.e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a46a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-39", "pod":"coredns-66bc5c9577-whvxh", "timestamp":"2026-03-14 00:15:39.634889748 +0000 UTC"}, Hostname:"ip-172-31-26-39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400051e580)} Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:39.772 [INFO][4928] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.416 [INFO][4928] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.434 [INFO][4928] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-39' Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.447 [INFO][4928] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" host="ip-172-31-26-39" Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.480 [INFO][4928] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-39" Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.536 [INFO][4928] ipam/ipam.go 526: Trying affinity for 192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.552 [INFO][4928] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.561 [INFO][4928] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.561 [INFO][4928] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" host="ip-172-31-26-39" Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.579 [INFO][4928] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.593 [INFO][4928] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" host="ip-172-31-26-39" Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.617 [INFO][4928] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.133/26] block=192.168.64.128/26 
handle="k8s-pod-network.e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" host="ip-172-31-26-39" Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.617 [INFO][4928] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.133/26] handle="k8s-pod-network.e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" host="ip-172-31-26-39" Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.617 [INFO][4928] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:40.820672 containerd[2011]: 2026-03-14 00:15:40.617 [INFO][4928] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.133/26] IPv6=[] ContainerID="e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" HandleID="k8s-pod-network.e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:40.826429 containerd[2011]: 2026-03-14 00:15:40.677 [INFO][4834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" Namespace="kube-system" Pod="coredns-66bc5c9577-whvxh" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3826a231-0f38-4b81-9031-95274d5b9189", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"", Pod:"coredns-66bc5c9577-whvxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib901f2abe28", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.826429 containerd[2011]: 2026-03-14 00:15:40.681 [INFO][4834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.133/32] ContainerID="e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" Namespace="kube-system" Pod="coredns-66bc5c9577-whvxh" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:40.826429 containerd[2011]: 2026-03-14 00:15:40.682 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib901f2abe28 ContainerID="e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" Namespace="kube-system" Pod="coredns-66bc5c9577-whvxh" 
WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:40.826429 containerd[2011]: 2026-03-14 00:15:40.728 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" Namespace="kube-system" Pod="coredns-66bc5c9577-whvxh" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:40.827789 containerd[2011]: 2026-03-14 00:15:40.735 [INFO][4834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" Namespace="kube-system" Pod="coredns-66bc5c9577-whvxh" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3826a231-0f38-4b81-9031-95274d5b9189", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a", Pod:"coredns-66bc5c9577-whvxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib901f2abe28", MAC:"9a:23:e1:10:c2:6f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.827789 containerd[2011]: 2026-03-14 00:15:40.791 [INFO][4834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a" Namespace="kube-system" Pod="coredns-66bc5c9577-whvxh" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:40.912784 systemd[1]: Started cri-containerd-ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a.scope - libcontainer container ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a. 
Mar 14 00:15:40.933741 systemd-networkd[1929]: cali3693feb2e58: Link UP Mar 14 00:15:40.958987 systemd-networkd[1929]: cali3693feb2e58: Gained carrier Mar 14 00:15:40.991506 systemd-networkd[1929]: cali338be02951d: Gained IPv6LL Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:39.246 [ERROR][4850] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:39.433 [INFO][4850] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0 coredns-66bc5c9577- kube-system e2fe8cef-b474-4b32-815f-59d01d17b696 953 0 2026-03-14 00:14:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-39 coredns-66bc5c9577-rccd6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3693feb2e58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" Namespace="kube-system" Pod="coredns-66bc5c9577-rccd6" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-" Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:39.434 [INFO][4850] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" Namespace="kube-system" Pod="coredns-66bc5c9577-rccd6" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:39.716 [INFO][4965] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" HandleID="k8s-pod-network.a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:39.896 [INFO][4965] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" HandleID="k8s-pod-network.a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003abae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-39", "pod":"coredns-66bc5c9577-rccd6", "timestamp":"2026-03-14 00:15:39.716849124 +0000 UTC"}, Hostname:"ip-172-31-26-39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400027e000)} Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:39.896 [INFO][4965] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.618 [INFO][4965] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.618 [INFO][4965] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-39' Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.628 [INFO][4965] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" host="ip-172-31-26-39" Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.674 [INFO][4965] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-39" Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.734 [INFO][4965] ipam/ipam.go 526: Trying affinity for 192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.743 [INFO][4965] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.752 [INFO][4965] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.752 [INFO][4965] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" host="ip-172-31-26-39" Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.763 [INFO][4965] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2 Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.782 [INFO][4965] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" host="ip-172-31-26-39" Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.836 [INFO][4965] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.134/26] block=192.168.64.128/26 
handle="k8s-pod-network.a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" host="ip-172-31-26-39" Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.839 [INFO][4965] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.134/26] handle="k8s-pod-network.a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" host="ip-172-31-26-39" Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.842 [INFO][4965] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:41.039475 containerd[2011]: 2026-03-14 00:15:40.844 [INFO][4965] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.134/26] IPv6=[] ContainerID="a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" HandleID="k8s-pod-network.a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:41.043799 containerd[2011]: 2026-03-14 00:15:40.869 [INFO][4850] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" Namespace="kube-system" Pod="coredns-66bc5c9577-rccd6" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e2fe8cef-b474-4b32-815f-59d01d17b696", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"", Pod:"coredns-66bc5c9577-rccd6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3693feb2e58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:41.043799 containerd[2011]: 2026-03-14 00:15:40.869 [INFO][4850] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.134/32] ContainerID="a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" Namespace="kube-system" Pod="coredns-66bc5c9577-rccd6" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:41.043799 containerd[2011]: 2026-03-14 00:15:40.869 [INFO][4850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3693feb2e58 ContainerID="a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" Namespace="kube-system" Pod="coredns-66bc5c9577-rccd6" 
WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:41.043799 containerd[2011]: 2026-03-14 00:15:40.979 [INFO][4850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" Namespace="kube-system" Pod="coredns-66bc5c9577-rccd6" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:41.045318 containerd[2011]: 2026-03-14 00:15:40.981 [INFO][4850] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" Namespace="kube-system" Pod="coredns-66bc5c9577-rccd6" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e2fe8cef-b474-4b32-815f-59d01d17b696", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2", Pod:"coredns-66bc5c9577-rccd6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3693feb2e58", MAC:"d6:f4:0b:91:1f:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:41.045318 containerd[2011]: 2026-03-14 00:15:41.025 [INFO][4850] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2" Namespace="kube-system" Pod="coredns-66bc5c9577-rccd6" WorkloadEndpoint="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:41.121868 containerd[2011]: time="2026-03-14T00:15:41.120411527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68785684b4-j7pqm,Uid:219e5ba8-ebab-482b-96b4-7af0503f271c,Namespace:calico-system,Attempt:1,} returns sandbox id \"7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648\"" Mar 14 00:15:41.166039 systemd-networkd[1929]: cali7cf270fc9be: Link UP Mar 14 00:15:41.168948 systemd-networkd[1929]: cali7cf270fc9be: Gained carrier Mar 14 00:15:41.202478 containerd[2011]: time="2026-03-14T00:15:41.202315403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:41.202478 containerd[2011]: time="2026-03-14T00:15:41.202428023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:41.202952 containerd[2011]: time="2026-03-14T00:15:41.202704023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.203788 containerd[2011]: time="2026-03-14T00:15:41.203176919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.239935 containerd[2011]: time="2026-03-14T00:15:41.239425439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-pbgth,Uid:95ad4bc9-cde4-4484-81ed-f6e09950a754,Namespace:calico-system,Attempt:1,} returns sandbox id \"1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b\"" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:39.306 [ERROR][4869] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:39.572 [INFO][4869] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0 calico-apiserver-65b4c4f55c- calico-system dfe8f338-e4ed-49e1-8e8d-196f56df8d36 955 0 2026-03-14 00:15:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65b4c4f55c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-39 calico-apiserver-65b4c4f55c-x5m6q eth0 calico-apiserver [] [] [kns.calico-system 
ksa.calico-system.calico-apiserver] cali7cf270fc9be [] [] }} ContainerID="09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-x5m6q" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:39.573 [INFO][4869] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-x5m6q" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:39.843 [INFO][4989] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" HandleID="k8s-pod-network.09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:39.945 [INFO][4989] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" HandleID="k8s-pod-network.09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000361910), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-39", "pod":"calico-apiserver-65b4c4f55c-x5m6q", "timestamp":"2026-03-14 00:15:39.843150397 +0000 UTC"}, Hostname:"ip-172-31-26-39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002234a0)} Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 
00:15:39.946 [INFO][4989] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:40.840 [INFO][4989] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:40.840 [INFO][4989] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-39' Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:40.853 [INFO][4989] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" host="ip-172-31-26-39" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:40.903 [INFO][4989] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-39" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:40.971 [INFO][4989] ipam/ipam.go 526: Trying affinity for 192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:40.986 [INFO][4989] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:41.008 [INFO][4989] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:41.008 [INFO][4989] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" host="ip-172-31-26-39" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:41.024 [INFO][4989] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163 Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:41.052 [INFO][4989] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.128/26 
handle="k8s-pod-network.09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" host="ip-172-31-26-39" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:41.093 [INFO][4989] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.135/26] block=192.168.64.128/26 handle="k8s-pod-network.09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" host="ip-172-31-26-39" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:41.093 [INFO][4989] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.135/26] handle="k8s-pod-network.09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" host="ip-172-31-26-39" Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:41.093 [INFO][4989] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:41.256655 containerd[2011]: 2026-03-14 00:15:41.093 [INFO][4989] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.135/26] IPv6=[] ContainerID="09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" HandleID="k8s-pod-network.09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:41.257939 containerd[2011]: 2026-03-14 00:15:41.138 [INFO][4869] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-x5m6q" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0", GenerateName:"calico-apiserver-65b4c4f55c-", Namespace:"calico-system", SelfLink:"", UID:"dfe8f338-e4ed-49e1-8e8d-196f56df8d36", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 13, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b4c4f55c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"", Pod:"calico-apiserver-65b4c4f55c-x5m6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7cf270fc9be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:41.257939 containerd[2011]: 2026-03-14 00:15:41.138 [INFO][4869] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.135/32] ContainerID="09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-x5m6q" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:41.257939 containerd[2011]: 2026-03-14 00:15:41.138 [INFO][4869] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cf270fc9be ContainerID="09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-x5m6q" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:41.257939 containerd[2011]: 2026-03-14 00:15:41.188 [INFO][4869] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-x5m6q" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:41.257939 containerd[2011]: 2026-03-14 00:15:41.191 [INFO][4869] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-x5m6q" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0", GenerateName:"calico-apiserver-65b4c4f55c-", Namespace:"calico-system", SelfLink:"", UID:"dfe8f338-e4ed-49e1-8e8d-196f56df8d36", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b4c4f55c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163", Pod:"calico-apiserver-65b4c4f55c-x5m6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7cf270fc9be", MAC:"ea:1f:02:be:54:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:41.257939 containerd[2011]: 2026-03-14 00:15:41.231 [INFO][4869] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163" Namespace="calico-system" Pod="calico-apiserver-65b4c4f55c-x5m6q" WorkloadEndpoint="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:41.281864 containerd[2011]: time="2026-03-14T00:15:41.278953308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:41.285141 containerd[2011]: time="2026-03-14T00:15:41.284444364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:41.285141 containerd[2011]: time="2026-03-14T00:15:41.284531736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.289388 containerd[2011]: time="2026-03-14T00:15:41.288297468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.320769 systemd[1]: Started cri-containerd-e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a.scope - libcontainer container e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a. Mar 14 00:15:41.373571 systemd[1]: run-containerd-runc-k8s.io-1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b-runc.BtxzaG.mount: Deactivated successfully. Mar 14 00:15:41.424516 containerd[2011]: time="2026-03-14T00:15:41.416473608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:41.424516 containerd[2011]: time="2026-03-14T00:15:41.416593620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:41.424516 containerd[2011]: time="2026-03-14T00:15:41.416631864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.428989 containerd[2011]: time="2026-03-14T00:15:41.425773248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.514147 systemd-networkd[1929]: cali22d0f195cfd: Link UP Mar 14 00:15:41.517136 systemd-networkd[1929]: cali22d0f195cfd: Gained carrier Mar 14 00:15:41.521283 systemd[1]: Started cri-containerd-a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2.scope - libcontainer container a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2. 
Mar 14 00:15:41.554217 containerd[2011]: time="2026-03-14T00:15:41.553351405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b4c4f55c-t5bf4,Uid:ef193aa7-7886-492e-84a7-367d7c11360a,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a\"" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:40.535 [ERROR][5067] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:40.610 [INFO][5067] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-eth0 whisker-7b9d4484f9- calico-system 46dc9080-52ae-4f6b-9a08-239d7ba3b05b 978 0 2026-03-14 00:15:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b9d4484f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-26-39 whisker-7b9d4484f9-xvjs4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali22d0f195cfd [] [] }} ContainerID="514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" Namespace="calico-system" Pod="whisker-7b9d4484f9-xvjs4" WorkloadEndpoint="ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:40.610 [INFO][5067] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" Namespace="calico-system" Pod="whisker-7b9d4484f9-xvjs4" WorkloadEndpoint="ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-eth0" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.180 [INFO][5168] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" HandleID="k8s-pod-network.514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" Workload="ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-eth0" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.229 [INFO][5168] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" HandleID="k8s-pod-network.514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" Workload="ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-39", "pod":"whisker-7b9d4484f9-xvjs4", "timestamp":"2026-03-14 00:15:41.179999147 +0000 UTC"}, Hostname:"ip-172-31-26-39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000222580)} Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.229 [INFO][5168] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.230 [INFO][5168] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.232 [INFO][5168] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-39' Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.248 [INFO][5168] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" host="ip-172-31-26-39" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.267 [INFO][5168] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-39" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.290 [INFO][5168] ipam/ipam.go 526: Trying affinity for 192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.299 [INFO][5168] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.322 [INFO][5168] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ip-172-31-26-39" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.325 [INFO][5168] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" host="ip-172-31-26-39" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.333 [INFO][5168] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3 Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.350 [INFO][5168] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" host="ip-172-31-26-39" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.404 [INFO][5168] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.136/26] block=192.168.64.128/26 
handle="k8s-pod-network.514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" host="ip-172-31-26-39" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.404 [INFO][5168] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.136/26] handle="k8s-pod-network.514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" host="ip-172-31-26-39" Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.405 [INFO][5168] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:41.626785 containerd[2011]: 2026-03-14 00:15:41.406 [INFO][5168] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.136/26] IPv6=[] ContainerID="514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" HandleID="k8s-pod-network.514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" Workload="ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-eth0" Mar 14 00:15:41.634293 containerd[2011]: 2026-03-14 00:15:41.449 [INFO][5067] cni-plugin/k8s.go 418: Populated endpoint ContainerID="514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" Namespace="calico-system" Pod="whisker-7b9d4484f9-xvjs4" WorkloadEndpoint="ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-eth0", GenerateName:"whisker-7b9d4484f9-", Namespace:"calico-system", SelfLink:"", UID:"46dc9080-52ae-4f6b-9a08-239d7ba3b05b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b9d4484f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"", Pod:"whisker-7b9d4484f9-xvjs4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali22d0f195cfd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:41.634293 containerd[2011]: 2026-03-14 00:15:41.449 [INFO][5067] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.136/32] ContainerID="514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" Namespace="calico-system" Pod="whisker-7b9d4484f9-xvjs4" WorkloadEndpoint="ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-eth0" Mar 14 00:15:41.634293 containerd[2011]: 2026-03-14 00:15:41.449 [INFO][5067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22d0f195cfd ContainerID="514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" Namespace="calico-system" Pod="whisker-7b9d4484f9-xvjs4" WorkloadEndpoint="ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-eth0" Mar 14 00:15:41.634293 containerd[2011]: 2026-03-14 00:15:41.543 [INFO][5067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" Namespace="calico-system" Pod="whisker-7b9d4484f9-xvjs4" WorkloadEndpoint="ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-eth0" Mar 14 00:15:41.634293 containerd[2011]: 2026-03-14 00:15:41.558 [INFO][5067] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" 
Namespace="calico-system" Pod="whisker-7b9d4484f9-xvjs4" WorkloadEndpoint="ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-eth0", GenerateName:"whisker-7b9d4484f9-", Namespace:"calico-system", SelfLink:"", UID:"46dc9080-52ae-4f6b-9a08-239d7ba3b05b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b9d4484f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3", Pod:"whisker-7b9d4484f9-xvjs4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali22d0f195cfd", MAC:"96:fc:65:80:84:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:41.634293 containerd[2011]: 2026-03-14 00:15:41.603 [INFO][5067] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3" Namespace="calico-system" Pod="whisker-7b9d4484f9-xvjs4" WorkloadEndpoint="ip--172--31--26--39-k8s-whisker--7b9d4484f9--xvjs4-eth0" Mar 14 00:15:41.639451 systemd[1]: 
run-containerd-runc-k8s.io-09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163-runc.chJtel.mount: Deactivated successfully. Mar 14 00:15:41.666611 systemd[1]: Started cri-containerd-09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163.scope - libcontainer container 09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163. Mar 14 00:15:41.684209 containerd[2011]: time="2026-03-14T00:15:41.684152018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-whvxh,Uid:3826a231-0f38-4b81-9031-95274d5b9189,Namespace:kube-system,Attempt:1,} returns sandbox id \"e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a\"" Mar 14 00:15:41.709542 containerd[2011]: time="2026-03-14T00:15:41.709013582Z" level=info msg="CreateContainer within sandbox \"e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:15:41.744935 containerd[2011]: time="2026-03-14T00:15:41.743817662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:41.745646 containerd[2011]: time="2026-03-14T00:15:41.744029426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:41.745646 containerd[2011]: time="2026-03-14T00:15:41.745068698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.746854 containerd[2011]: time="2026-03-14T00:15:41.746235602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.759221 systemd-networkd[1929]: cali832e8274aea: Gained IPv6LL Mar 14 00:15:41.826416 containerd[2011]: time="2026-03-14T00:15:41.826180106Z" level=info msg="CreateContainer within sandbox \"e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6fa3a2ae7ff92a889bb27afaffefb5b326ad69fc5c66c638d126eb2ee75aac2a\"" Mar 14 00:15:41.829011 containerd[2011]: time="2026-03-14T00:15:41.828646586Z" level=info msg="StartContainer for \"6fa3a2ae7ff92a889bb27afaffefb5b326ad69fc5c66c638d126eb2ee75aac2a\"" Mar 14 00:15:41.842454 containerd[2011]: time="2026-03-14T00:15:41.842136866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rccd6,Uid:e2fe8cef-b474-4b32-815f-59d01d17b696,Namespace:kube-system,Attempt:1,} returns sandbox id \"a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2\"" Mar 14 00:15:41.882962 containerd[2011]: time="2026-03-14T00:15:41.880271055Z" level=info msg="CreateContainer within sandbox \"a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:15:41.891460 systemd[1]: Started cri-containerd-514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3.scope - libcontainer container 514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3. Mar 14 00:15:42.014302 systemd[1]: Started cri-containerd-6fa3a2ae7ff92a889bb27afaffefb5b326ad69fc5c66c638d126eb2ee75aac2a.scope - libcontainer container 6fa3a2ae7ff92a889bb27afaffefb5b326ad69fc5c66c638d126eb2ee75aac2a. 
Mar 14 00:15:42.017030 systemd-networkd[1929]: cali6f4491b4956: Gained IPv6LL Mar 14 00:15:42.020401 containerd[2011]: time="2026-03-14T00:15:42.020097983Z" level=info msg="CreateContainer within sandbox \"a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68fb402ffccdeb3a025b1bbc47616e4cf51028c1b007a669241769542352f0cd\"" Mar 14 00:15:42.024860 containerd[2011]: time="2026-03-14T00:15:42.024242663Z" level=info msg="StartContainer for \"68fb402ffccdeb3a025b1bbc47616e4cf51028c1b007a669241769542352f0cd\"" Mar 14 00:15:42.160394 systemd[1]: Started cri-containerd-68fb402ffccdeb3a025b1bbc47616e4cf51028c1b007a669241769542352f0cd.scope - libcontainer container 68fb402ffccdeb3a025b1bbc47616e4cf51028c1b007a669241769542352f0cd. Mar 14 00:15:42.184675 containerd[2011]: time="2026-03-14T00:15:42.184457520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b4c4f55c-x5m6q,Uid:dfe8f338-e4ed-49e1-8e8d-196f56df8d36,Namespace:calico-system,Attempt:1,} returns sandbox id \"09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163\"" Mar 14 00:15:42.200173 containerd[2011]: time="2026-03-14T00:15:42.199955952Z" level=info msg="StartContainer for \"6fa3a2ae7ff92a889bb27afaffefb5b326ad69fc5c66c638d126eb2ee75aac2a\" returns successfully" Mar 14 00:15:42.272285 systemd-networkd[1929]: cali37c1525e69f: Gained IPv6LL Mar 14 00:15:42.294537 containerd[2011]: time="2026-03-14T00:15:42.294451813Z" level=info msg="StartContainer for \"68fb402ffccdeb3a025b1bbc47616e4cf51028c1b007a669241769542352f0cd\" returns successfully" Mar 14 00:15:42.330853 containerd[2011]: time="2026-03-14T00:15:42.330734977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9d4484f9-xvjs4,Uid:46dc9080-52ae-4f6b-9a08-239d7ba3b05b,Namespace:calico-system,Attempt:0,} returns sandbox id \"514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3\"" Mar 14 00:15:42.527231 
systemd-networkd[1929]: cali3693feb2e58: Gained IPv6LL Mar 14 00:15:42.592473 systemd-networkd[1929]: calib901f2abe28: Gained IPv6LL Mar 14 00:15:42.695177 kubelet[3430]: I0314 00:15:42.694072 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-whvxh" podStartSLOduration=50.694048275 podStartE2EDuration="50.694048275s" podCreationTimestamp="2026-03-14 00:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:42.638157566 +0000 UTC m=+55.069056934" watchObservedRunningTime="2026-03-14 00:15:42.694048275 +0000 UTC m=+55.124947607" Mar 14 00:15:42.930353 systemd[1]: Started sshd@7-172.31.26.39:22-68.220.241.50:55208.service - OpenSSH per-connection server daemon (68.220.241.50:55208). Mar 14 00:15:43.078869 kernel: calico-node[4921]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:15:43.168382 systemd-networkd[1929]: cali7cf270fc9be: Gained IPv6LL Mar 14 00:15:43.170935 systemd-networkd[1929]: cali22d0f195cfd: Gained IPv6LL Mar 14 00:15:43.462061 sshd[5534]: Accepted publickey for core from 68.220.241.50 port 55208 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:15:43.465588 sshd[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:43.476752 systemd-logind[1988]: New session 8 of user core. Mar 14 00:15:43.485301 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 14 00:15:44.012874 systemd-networkd[1929]: vxlan.calico: Link UP Mar 14 00:15:44.012889 systemd-networkd[1929]: vxlan.calico: Gained carrier Mar 14 00:15:44.208500 sshd[5534]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:44.229400 systemd[1]: sshd@7-172.31.26.39:22-68.220.241.50:55208.service: Deactivated successfully. Mar 14 00:15:44.244665 systemd[1]: session-8.scope: Deactivated successfully. 
Mar 14 00:15:44.255965 systemd-logind[1988]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:15:44.265497 systemd-logind[1988]: Removed session 8. Mar 14 00:15:44.577158 containerd[2011]: time="2026-03-14T00:15:44.576975796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:44.582887 containerd[2011]: time="2026-03-14T00:15:44.582619348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 14 00:15:44.585879 containerd[2011]: time="2026-03-14T00:15:44.585365704Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:44.592170 containerd[2011]: time="2026-03-14T00:15:44.592115860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:44.595802 containerd[2011]: time="2026-03-14T00:15:44.595742488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 3.941944951s" Mar 14 00:15:44.596303 containerd[2011]: time="2026-03-14T00:15:44.595998724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 14 00:15:44.600055 containerd[2011]: time="2026-03-14T00:15:44.599031352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 14 00:15:44.607398 containerd[2011]: 
time="2026-03-14T00:15:44.607057600Z" level=info msg="CreateContainer within sandbox \"0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:15:44.670162 containerd[2011]: time="2026-03-14T00:15:44.670083785Z" level=info msg="CreateContainer within sandbox \"0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"15549a8125d25b31772783a33da18f243d0a3ca92e9946939bc7f475bfca354d\"" Mar 14 00:15:44.671923 containerd[2011]: time="2026-03-14T00:15:44.671485265Z" level=info msg="StartContainer for \"15549a8125d25b31772783a33da18f243d0a3ca92e9946939bc7f475bfca354d\"" Mar 14 00:15:44.825135 systemd[1]: Started cri-containerd-15549a8125d25b31772783a33da18f243d0a3ca92e9946939bc7f475bfca354d.scope - libcontainer container 15549a8125d25b31772783a33da18f243d0a3ca92e9946939bc7f475bfca354d. Mar 14 00:15:44.907386 containerd[2011]: time="2026-03-14T00:15:44.907333026Z" level=info msg="StartContainer for \"15549a8125d25b31772783a33da18f243d0a3ca92e9946939bc7f475bfca354d\" returns successfully" Mar 14 00:15:46.047278 systemd-networkd[1929]: vxlan.calico: Gained IPv6LL Mar 14 00:15:47.841844 containerd[2011]: time="2026-03-14T00:15:47.841778972Z" level=info msg="StopPodSandbox for \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\"" Mar 14 00:15:48.214980 containerd[2011]: 2026-03-14 00:15:47.990 [WARNING][5730] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0", GenerateName:"calico-apiserver-65b4c4f55c-", Namespace:"calico-system", SelfLink:"", UID:"dfe8f338-e4ed-49e1-8e8d-196f56df8d36", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b4c4f55c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163", Pod:"calico-apiserver-65b4c4f55c-x5m6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7cf270fc9be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:48.214980 containerd[2011]: 2026-03-14 00:15:47.994 [INFO][5730] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Mar 14 00:15:48.214980 containerd[2011]: 2026-03-14 00:15:47.994 [INFO][5730] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" iface="eth0" netns="" Mar 14 00:15:48.214980 containerd[2011]: 2026-03-14 00:15:47.994 [INFO][5730] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Mar 14 00:15:48.214980 containerd[2011]: 2026-03-14 00:15:47.997 [INFO][5730] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Mar 14 00:15:48.214980 containerd[2011]: 2026-03-14 00:15:48.162 [INFO][5741] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" HandleID="k8s-pod-network.ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:48.214980 containerd[2011]: 2026-03-14 00:15:48.162 [INFO][5741] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:48.214980 containerd[2011]: 2026-03-14 00:15:48.162 [INFO][5741] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:48.214980 containerd[2011]: 2026-03-14 00:15:48.194 [WARNING][5741] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" HandleID="k8s-pod-network.ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:48.214980 containerd[2011]: 2026-03-14 00:15:48.194 [INFO][5741] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" HandleID="k8s-pod-network.ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:48.214980 containerd[2011]: 2026-03-14 00:15:48.202 [INFO][5741] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:48.214980 containerd[2011]: 2026-03-14 00:15:48.207 [INFO][5730] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Mar 14 00:15:48.216999 containerd[2011]: time="2026-03-14T00:15:48.216079542Z" level=info msg="TearDown network for sandbox \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\" successfully" Mar 14 00:15:48.216999 containerd[2011]: time="2026-03-14T00:15:48.216126078Z" level=info msg="StopPodSandbox for \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\" returns successfully" Mar 14 00:15:48.217207 containerd[2011]: time="2026-03-14T00:15:48.217127190Z" level=info msg="RemovePodSandbox for \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\"" Mar 14 00:15:48.217207 containerd[2011]: time="2026-03-14T00:15:48.217181226Z" level=info msg="Forcibly stopping sandbox \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\"" Mar 14 00:15:48.441002 containerd[2011]: 2026-03-14 00:15:48.311 [WARNING][5755] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0", GenerateName:"calico-apiserver-65b4c4f55c-", Namespace:"calico-system", SelfLink:"", UID:"dfe8f338-e4ed-49e1-8e8d-196f56df8d36", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b4c4f55c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163", Pod:"calico-apiserver-65b4c4f55c-x5m6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7cf270fc9be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:48.441002 containerd[2011]: 2026-03-14 00:15:48.312 [INFO][5755] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Mar 14 00:15:48.441002 containerd[2011]: 2026-03-14 00:15:48.312 [INFO][5755] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" iface="eth0" netns="" Mar 14 00:15:48.441002 containerd[2011]: 2026-03-14 00:15:48.312 [INFO][5755] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Mar 14 00:15:48.441002 containerd[2011]: 2026-03-14 00:15:48.312 [INFO][5755] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Mar 14 00:15:48.441002 containerd[2011]: 2026-03-14 00:15:48.390 [INFO][5763] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" HandleID="k8s-pod-network.ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:48.441002 containerd[2011]: 2026-03-14 00:15:48.390 [INFO][5763] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:48.441002 containerd[2011]: 2026-03-14 00:15:48.391 [INFO][5763] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:48.441002 containerd[2011]: 2026-03-14 00:15:48.423 [WARNING][5763] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" HandleID="k8s-pod-network.ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:48.441002 containerd[2011]: 2026-03-14 00:15:48.424 [INFO][5763] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" HandleID="k8s-pod-network.ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--x5m6q-eth0" Mar 14 00:15:48.441002 containerd[2011]: 2026-03-14 00:15:48.428 [INFO][5763] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:48.441002 containerd[2011]: 2026-03-14 00:15:48.431 [INFO][5755] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c" Mar 14 00:15:48.441002 containerd[2011]: time="2026-03-14T00:15:48.440916475Z" level=info msg="TearDown network for sandbox \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\" successfully" Mar 14 00:15:48.455127 containerd[2011]: time="2026-03-14T00:15:48.454876039Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:48.455359 containerd[2011]: time="2026-03-14T00:15:48.455009095Z" level=info msg="RemovePodSandbox \"ab91b3f26cf9d675053e47545a59025b8ae3c29aa0628faa8b6572e10233867c\" returns successfully" Mar 14 00:15:48.456874 containerd[2011]: time="2026-03-14T00:15:48.456549235Z" level=info msg="StopPodSandbox for \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\"" Mar 14 00:15:48.482788 containerd[2011]: time="2026-03-14T00:15:48.482600299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:48.486663 containerd[2011]: time="2026-03-14T00:15:48.486578599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 14 00:15:48.487499 containerd[2011]: time="2026-03-14T00:15:48.487425775Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:48.496081 containerd[2011]: time="2026-03-14T00:15:48.495983744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:48.498996 containerd[2011]: time="2026-03-14T00:15:48.498921296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.899548748s" Mar 14 00:15:48.498996 containerd[2011]: time="2026-03-14T00:15:48.498989504Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 14 00:15:48.504150 containerd[2011]: time="2026-03-14T00:15:48.503970092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:15:48.541782 containerd[2011]: time="2026-03-14T00:15:48.540968012Z" level=info msg="CreateContainer within sandbox \"7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 14 00:15:48.585606 containerd[2011]: time="2026-03-14T00:15:48.585526796Z" level=info msg="CreateContainer within sandbox \"7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"182a97b429813246f1fea5e7dee79de7a58318280525f8714354bde8d5825c9a\"" Mar 14 00:15:48.593420 containerd[2011]: time="2026-03-14T00:15:48.590714540Z" level=info msg="StartContainer for \"182a97b429813246f1fea5e7dee79de7a58318280525f8714354bde8d5825c9a\"" Mar 14 00:15:48.603327 ntpd[1983]: Listen normally on 7 vxlan.calico 192.168.64.128:123 Mar 14 00:15:48.604467 ntpd[1983]: 14 Mar 00:15:48 ntpd[1983]: Listen normally on 7 vxlan.calico 192.168.64.128:123 Mar 14 00:15:48.604467 ntpd[1983]: 14 Mar 00:15:48 ntpd[1983]: Listen normally on 8 cali338be02951d [fe80::ecee:eeff:feee:eeee%4]:123 Mar 14 00:15:48.604467 ntpd[1983]: 14 Mar 00:15:48 ntpd[1983]: Listen normally on 9 cali832e8274aea [fe80::ecee:eeff:feee:eeee%5]:123 Mar 14 00:15:48.604467 ntpd[1983]: 14 Mar 00:15:48 ntpd[1983]: Listen normally on 10 cali37c1525e69f [fe80::ecee:eeff:feee:eeee%6]:123 Mar 14 00:15:48.604467 ntpd[1983]: 14 Mar 00:15:48 ntpd[1983]: Listen normally on 11 cali6f4491b4956 [fe80::ecee:eeff:feee:eeee%7]:123 Mar 14 00:15:48.603458 ntpd[1983]: Listen normally on 8 cali338be02951d [fe80::ecee:eeff:feee:eeee%4]:123 Mar 14 00:15:48.603550 ntpd[1983]: Listen 
normally on 9 cali832e8274aea [fe80::ecee:eeff:feee:eeee%5]:123 Mar 14 00:15:48.603624 ntpd[1983]: Listen normally on 10 cali37c1525e69f [fe80::ecee:eeff:feee:eeee%6]:123 Mar 14 00:15:48.603694 ntpd[1983]: Listen normally on 11 cali6f4491b4956 [fe80::ecee:eeff:feee:eeee%7]:123 Mar 14 00:15:48.606192 ntpd[1983]: Listen normally on 12 calib901f2abe28 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 14 00:15:48.607531 ntpd[1983]: 14 Mar 00:15:48 ntpd[1983]: Listen normally on 12 calib901f2abe28 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 14 00:15:48.607531 ntpd[1983]: 14 Mar 00:15:48 ntpd[1983]: Listen normally on 13 cali3693feb2e58 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 14 00:15:48.607531 ntpd[1983]: 14 Mar 00:15:48 ntpd[1983]: Listen normally on 14 cali7cf270fc9be [fe80::ecee:eeff:feee:eeee%10]:123 Mar 14 00:15:48.607531 ntpd[1983]: 14 Mar 00:15:48 ntpd[1983]: Listen normally on 15 cali22d0f195cfd [fe80::ecee:eeff:feee:eeee%11]:123 Mar 14 00:15:48.607531 ntpd[1983]: 14 Mar 00:15:48 ntpd[1983]: Listen normally on 16 vxlan.calico [fe80::6476:68ff:feb6:e34c%12]:123 Mar 14 00:15:48.606334 ntpd[1983]: Listen normally on 13 cali3693feb2e58 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 14 00:15:48.606406 ntpd[1983]: Listen normally on 14 cali7cf270fc9be [fe80::ecee:eeff:feee:eeee%10]:123 Mar 14 00:15:48.606484 ntpd[1983]: Listen normally on 15 cali22d0f195cfd [fe80::ecee:eeff:feee:eeee%11]:123 Mar 14 00:15:48.606556 ntpd[1983]: Listen normally on 16 vxlan.calico [fe80::6476:68ff:feb6:e34c%12]:123 Mar 14 00:15:48.767193 systemd[1]: Started cri-containerd-182a97b429813246f1fea5e7dee79de7a58318280525f8714354bde8d5825c9a.scope - libcontainer container 182a97b429813246f1fea5e7dee79de7a58318280525f8714354bde8d5825c9a. Mar 14 00:15:48.810990 containerd[2011]: 2026-03-14 00:15:48.626 [WARNING][5778] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3826a231-0f38-4b81-9031-95274d5b9189", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a", Pod:"coredns-66bc5c9577-whvxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib901f2abe28", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:48.810990 containerd[2011]: 2026-03-14 00:15:48.639 [INFO][5778] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Mar 14 00:15:48.810990 containerd[2011]: 2026-03-14 00:15:48.639 [INFO][5778] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" iface="eth0" netns="" Mar 14 00:15:48.810990 containerd[2011]: 2026-03-14 00:15:48.639 [INFO][5778] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Mar 14 00:15:48.810990 containerd[2011]: 2026-03-14 00:15:48.640 [INFO][5778] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Mar 14 00:15:48.810990 containerd[2011]: 2026-03-14 00:15:48.781 [INFO][5791] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" HandleID="k8s-pod-network.476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:48.810990 containerd[2011]: 2026-03-14 00:15:48.781 [INFO][5791] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:48.810990 containerd[2011]: 2026-03-14 00:15:48.782 [INFO][5791] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:48.810990 containerd[2011]: 2026-03-14 00:15:48.798 [WARNING][5791] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" HandleID="k8s-pod-network.476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:48.810990 containerd[2011]: 2026-03-14 00:15:48.799 [INFO][5791] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" HandleID="k8s-pod-network.476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:48.810990 containerd[2011]: 2026-03-14 00:15:48.803 [INFO][5791] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:48.810990 containerd[2011]: 2026-03-14 00:15:48.807 [INFO][5778] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Mar 14 00:15:48.813043 containerd[2011]: time="2026-03-14T00:15:48.811061673Z" level=info msg="TearDown network for sandbox \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\" successfully" Mar 14 00:15:48.813043 containerd[2011]: time="2026-03-14T00:15:48.811125465Z" level=info msg="StopPodSandbox for \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\" returns successfully" Mar 14 00:15:48.813043 containerd[2011]: time="2026-03-14T00:15:48.812253849Z" level=info msg="RemovePodSandbox for \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\"" Mar 14 00:15:48.813043 containerd[2011]: time="2026-03-14T00:15:48.812305113Z" level=info msg="Forcibly stopping sandbox \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\"" Mar 14 00:15:48.912358 containerd[2011]: time="2026-03-14T00:15:48.911252182Z" level=info msg="StartContainer for \"182a97b429813246f1fea5e7dee79de7a58318280525f8714354bde8d5825c9a\" returns successfully" Mar 14 00:15:49.049327 
containerd[2011]: 2026-03-14 00:15:48.924 [WARNING][5826] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3826a231-0f38-4b81-9031-95274d5b9189", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"e97b33b1b115aa93fe07860e5abfb1ab50cd84cd0fc501d4bd2543ea41cf535a", Pod:"coredns-66bc5c9577-whvxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib901f2abe28", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:49.049327 containerd[2011]: 2026-03-14 00:15:48.925 [INFO][5826] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Mar 14 00:15:49.049327 containerd[2011]: 2026-03-14 00:15:48.925 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" iface="eth0" netns="" Mar 14 00:15:49.049327 containerd[2011]: 2026-03-14 00:15:48.925 [INFO][5826] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Mar 14 00:15:49.049327 containerd[2011]: 2026-03-14 00:15:48.925 [INFO][5826] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Mar 14 00:15:49.049327 containerd[2011]: 2026-03-14 00:15:49.021 [INFO][5843] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" HandleID="k8s-pod-network.476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:49.049327 containerd[2011]: 2026-03-14 00:15:49.021 [INFO][5843] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:49.049327 containerd[2011]: 2026-03-14 00:15:49.021 [INFO][5843] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:49.049327 containerd[2011]: 2026-03-14 00:15:49.037 [WARNING][5843] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" HandleID="k8s-pod-network.476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:49.049327 containerd[2011]: 2026-03-14 00:15:49.038 [INFO][5843] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" HandleID="k8s-pod-network.476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--whvxh-eth0" Mar 14 00:15:49.049327 containerd[2011]: 2026-03-14 00:15:49.041 [INFO][5843] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:49.049327 containerd[2011]: 2026-03-14 00:15:49.044 [INFO][5826] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665" Mar 14 00:15:49.050521 containerd[2011]: time="2026-03-14T00:15:49.048795522Z" level=info msg="TearDown network for sandbox \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\" successfully" Mar 14 00:15:49.066892 containerd[2011]: time="2026-03-14T00:15:49.066784554Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:49.067070 containerd[2011]: time="2026-03-14T00:15:49.066936450Z" level=info msg="RemovePodSandbox \"476eac9c240f050932e38f77f31dcd62df3c6632f260f51ca3ad553acf199665\" returns successfully" Mar 14 00:15:49.068193 containerd[2011]: time="2026-03-14T00:15:49.067748178Z" level=info msg="StopPodSandbox for \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\"" Mar 14 00:15:49.257292 containerd[2011]: 2026-03-14 00:15:49.158 [WARNING][5865] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0", GenerateName:"calico-kube-controllers-68785684b4-", Namespace:"calico-system", SelfLink:"", UID:"219e5ba8-ebab-482b-96b4-7af0503f271c", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68785684b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648", Pod:"calico-kube-controllers-68785684b4-j7pqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali832e8274aea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:49.257292 containerd[2011]: 2026-03-14 00:15:49.159 [INFO][5865] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Mar 14 00:15:49.257292 containerd[2011]: 2026-03-14 00:15:49.159 [INFO][5865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" iface="eth0" netns="" Mar 14 00:15:49.257292 containerd[2011]: 2026-03-14 00:15:49.159 [INFO][5865] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Mar 14 00:15:49.257292 containerd[2011]: 2026-03-14 00:15:49.159 [INFO][5865] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Mar 14 00:15:49.257292 containerd[2011]: 2026-03-14 00:15:49.218 [INFO][5872] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" HandleID="k8s-pod-network.b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Workload="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:49.257292 containerd[2011]: 2026-03-14 00:15:49.218 [INFO][5872] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:49.257292 containerd[2011]: 2026-03-14 00:15:49.219 [INFO][5872] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:49.257292 containerd[2011]: 2026-03-14 00:15:49.242 [WARNING][5872] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" HandleID="k8s-pod-network.b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Workload="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:49.257292 containerd[2011]: 2026-03-14 00:15:49.242 [INFO][5872] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" HandleID="k8s-pod-network.b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Workload="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:49.257292 containerd[2011]: 2026-03-14 00:15:49.245 [INFO][5872] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:49.257292 containerd[2011]: 2026-03-14 00:15:49.251 [INFO][5865] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Mar 14 00:15:49.260693 containerd[2011]: time="2026-03-14T00:15:49.257968495Z" level=info msg="TearDown network for sandbox \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\" successfully" Mar 14 00:15:49.260693 containerd[2011]: time="2026-03-14T00:15:49.258014983Z" level=info msg="StopPodSandbox for \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\" returns successfully" Mar 14 00:15:49.260693 containerd[2011]: time="2026-03-14T00:15:49.259312651Z" level=info msg="RemovePodSandbox for \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\"" Mar 14 00:15:49.260693 containerd[2011]: time="2026-03-14T00:15:49.259372195Z" level=info msg="Forcibly stopping sandbox \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\"" Mar 14 00:15:49.311741 systemd[1]: Started sshd@8-172.31.26.39:22-68.220.241.50:55214.service - OpenSSH per-connection server daemon (68.220.241.50:55214). 
Mar 14 00:15:49.443966 containerd[2011]: 2026-03-14 00:15:49.377 [WARNING][5889] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0", GenerateName:"calico-kube-controllers-68785684b4-", Namespace:"calico-system", SelfLink:"", UID:"219e5ba8-ebab-482b-96b4-7af0503f271c", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68785684b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"7aab1b49e4dc0ed987dd79260750d7cd3ec9c4bfe16375c3c3470a9aa68f3648", Pod:"calico-kube-controllers-68785684b4-j7pqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali832e8274aea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:49.443966 containerd[2011]: 2026-03-14 00:15:49.378 [INFO][5889] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Mar 14 00:15:49.443966 containerd[2011]: 2026-03-14 00:15:49.378 [INFO][5889] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" iface="eth0" netns="" Mar 14 00:15:49.443966 containerd[2011]: 2026-03-14 00:15:49.378 [INFO][5889] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Mar 14 00:15:49.443966 containerd[2011]: 2026-03-14 00:15:49.378 [INFO][5889] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Mar 14 00:15:49.443966 containerd[2011]: 2026-03-14 00:15:49.419 [INFO][5900] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" HandleID="k8s-pod-network.b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Workload="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:49.443966 containerd[2011]: 2026-03-14 00:15:49.420 [INFO][5900] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:49.443966 containerd[2011]: 2026-03-14 00:15:49.420 [INFO][5900] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:49.443966 containerd[2011]: 2026-03-14 00:15:49.434 [WARNING][5900] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" HandleID="k8s-pod-network.b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Workload="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:49.443966 containerd[2011]: 2026-03-14 00:15:49.434 [INFO][5900] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" HandleID="k8s-pod-network.b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Workload="ip--172--31--26--39-k8s-calico--kube--controllers--68785684b4--j7pqm-eth0" Mar 14 00:15:49.443966 containerd[2011]: 2026-03-14 00:15:49.437 [INFO][5900] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:49.443966 containerd[2011]: 2026-03-14 00:15:49.440 [INFO][5889] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259" Mar 14 00:15:49.445188 containerd[2011]: time="2026-03-14T00:15:49.445090100Z" level=info msg="TearDown network for sandbox \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\" successfully" Mar 14 00:15:49.452249 containerd[2011]: time="2026-03-14T00:15:49.452170676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:49.452436 containerd[2011]: time="2026-03-14T00:15:49.452271428Z" level=info msg="RemovePodSandbox \"b312e702170875dc625f176a9e3d66b6013c480cf42a326d393a75a8f5cbf259\" returns successfully" Mar 14 00:15:49.453104 containerd[2011]: time="2026-03-14T00:15:49.453006896Z" level=info msg="StopPodSandbox for \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\"" Mar 14 00:15:49.636334 containerd[2011]: 2026-03-14 00:15:49.529 [WARNING][5914] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e308c40-a8ad-497a-822b-a95b9df4915b", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397", Pod:"csi-node-driver-4xhrv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali338be02951d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:49.636334 containerd[2011]: 2026-03-14 00:15:49.530 [INFO][5914] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Mar 14 00:15:49.636334 containerd[2011]: 2026-03-14 00:15:49.532 [INFO][5914] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" iface="eth0" netns="" Mar 14 00:15:49.636334 containerd[2011]: 2026-03-14 00:15:49.532 [INFO][5914] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Mar 14 00:15:49.636334 containerd[2011]: 2026-03-14 00:15:49.533 [INFO][5914] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Mar 14 00:15:49.636334 containerd[2011]: 2026-03-14 00:15:49.589 [INFO][5921] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" HandleID="k8s-pod-network.069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Workload="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:49.636334 containerd[2011]: 2026-03-14 00:15:49.589 [INFO][5921] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:49.636334 containerd[2011]: 2026-03-14 00:15:49.589 [INFO][5921] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:49.636334 containerd[2011]: 2026-03-14 00:15:49.619 [WARNING][5921] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" HandleID="k8s-pod-network.069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Workload="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:49.636334 containerd[2011]: 2026-03-14 00:15:49.619 [INFO][5921] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" HandleID="k8s-pod-network.069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Workload="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:49.636334 containerd[2011]: 2026-03-14 00:15:49.626 [INFO][5921] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:49.636334 containerd[2011]: 2026-03-14 00:15:49.631 [INFO][5914] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Mar 14 00:15:49.636334 containerd[2011]: time="2026-03-14T00:15:49.636287937Z" level=info msg="TearDown network for sandbox \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\" successfully" Mar 14 00:15:49.639579 containerd[2011]: time="2026-03-14T00:15:49.636349125Z" level=info msg="StopPodSandbox for \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\" returns successfully" Mar 14 00:15:49.640683 containerd[2011]: time="2026-03-14T00:15:49.640621929Z" level=info msg="RemovePodSandbox for \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\"" Mar 14 00:15:49.640911 containerd[2011]: time="2026-03-14T00:15:49.640687677Z" level=info msg="Forcibly stopping sandbox \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\"" Mar 14 00:15:49.736517 kubelet[3430]: I0314 00:15:49.736283 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rccd6" podStartSLOduration=57.73381201 podStartE2EDuration="57.73381201s" 
podCreationTimestamp="2026-03-14 00:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:42.754046883 +0000 UTC m=+55.184946227" watchObservedRunningTime="2026-03-14 00:15:49.73381201 +0000 UTC m=+62.164711354" Mar 14 00:15:49.741123 kubelet[3430]: I0314 00:15:49.736815 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68785684b4-j7pqm" podStartSLOduration=24.364558597 podStartE2EDuration="31.736775902s" podCreationTimestamp="2026-03-14 00:15:18 +0000 UTC" firstStartedPulling="2026-03-14 00:15:41.130450091 +0000 UTC m=+53.561349435" lastFinishedPulling="2026-03-14 00:15:48.502667396 +0000 UTC m=+60.933566740" observedRunningTime="2026-03-14 00:15:49.736492318 +0000 UTC m=+62.167391686" watchObservedRunningTime="2026-03-14 00:15:49.736775902 +0000 UTC m=+62.167675270" Mar 14 00:15:49.856124 sshd[5894]: Accepted publickey for core from 68.220.241.50 port 55214 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:15:49.862717 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:49.882907 systemd-logind[1988]: New session 9 of user core. Mar 14 00:15:49.893179 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 14 00:15:50.038818 containerd[2011]: 2026-03-14 00:15:49.881 [WARNING][5936] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e308c40-a8ad-497a-822b-a95b9df4915b", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397", Pod:"csi-node-driver-4xhrv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali338be02951d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:50.038818 containerd[2011]: 2026-03-14 00:15:49.890 [INFO][5936] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Mar 14 00:15:50.038818 containerd[2011]: 2026-03-14 00:15:49.890 [INFO][5936] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" iface="eth0" netns="" Mar 14 00:15:50.038818 containerd[2011]: 2026-03-14 00:15:49.890 [INFO][5936] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Mar 14 00:15:50.038818 containerd[2011]: 2026-03-14 00:15:49.890 [INFO][5936] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Mar 14 00:15:50.038818 containerd[2011]: 2026-03-14 00:15:50.004 [INFO][5960] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" HandleID="k8s-pod-network.069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Workload="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:50.038818 containerd[2011]: 2026-03-14 00:15:50.005 [INFO][5960] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:50.038818 containerd[2011]: 2026-03-14 00:15:50.005 [INFO][5960] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:50.038818 containerd[2011]: 2026-03-14 00:15:50.027 [WARNING][5960] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" HandleID="k8s-pod-network.069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Workload="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:50.038818 containerd[2011]: 2026-03-14 00:15:50.027 [INFO][5960] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" HandleID="k8s-pod-network.069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Workload="ip--172--31--26--39-k8s-csi--node--driver--4xhrv-eth0" Mar 14 00:15:50.038818 containerd[2011]: 2026-03-14 00:15:50.030 [INFO][5960] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:50.038818 containerd[2011]: 2026-03-14 00:15:50.035 [INFO][5936] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8" Mar 14 00:15:50.040053 containerd[2011]: time="2026-03-14T00:15:50.038921467Z" level=info msg="TearDown network for sandbox \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\" successfully" Mar 14 00:15:50.046687 containerd[2011]: time="2026-03-14T00:15:50.046617607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:50.046946 containerd[2011]: time="2026-03-14T00:15:50.046715059Z" level=info msg="RemovePodSandbox \"069da199eacbbb97f6f96fc7efbead0247cde2c1dc93a41a6b858f22e4e4ada8\" returns successfully" Mar 14 00:15:50.048183 containerd[2011]: time="2026-03-14T00:15:50.047582983Z" level=info msg="StopPodSandbox for \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\"" Mar 14 00:15:50.383508 containerd[2011]: 2026-03-14 00:15:50.227 [WARNING][5979] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" WorkloadEndpoint="ip--172--31--26--39-k8s-whisker--656d55fdc8--g69c4-eth0" Mar 14 00:15:50.383508 containerd[2011]: 2026-03-14 00:15:50.227 [INFO][5979] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Mar 14 00:15:50.383508 containerd[2011]: 2026-03-14 00:15:50.227 [INFO][5979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" iface="eth0" netns="" Mar 14 00:15:50.383508 containerd[2011]: 2026-03-14 00:15:50.227 [INFO][5979] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Mar 14 00:15:50.383508 containerd[2011]: 2026-03-14 00:15:50.227 [INFO][5979] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Mar 14 00:15:50.383508 containerd[2011]: 2026-03-14 00:15:50.346 [INFO][6002] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" HandleID="k8s-pod-network.36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Workload="ip--172--31--26--39-k8s-whisker--656d55fdc8--g69c4-eth0" Mar 14 00:15:50.383508 containerd[2011]: 2026-03-14 00:15:50.347 [INFO][6002] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:50.383508 containerd[2011]: 2026-03-14 00:15:50.347 [INFO][6002] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:50.383508 containerd[2011]: 2026-03-14 00:15:50.372 [WARNING][6002] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" HandleID="k8s-pod-network.36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Workload="ip--172--31--26--39-k8s-whisker--656d55fdc8--g69c4-eth0" Mar 14 00:15:50.383508 containerd[2011]: 2026-03-14 00:15:50.372 [INFO][6002] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" HandleID="k8s-pod-network.36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Workload="ip--172--31--26--39-k8s-whisker--656d55fdc8--g69c4-eth0" Mar 14 00:15:50.383508 containerd[2011]: 2026-03-14 00:15:50.376 [INFO][6002] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:50.383508 containerd[2011]: 2026-03-14 00:15:50.380 [INFO][5979] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Mar 14 00:15:50.384400 containerd[2011]: time="2026-03-14T00:15:50.384359025Z" level=info msg="TearDown network for sandbox \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\" successfully" Mar 14 00:15:50.384510 containerd[2011]: time="2026-03-14T00:15:50.384482817Z" level=info msg="StopPodSandbox for \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\" returns successfully" Mar 14 00:15:50.385570 containerd[2011]: time="2026-03-14T00:15:50.385520565Z" level=info msg="RemovePodSandbox for \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\"" Mar 14 00:15:50.385733 containerd[2011]: time="2026-03-14T00:15:50.385582605Z" level=info msg="Forcibly stopping sandbox \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\"" Mar 14 00:15:50.451140 sshd[5894]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:50.463366 systemd[1]: sshd@8-172.31.26.39:22-68.220.241.50:55214.service: Deactivated successfully. 
Mar 14 00:15:50.469895 systemd[1]: session-9.scope: Deactivated successfully. Mar 14 00:15:50.473142 systemd-logind[1988]: Session 9 logged out. Waiting for processes to exit. Mar 14 00:15:50.478947 systemd-logind[1988]: Removed session 9. Mar 14 00:15:50.563116 containerd[2011]: 2026-03-14 00:15:50.476 [WARNING][6020] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" WorkloadEndpoint="ip--172--31--26--39-k8s-whisker--656d55fdc8--g69c4-eth0" Mar 14 00:15:50.563116 containerd[2011]: 2026-03-14 00:15:50.478 [INFO][6020] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Mar 14 00:15:50.563116 containerd[2011]: 2026-03-14 00:15:50.478 [INFO][6020] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" iface="eth0" netns="" Mar 14 00:15:50.563116 containerd[2011]: 2026-03-14 00:15:50.479 [INFO][6020] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Mar 14 00:15:50.563116 containerd[2011]: 2026-03-14 00:15:50.479 [INFO][6020] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Mar 14 00:15:50.563116 containerd[2011]: 2026-03-14 00:15:50.529 [INFO][6029] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" HandleID="k8s-pod-network.36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Workload="ip--172--31--26--39-k8s-whisker--656d55fdc8--g69c4-eth0" Mar 14 00:15:50.563116 containerd[2011]: 2026-03-14 00:15:50.530 [INFO][6029] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:15:50.563116 containerd[2011]: 2026-03-14 00:15:50.530 [INFO][6029] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:50.563116 containerd[2011]: 2026-03-14 00:15:50.547 [WARNING][6029] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" HandleID="k8s-pod-network.36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Workload="ip--172--31--26--39-k8s-whisker--656d55fdc8--g69c4-eth0" Mar 14 00:15:50.563116 containerd[2011]: 2026-03-14 00:15:50.547 [INFO][6029] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" HandleID="k8s-pod-network.36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Workload="ip--172--31--26--39-k8s-whisker--656d55fdc8--g69c4-eth0" Mar 14 00:15:50.563116 containerd[2011]: 2026-03-14 00:15:50.554 [INFO][6029] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:50.563116 containerd[2011]: 2026-03-14 00:15:50.559 [INFO][6020] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e" Mar 14 00:15:50.564081 containerd[2011]: time="2026-03-14T00:15:50.563998366Z" level=info msg="TearDown network for sandbox \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\" successfully" Mar 14 00:15:50.571261 containerd[2011]: time="2026-03-14T00:15:50.571191406Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:50.571445 containerd[2011]: time="2026-03-14T00:15:50.571299490Z" level=info msg="RemovePodSandbox \"36fe68c556843db6c069892a0ffcfb4b12dabd4bd943bfde6a35ab8ca59b264e\" returns successfully" Mar 14 00:15:50.572409 containerd[2011]: time="2026-03-14T00:15:50.572330782Z" level=info msg="StopPodSandbox for \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\"" Mar 14 00:15:50.726450 containerd[2011]: 2026-03-14 00:15:50.647 [WARNING][6044] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"95ad4bc9-cde4-4484-81ed-f6e09950a754", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b", Pod:"goldmane-cccfbd5cf-pbgth", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali37c1525e69f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:50.726450 containerd[2011]: 2026-03-14 00:15:50.647 [INFO][6044] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Mar 14 00:15:50.726450 containerd[2011]: 2026-03-14 00:15:50.647 [INFO][6044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" iface="eth0" netns="" Mar 14 00:15:50.726450 containerd[2011]: 2026-03-14 00:15:50.647 [INFO][6044] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Mar 14 00:15:50.726450 containerd[2011]: 2026-03-14 00:15:50.647 [INFO][6044] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Mar 14 00:15:50.726450 containerd[2011]: 2026-03-14 00:15:50.701 [INFO][6051] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" HandleID="k8s-pod-network.1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Workload="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:50.726450 containerd[2011]: 2026-03-14 00:15:50.702 [INFO][6051] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:50.726450 containerd[2011]: 2026-03-14 00:15:50.702 [INFO][6051] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:50.726450 containerd[2011]: 2026-03-14 00:15:50.716 [WARNING][6051] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" HandleID="k8s-pod-network.1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Workload="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:50.726450 containerd[2011]: 2026-03-14 00:15:50.716 [INFO][6051] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" HandleID="k8s-pod-network.1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Workload="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:50.726450 containerd[2011]: 2026-03-14 00:15:50.719 [INFO][6051] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:50.726450 containerd[2011]: 2026-03-14 00:15:50.723 [INFO][6044] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Mar 14 00:15:50.726450 containerd[2011]: time="2026-03-14T00:15:50.726430607Z" level=info msg="TearDown network for sandbox \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\" successfully" Mar 14 00:15:50.729180 containerd[2011]: time="2026-03-14T00:15:50.726468383Z" level=info msg="StopPodSandbox for \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\" returns successfully" Mar 14 00:15:50.729180 containerd[2011]: time="2026-03-14T00:15:50.728426147Z" level=info msg="RemovePodSandbox for \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\"" Mar 14 00:15:50.729180 containerd[2011]: time="2026-03-14T00:15:50.728477687Z" level=info msg="Forcibly stopping sandbox \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\"" Mar 14 00:15:50.897950 containerd[2011]: 2026-03-14 00:15:50.808 [WARNING][6065] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"95ad4bc9-cde4-4484-81ed-f6e09950a754", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b", Pod:"goldmane-cccfbd5cf-pbgth", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali37c1525e69f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:50.897950 containerd[2011]: 2026-03-14 00:15:50.808 [INFO][6065] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Mar 14 00:15:50.897950 containerd[2011]: 2026-03-14 00:15:50.808 [INFO][6065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" iface="eth0" netns="" Mar 14 00:15:50.897950 containerd[2011]: 2026-03-14 00:15:50.808 [INFO][6065] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Mar 14 00:15:50.897950 containerd[2011]: 2026-03-14 00:15:50.808 [INFO][6065] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Mar 14 00:15:50.897950 containerd[2011]: 2026-03-14 00:15:50.866 [INFO][6072] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" HandleID="k8s-pod-network.1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Workload="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:50.897950 containerd[2011]: 2026-03-14 00:15:50.866 [INFO][6072] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:50.897950 containerd[2011]: 2026-03-14 00:15:50.866 [INFO][6072] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:50.897950 containerd[2011]: 2026-03-14 00:15:50.882 [WARNING][6072] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" HandleID="k8s-pod-network.1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Workload="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:50.897950 containerd[2011]: 2026-03-14 00:15:50.882 [INFO][6072] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" HandleID="k8s-pod-network.1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Workload="ip--172--31--26--39-k8s-goldmane--cccfbd5cf--pbgth-eth0" Mar 14 00:15:50.897950 containerd[2011]: 2026-03-14 00:15:50.889 [INFO][6072] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:50.897950 containerd[2011]: 2026-03-14 00:15:50.894 [INFO][6065] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db" Mar 14 00:15:50.897950 containerd[2011]: time="2026-03-14T00:15:50.897592199Z" level=info msg="TearDown network for sandbox \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\" successfully" Mar 14 00:15:50.905239 containerd[2011]: time="2026-03-14T00:15:50.905176332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:50.905396 containerd[2011]: time="2026-03-14T00:15:50.905320692Z" level=info msg="RemovePodSandbox \"1201fe5d2151833b82f3e0f138a0d7ea784d30f9cebd560845c42c53a97a84db\" returns successfully" Mar 14 00:15:50.906552 containerd[2011]: time="2026-03-14T00:15:50.906057384Z" level=info msg="StopPodSandbox for \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\"" Mar 14 00:15:51.060700 containerd[2011]: 2026-03-14 00:15:50.980 [WARNING][6086] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0", GenerateName:"calico-apiserver-65b4c4f55c-", Namespace:"calico-system", SelfLink:"", UID:"ef193aa7-7886-492e-84a7-367d7c11360a", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b4c4f55c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a", Pod:"calico-apiserver-65b4c4f55c-t5bf4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6f4491b4956", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:51.060700 containerd[2011]: 2026-03-14 00:15:50.980 [INFO][6086] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Mar 14 00:15:51.060700 containerd[2011]: 2026-03-14 00:15:50.980 [INFO][6086] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" iface="eth0" netns="" Mar 14 00:15:51.060700 containerd[2011]: 2026-03-14 00:15:50.980 [INFO][6086] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Mar 14 00:15:51.060700 containerd[2011]: 2026-03-14 00:15:50.980 [INFO][6086] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Mar 14 00:15:51.060700 containerd[2011]: 2026-03-14 00:15:51.028 [INFO][6093] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" HandleID="k8s-pod-network.d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:51.060700 containerd[2011]: 2026-03-14 00:15:51.029 [INFO][6093] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:51.060700 containerd[2011]: 2026-03-14 00:15:51.029 [INFO][6093] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:51.060700 containerd[2011]: 2026-03-14 00:15:51.049 [WARNING][6093] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" HandleID="k8s-pod-network.d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:51.060700 containerd[2011]: 2026-03-14 00:15:51.049 [INFO][6093] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" HandleID="k8s-pod-network.d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:51.060700 containerd[2011]: 2026-03-14 00:15:51.052 [INFO][6093] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:51.060700 containerd[2011]: 2026-03-14 00:15:51.056 [INFO][6086] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Mar 14 00:15:51.060700 containerd[2011]: time="2026-03-14T00:15:51.060514556Z" level=info msg="TearDown network for sandbox \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\" successfully" Mar 14 00:15:51.060700 containerd[2011]: time="2026-03-14T00:15:51.060555164Z" level=info msg="StopPodSandbox for \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\" returns successfully" Mar 14 00:15:51.063260 containerd[2011]: time="2026-03-14T00:15:51.061487180Z" level=info msg="RemovePodSandbox for \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\"" Mar 14 00:15:51.063260 containerd[2011]: time="2026-03-14T00:15:51.061538396Z" level=info msg="Forcibly stopping sandbox \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\"" Mar 14 00:15:51.214892 containerd[2011]: 2026-03-14 00:15:51.137 [WARNING][6108] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0", GenerateName:"calico-apiserver-65b4c4f55c-", Namespace:"calico-system", SelfLink:"", UID:"ef193aa7-7886-492e-84a7-367d7c11360a", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b4c4f55c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a", Pod:"calico-apiserver-65b4c4f55c-t5bf4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6f4491b4956", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:51.214892 containerd[2011]: 2026-03-14 00:15:51.137 [INFO][6108] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Mar 14 00:15:51.214892 containerd[2011]: 2026-03-14 00:15:51.137 [INFO][6108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" iface="eth0" netns="" Mar 14 00:15:51.214892 containerd[2011]: 2026-03-14 00:15:51.137 [INFO][6108] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Mar 14 00:15:51.214892 containerd[2011]: 2026-03-14 00:15:51.137 [INFO][6108] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Mar 14 00:15:51.214892 containerd[2011]: 2026-03-14 00:15:51.185 [INFO][6115] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" HandleID="k8s-pod-network.d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:51.214892 containerd[2011]: 2026-03-14 00:15:51.186 [INFO][6115] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:51.214892 containerd[2011]: 2026-03-14 00:15:51.186 [INFO][6115] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:51.214892 containerd[2011]: 2026-03-14 00:15:51.201 [WARNING][6115] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" HandleID="k8s-pod-network.d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:51.214892 containerd[2011]: 2026-03-14 00:15:51.201 [INFO][6115] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" HandleID="k8s-pod-network.d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Workload="ip--172--31--26--39-k8s-calico--apiserver--65b4c4f55c--t5bf4-eth0" Mar 14 00:15:51.214892 containerd[2011]: 2026-03-14 00:15:51.206 [INFO][6115] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:51.214892 containerd[2011]: 2026-03-14 00:15:51.211 [INFO][6108] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539" Mar 14 00:15:51.214892 containerd[2011]: time="2026-03-14T00:15:51.214778709Z" level=info msg="TearDown network for sandbox \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\" successfully" Mar 14 00:15:51.223235 containerd[2011]: time="2026-03-14T00:15:51.222628425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:51.223235 containerd[2011]: time="2026-03-14T00:15:51.222739497Z" level=info msg="RemovePodSandbox \"d7e5b1e85a8b665efe9c058b2f57c790be7aced9731a98db8d771e3e965b4539\" returns successfully" Mar 14 00:15:51.223469 containerd[2011]: time="2026-03-14T00:15:51.223369677Z" level=info msg="StopPodSandbox for \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\"" Mar 14 00:15:51.393513 containerd[2011]: 2026-03-14 00:15:51.298 [WARNING][6130] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e2fe8cef-b474-4b32-815f-59d01d17b696", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2", Pod:"coredns-66bc5c9577-rccd6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3693feb2e58", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:51.393513 containerd[2011]: 2026-03-14 00:15:51.298 [INFO][6130] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Mar 14 00:15:51.393513 containerd[2011]: 2026-03-14 00:15:51.298 [INFO][6130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" iface="eth0" netns="" Mar 14 00:15:51.393513 containerd[2011]: 2026-03-14 00:15:51.298 [INFO][6130] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Mar 14 00:15:51.393513 containerd[2011]: 2026-03-14 00:15:51.299 [INFO][6130] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Mar 14 00:15:51.393513 containerd[2011]: 2026-03-14 00:15:51.349 [INFO][6137] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" HandleID="k8s-pod-network.336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:51.393513 containerd[2011]: 2026-03-14 00:15:51.349 [INFO][6137] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:51.393513 containerd[2011]: 2026-03-14 00:15:51.349 [INFO][6137] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:51.393513 containerd[2011]: 2026-03-14 00:15:51.371 [WARNING][6137] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" HandleID="k8s-pod-network.336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:51.393513 containerd[2011]: 2026-03-14 00:15:51.371 [INFO][6137] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" HandleID="k8s-pod-network.336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:51.393513 containerd[2011]: 2026-03-14 00:15:51.377 [INFO][6137] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:51.393513 containerd[2011]: 2026-03-14 00:15:51.384 [INFO][6130] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Mar 14 00:15:51.393513 containerd[2011]: time="2026-03-14T00:15:51.392720794Z" level=info msg="TearDown network for sandbox \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\" successfully" Mar 14 00:15:51.393513 containerd[2011]: time="2026-03-14T00:15:51.392785774Z" level=info msg="StopPodSandbox for \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\" returns successfully" Mar 14 00:15:51.394520 containerd[2011]: time="2026-03-14T00:15:51.393900394Z" level=info msg="RemovePodSandbox for \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\"" Mar 14 00:15:51.394520 containerd[2011]: time="2026-03-14T00:15:51.393987610Z" level=info msg="Forcibly stopping sandbox \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\"" Mar 14 00:15:51.630880 containerd[2011]: 2026-03-14 00:15:51.559 [WARNING][6153] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e2fe8cef-b474-4b32-815f-59d01d17b696", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-39", ContainerID:"a575d3b0733e479b1f1a236fc0fccdc0794dac2ef8ae32f92d56e26f5ac602f2", Pod:"coredns-66bc5c9577-rccd6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3693feb2e58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:51.630880 containerd[2011]: 2026-03-14 00:15:51.560 [INFO][6153] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Mar 14 00:15:51.630880 containerd[2011]: 2026-03-14 00:15:51.560 [INFO][6153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" iface="eth0" netns="" Mar 14 00:15:51.630880 containerd[2011]: 2026-03-14 00:15:51.560 [INFO][6153] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Mar 14 00:15:51.630880 containerd[2011]: 2026-03-14 00:15:51.560 [INFO][6153] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Mar 14 00:15:51.630880 containerd[2011]: 2026-03-14 00:15:51.602 [INFO][6160] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" HandleID="k8s-pod-network.336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:51.630880 containerd[2011]: 2026-03-14 00:15:51.602 [INFO][6160] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:51.630880 containerd[2011]: 2026-03-14 00:15:51.603 [INFO][6160] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:51.630880 containerd[2011]: 2026-03-14 00:15:51.620 [WARNING][6160] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" HandleID="k8s-pod-network.336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:51.630880 containerd[2011]: 2026-03-14 00:15:51.620 [INFO][6160] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" HandleID="k8s-pod-network.336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Workload="ip--172--31--26--39-k8s-coredns--66bc5c9577--rccd6-eth0" Mar 14 00:15:51.630880 containerd[2011]: 2026-03-14 00:15:51.624 [INFO][6160] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:51.630880 containerd[2011]: 2026-03-14 00:15:51.627 [INFO][6153] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa" Mar 14 00:15:51.632519 containerd[2011]: time="2026-03-14T00:15:51.630925295Z" level=info msg="TearDown network for sandbox \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\" successfully" Mar 14 00:15:51.643178 containerd[2011]: time="2026-03-14T00:15:51.643094711Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:15:51.643384 containerd[2011]: time="2026-03-14T00:15:51.643197683Z" level=info msg="RemovePodSandbox \"336b41285bd7b66f42986cbfa3f03a7cd1695b9cf30226a718bd61e433242caa\" returns successfully" Mar 14 00:15:53.146560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4083752090.mount: Deactivated successfully. 
Mar 14 00:15:53.804565 containerd[2011]: time="2026-03-14T00:15:53.804480878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:53.807126 containerd[2011]: time="2026-03-14T00:15:53.807053438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 14 00:15:53.809522 containerd[2011]: time="2026-03-14T00:15:53.809443958Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:53.817868 containerd[2011]: time="2026-03-14T00:15:53.816457142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:53.818373 containerd[2011]: time="2026-03-14T00:15:53.818319902Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 5.314279562s" Mar 14 00:15:53.818506 containerd[2011]: time="2026-03-14T00:15:53.818475674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 14 00:15:53.820988 containerd[2011]: time="2026-03-14T00:15:53.820944074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:15:53.828321 containerd[2011]: time="2026-03-14T00:15:53.827750534Z" level=info msg="CreateContainer within sandbox 
\"1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 14 00:15:53.866991 containerd[2011]: time="2026-03-14T00:15:53.866812634Z" level=info msg="CreateContainer within sandbox \"1d2ff154f64bfbea7e6fcd200398fea20f804d122362680345fb48997f31b76b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"42c646b894286cec4e5b9dda0f0bdf5a6736b42126274e6c9e65497112d8a4b8\"" Mar 14 00:15:53.871928 containerd[2011]: time="2026-03-14T00:15:53.871865930Z" level=info msg="StartContainer for \"42c646b894286cec4e5b9dda0f0bdf5a6736b42126274e6c9e65497112d8a4b8\"" Mar 14 00:15:53.940132 systemd[1]: Started cri-containerd-42c646b894286cec4e5b9dda0f0bdf5a6736b42126274e6c9e65497112d8a4b8.scope - libcontainer container 42c646b894286cec4e5b9dda0f0bdf5a6736b42126274e6c9e65497112d8a4b8. Mar 14 00:15:54.015580 containerd[2011]: time="2026-03-14T00:15:54.015335567Z" level=info msg="StartContainer for \"42c646b894286cec4e5b9dda0f0bdf5a6736b42126274e6c9e65497112d8a4b8\" returns successfully" Mar 14 00:15:54.758409 kubelet[3430]: I0314 00:15:54.758282 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-pbgth" podStartSLOduration=28.183275821 podStartE2EDuration="40.758256567s" podCreationTimestamp="2026-03-14 00:15:14 +0000 UTC" firstStartedPulling="2026-03-14 00:15:41.245684772 +0000 UTC m=+53.676584104" lastFinishedPulling="2026-03-14 00:15:53.820665518 +0000 UTC m=+66.251564850" observedRunningTime="2026-03-14 00:15:54.754259007 +0000 UTC m=+67.185158399" watchObservedRunningTime="2026-03-14 00:15:54.758256567 +0000 UTC m=+67.189155923" Mar 14 00:15:55.562447 systemd[1]: Started sshd@9-172.31.26.39:22-68.220.241.50:34052.service - OpenSSH per-connection server daemon (68.220.241.50:34052). 
Mar 14 00:15:56.158745 sshd[6261]: Accepted publickey for core from 68.220.241.50 port 34052 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:15:56.165363 sshd[6261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:56.181703 systemd-logind[1988]: New session 10 of user core. Mar 14 00:15:56.187140 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 14 00:15:56.710473 containerd[2011]: time="2026-03-14T00:15:56.707476204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:56.712205 containerd[2011]: time="2026-03-14T00:15:56.712136416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 14 00:15:56.715678 containerd[2011]: time="2026-03-14T00:15:56.715027432Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:56.723318 containerd[2011]: time="2026-03-14T00:15:56.723241816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:56.726077 containerd[2011]: time="2026-03-14T00:15:56.725666860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.904341966s" Mar 14 00:15:56.726077 containerd[2011]: time="2026-03-14T00:15:56.725731420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" 
returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 14 00:15:56.729219 containerd[2011]: time="2026-03-14T00:15:56.728260252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:15:56.736247 containerd[2011]: time="2026-03-14T00:15:56.736118524Z" level=info msg="CreateContainer within sandbox \"ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:15:56.763639 containerd[2011]: time="2026-03-14T00:15:56.763411001Z" level=info msg="CreateContainer within sandbox \"ba48fac8e49e6a26fbb55df72bc137f0b1f0af17b089dd3020937430cf42fe5a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7c24c5640fa51594604aa2aab8ee71cebdddc3c366741a8b7596d343100096ae\"" Mar 14 00:15:56.765993 containerd[2011]: time="2026-03-14T00:15:56.765768701Z" level=info msg="StartContainer for \"7c24c5640fa51594604aa2aab8ee71cebdddc3c366741a8b7596d343100096ae\"" Mar 14 00:15:56.798790 sshd[6261]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:56.832560 systemd[1]: run-containerd-runc-k8s.io-42c646b894286cec4e5b9dda0f0bdf5a6736b42126274e6c9e65497112d8a4b8-runc.DYyvbj.mount: Deactivated successfully. Mar 14 00:15:56.836169 systemd[1]: sshd@9-172.31.26.39:22-68.220.241.50:34052.service: Deactivated successfully. Mar 14 00:15:56.846435 systemd[1]: session-10.scope: Deactivated successfully. Mar 14 00:15:56.853140 systemd-logind[1988]: Session 10 logged out. Waiting for processes to exit. Mar 14 00:15:56.863358 systemd-logind[1988]: Removed session 10. Mar 14 00:15:56.913976 systemd[1]: Started cri-containerd-7c24c5640fa51594604aa2aab8ee71cebdddc3c366741a8b7596d343100096ae.scope - libcontainer container 7c24c5640fa51594604aa2aab8ee71cebdddc3c366741a8b7596d343100096ae. 
Mar 14 00:15:57.014187 containerd[2011]: time="2026-03-14T00:15:57.014108798Z" level=info msg="StartContainer for \"7c24c5640fa51594604aa2aab8ee71cebdddc3c366741a8b7596d343100096ae\" returns successfully" Mar 14 00:15:57.055518 containerd[2011]: time="2026-03-14T00:15:57.055447034Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:57.059237 containerd[2011]: time="2026-03-14T00:15:57.059174210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 14 00:15:57.065103 containerd[2011]: time="2026-03-14T00:15:57.064984838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 336.656606ms" Mar 14 00:15:57.065103 containerd[2011]: time="2026-03-14T00:15:57.065098034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 14 00:15:57.070453 containerd[2011]: time="2026-03-14T00:15:57.069776942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:15:57.077410 containerd[2011]: time="2026-03-14T00:15:57.077276618Z" level=info msg="CreateContainer within sandbox \"09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:15:57.115227 containerd[2011]: time="2026-03-14T00:15:57.114619166Z" level=info msg="CreateContainer within sandbox \"09dd993e94bc74096998a8cce0c954713264f60c915aa1567aec0e6fb3759163\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"2f307b4716de27b0184a9d5d6c3042e632769291ae691c6d90635ef5084f6c97\"" Mar 14 00:15:57.119529 containerd[2011]: time="2026-03-14T00:15:57.119125610Z" level=info msg="StartContainer for \"2f307b4716de27b0184a9d5d6c3042e632769291ae691c6d90635ef5084f6c97\"" Mar 14 00:15:57.125628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866712290.mount: Deactivated successfully. Mar 14 00:15:57.190633 systemd[1]: Started cri-containerd-2f307b4716de27b0184a9d5d6c3042e632769291ae691c6d90635ef5084f6c97.scope - libcontainer container 2f307b4716de27b0184a9d5d6c3042e632769291ae691c6d90635ef5084f6c97. Mar 14 00:15:57.273786 containerd[2011]: time="2026-03-14T00:15:57.273511815Z" level=info msg="StartContainer for \"2f307b4716de27b0184a9d5d6c3042e632769291ae691c6d90635ef5084f6c97\" returns successfully" Mar 14 00:15:57.771100 kubelet[3430]: I0314 00:15:57.770974 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-65b4c4f55c-t5bf4" podStartSLOduration=29.613384107 podStartE2EDuration="44.770948538s" podCreationTimestamp="2026-03-14 00:15:13 +0000 UTC" firstStartedPulling="2026-03-14 00:15:41.570520345 +0000 UTC m=+54.001419689" lastFinishedPulling="2026-03-14 00:15:56.728084788 +0000 UTC m=+69.158984120" observedRunningTime="2026-03-14 00:15:57.768552954 +0000 UTC m=+70.199452298" watchObservedRunningTime="2026-03-14 00:15:57.770948538 +0000 UTC m=+70.201847906" Mar 14 00:15:58.586729 containerd[2011]: time="2026-03-14T00:15:58.586645026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:58.590490 containerd[2011]: time="2026-03-14T00:15:58.590389410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 14 00:15:58.593383 containerd[2011]: time="2026-03-14T00:15:58.593281170Z" level=info msg="ImageCreate event 
name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:58.603511 containerd[2011]: time="2026-03-14T00:15:58.603433698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:58.609448 containerd[2011]: time="2026-03-14T00:15:58.609370206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.539502412s" Mar 14 00:15:58.609448 containerd[2011]: time="2026-03-14T00:15:58.609440118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 14 00:15:58.611606 containerd[2011]: time="2026-03-14T00:15:58.611545122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 14 00:15:58.620219 containerd[2011]: time="2026-03-14T00:15:58.620165754Z" level=info msg="CreateContainer within sandbox \"514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:15:58.669119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206309043.mount: Deactivated successfully. 
Mar 14 00:15:58.672684 containerd[2011]: time="2026-03-14T00:15:58.672251850Z" level=info msg="CreateContainer within sandbox \"514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"cc4ce9d4671c54e2eb4924955e0c3e7b6637e946e2ce6e7d7ec1e1afff0671a9\"" Mar 14 00:15:58.678977 containerd[2011]: time="2026-03-14T00:15:58.676593942Z" level=info msg="StartContainer for \"cc4ce9d4671c54e2eb4924955e0c3e7b6637e946e2ce6e7d7ec1e1afff0671a9\"" Mar 14 00:15:58.784156 systemd[1]: Started cri-containerd-cc4ce9d4671c54e2eb4924955e0c3e7b6637e946e2ce6e7d7ec1e1afff0671a9.scope - libcontainer container cc4ce9d4671c54e2eb4924955e0c3e7b6637e946e2ce6e7d7ec1e1afff0671a9. Mar 14 00:15:58.797404 systemd[1]: run-containerd-runc-k8s.io-cc4ce9d4671c54e2eb4924955e0c3e7b6637e946e2ce6e7d7ec1e1afff0671a9-runc.klRFMa.mount: Deactivated successfully. Mar 14 00:15:59.028376 containerd[2011]: time="2026-03-14T00:15:59.028320700Z" level=info msg="StartContainer for \"cc4ce9d4671c54e2eb4924955e0c3e7b6637e946e2ce6e7d7ec1e1afff0671a9\" returns successfully" Mar 14 00:16:00.068857 kubelet[3430]: I0314 00:16:00.068710 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-65b4c4f55c-x5m6q" podStartSLOduration=32.191204163 podStartE2EDuration="47.068686169s" podCreationTimestamp="2026-03-14 00:15:13 +0000 UTC" firstStartedPulling="2026-03-14 00:15:42.189591504 +0000 UTC m=+54.620490836" lastFinishedPulling="2026-03-14 00:15:57.067073426 +0000 UTC m=+69.497972842" observedRunningTime="2026-03-14 00:15:57.81645639 +0000 UTC m=+70.247355782" watchObservedRunningTime="2026-03-14 00:16:00.068686169 +0000 UTC m=+72.499585513" Mar 14 00:16:00.400896 containerd[2011]: time="2026-03-14T00:16:00.400645087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:00.406020 
containerd[2011]: time="2026-03-14T00:16:00.405946555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 14 00:16:00.408326 containerd[2011]: time="2026-03-14T00:16:00.408263239Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:00.417477 containerd[2011]: time="2026-03-14T00:16:00.416975983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:00.420091 containerd[2011]: time="2026-03-14T00:16:00.419272207Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.807658145s" Mar 14 00:16:00.420091 containerd[2011]: time="2026-03-14T00:16:00.419339827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 14 00:16:00.424966 containerd[2011]: time="2026-03-14T00:16:00.424914271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 14 00:16:00.445313 containerd[2011]: time="2026-03-14T00:16:00.444806839Z" level=info msg="CreateContainer within sandbox \"0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 14 00:16:00.525704 containerd[2011]: 
time="2026-03-14T00:16:00.523545607Z" level=info msg="CreateContainer within sandbox \"0067191b53c4d592f7e192078291bce9570a8c0e38cdda072bdb9f14d8ac0397\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ebc06359c8a8c1071c4d97f316c49774603e9228c4e0331895b4bd13ded79114\"" Mar 14 00:16:00.527116 containerd[2011]: time="2026-03-14T00:16:00.526353019Z" level=info msg="StartContainer for \"ebc06359c8a8c1071c4d97f316c49774603e9228c4e0331895b4bd13ded79114\"" Mar 14 00:16:00.640442 systemd[1]: run-containerd-runc-k8s.io-ebc06359c8a8c1071c4d97f316c49774603e9228c4e0331895b4bd13ded79114-runc.ax5Ul3.mount: Deactivated successfully. Mar 14 00:16:00.673519 systemd[1]: Started cri-containerd-ebc06359c8a8c1071c4d97f316c49774603e9228c4e0331895b4bd13ded79114.scope - libcontainer container ebc06359c8a8c1071c4d97f316c49774603e9228c4e0331895b4bd13ded79114. Mar 14 00:16:00.921522 containerd[2011]: time="2026-03-14T00:16:00.920427609Z" level=info msg="StartContainer for \"ebc06359c8a8c1071c4d97f316c49774603e9228c4e0331895b4bd13ded79114\" returns successfully" Mar 14 00:16:01.070155 kubelet[3430]: I0314 00:16:01.070081 3430 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 14 00:16:01.070155 kubelet[3430]: I0314 00:16:01.070160 3430 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 14 00:16:01.902347 systemd[1]: Started sshd@10-172.31.26.39:22-68.220.241.50:34060.service - OpenSSH per-connection server daemon (68.220.241.50:34060). Mar 14 00:16:02.201240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2460478610.mount: Deactivated successfully. 
Mar 14 00:16:02.220410 containerd[2011]: time="2026-03-14T00:16:02.220327508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:02.222133 containerd[2011]: time="2026-03-14T00:16:02.222069620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 14 00:16:02.224851 containerd[2011]: time="2026-03-14T00:16:02.223076300Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:02.228299 containerd[2011]: time="2026-03-14T00:16:02.228185252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:02.230550 containerd[2011]: time="2026-03-14T00:16:02.230472224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.805245797s" Mar 14 00:16:02.230684 containerd[2011]: time="2026-03-14T00:16:02.230562656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 14 00:16:02.238979 containerd[2011]: time="2026-03-14T00:16:02.238919288Z" level=info msg="CreateContainer within sandbox \"514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:16:02.258063 
containerd[2011]: time="2026-03-14T00:16:02.257953448Z" level=info msg="CreateContainer within sandbox \"514ba68b85499d8b5fa2f64f0bbc0344a623d6bea65676bf90b87b46fbdef6a3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0ac112cf5ec31204fc22799ec2e27ec68d0ef760432ab80e43a1cf35aa68a793\"" Mar 14 00:16:02.259619 containerd[2011]: time="2026-03-14T00:16:02.259214900Z" level=info msg="StartContainer for \"0ac112cf5ec31204fc22799ec2e27ec68d0ef760432ab80e43a1cf35aa68a793\"" Mar 14 00:16:02.322171 systemd[1]: Started cri-containerd-0ac112cf5ec31204fc22799ec2e27ec68d0ef760432ab80e43a1cf35aa68a793.scope - libcontainer container 0ac112cf5ec31204fc22799ec2e27ec68d0ef760432ab80e43a1cf35aa68a793. Mar 14 00:16:02.389107 containerd[2011]: time="2026-03-14T00:16:02.389040453Z" level=info msg="StartContainer for \"0ac112cf5ec31204fc22799ec2e27ec68d0ef760432ab80e43a1cf35aa68a793\" returns successfully" Mar 14 00:16:02.458968 sshd[6499]: Accepted publickey for core from 68.220.241.50 port 34060 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:16:02.462180 sshd[6499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:02.471582 systemd-logind[1988]: New session 11 of user core. Mar 14 00:16:02.479130 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 14 00:16:02.885874 kubelet[3430]: I0314 00:16:02.884078 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7b9d4484f9-xvjs4" podStartSLOduration=3.99042796 podStartE2EDuration="23.884055323s" podCreationTimestamp="2026-03-14 00:15:39 +0000 UTC" firstStartedPulling="2026-03-14 00:15:42.339049165 +0000 UTC m=+54.769948521" lastFinishedPulling="2026-03-14 00:16:02.232676552 +0000 UTC m=+74.663575884" observedRunningTime="2026-03-14 00:16:02.882735815 +0000 UTC m=+75.313635171" watchObservedRunningTime="2026-03-14 00:16:02.884055323 +0000 UTC m=+75.314954667" Mar 14 00:16:02.885874 kubelet[3430]: I0314 00:16:02.884378 3430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4xhrv" podStartSLOduration=26.112985305 podStartE2EDuration="45.884366243s" podCreationTimestamp="2026-03-14 00:15:17 +0000 UTC" firstStartedPulling="2026-03-14 00:15:40.652413289 +0000 UTC m=+53.083312633" lastFinishedPulling="2026-03-14 00:16:00.423794227 +0000 UTC m=+72.854693571" observedRunningTime="2026-03-14 00:16:01.872744374 +0000 UTC m=+74.303643730" watchObservedRunningTime="2026-03-14 00:16:02.884366243 +0000 UTC m=+75.315265587" Mar 14 00:16:03.011223 sshd[6499]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:03.017169 systemd-logind[1988]: Session 11 logged out. Waiting for processes to exit. Mar 14 00:16:03.017731 systemd[1]: sshd@10-172.31.26.39:22-68.220.241.50:34060.service: Deactivated successfully. Mar 14 00:16:03.024344 systemd[1]: session-11.scope: Deactivated successfully. Mar 14 00:16:03.032089 systemd-logind[1988]: Removed session 11. Mar 14 00:16:03.115403 systemd[1]: Started sshd@11-172.31.26.39:22-68.220.241.50:36234.service - OpenSSH per-connection server daemon (68.220.241.50:36234). 
Mar 14 00:16:03.668822 sshd[6566]: Accepted publickey for core from 68.220.241.50 port 36234 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:03.672263 sshd[6566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:03.686953 systemd-logind[1988]: New session 12 of user core.
Mar 14 00:16:03.696163 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:16:04.324736 sshd[6566]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:04.331694 systemd[1]: sshd@11-172.31.26.39:22-68.220.241.50:36234.service: Deactivated successfully.
Mar 14 00:16:04.338355 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:16:04.340761 systemd-logind[1988]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:16:04.343408 systemd-logind[1988]: Removed session 12.
Mar 14 00:16:04.416452 systemd[1]: Started sshd@12-172.31.26.39:22-68.220.241.50:36244.service - OpenSSH per-connection server daemon (68.220.241.50:36244).
Mar 14 00:16:04.931147 sshd[6586]: Accepted publickey for core from 68.220.241.50 port 36244 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:04.936493 sshd[6586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:04.950556 systemd-logind[1988]: New session 13 of user core.
Mar 14 00:16:04.958157 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:16:05.431274 sshd[6586]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:05.439066 systemd[1]: sshd@12-172.31.26.39:22-68.220.241.50:36244.service: Deactivated successfully.
Mar 14 00:16:05.444056 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:16:05.446189 systemd-logind[1988]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:16:05.449175 systemd-logind[1988]: Removed session 13.
Mar 14 00:16:10.533664 systemd[1]: Started sshd@13-172.31.26.39:22-68.220.241.50:36252.service - OpenSSH per-connection server daemon (68.220.241.50:36252).
Mar 14 00:16:11.045185 sshd[6634]: Accepted publickey for core from 68.220.241.50 port 36252 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:11.048081 sshd[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:11.056819 systemd-logind[1988]: New session 14 of user core.
Mar 14 00:16:11.065124 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:16:11.526207 sshd[6634]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:11.533359 systemd-logind[1988]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:16:11.535144 systemd[1]: sshd@13-172.31.26.39:22-68.220.241.50:36252.service: Deactivated successfully.
Mar 14 00:16:11.540615 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:16:11.543262 systemd-logind[1988]: Removed session 14.
Mar 14 00:16:11.623550 systemd[1]: Started sshd@14-172.31.26.39:22-68.220.241.50:36262.service - OpenSSH per-connection server daemon (68.220.241.50:36262).
Mar 14 00:16:12.134239 sshd[6647]: Accepted publickey for core from 68.220.241.50 port 36262 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:12.137196 sshd[6647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:12.146393 systemd-logind[1988]: New session 15 of user core.
Mar 14 00:16:12.153147 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:16:12.969131 sshd[6647]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:12.976459 systemd[1]: sshd@14-172.31.26.39:22-68.220.241.50:36262.service: Deactivated successfully.
Mar 14 00:16:12.983538 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:16:12.986129 systemd-logind[1988]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:16:12.989196 systemd-logind[1988]: Removed session 15.
Mar 14 00:16:13.064460 systemd[1]: Started sshd@15-172.31.26.39:22-68.220.241.50:55310.service - OpenSSH per-connection server daemon (68.220.241.50:55310).
Mar 14 00:16:13.578607 sshd[6658]: Accepted publickey for core from 68.220.241.50 port 55310 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:13.581998 sshd[6658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:13.591287 systemd-logind[1988]: New session 16 of user core.
Mar 14 00:16:13.603178 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:16:14.867481 sshd[6658]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:14.875661 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:16:14.878616 systemd[1]: sshd@15-172.31.26.39:22-68.220.241.50:55310.service: Deactivated successfully.
Mar 14 00:16:14.891946 systemd-logind[1988]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:16:14.895541 systemd-logind[1988]: Removed session 16.
Mar 14 00:16:14.974449 systemd[1]: Started sshd@16-172.31.26.39:22-68.220.241.50:55318.service - OpenSSH per-connection server daemon (68.220.241.50:55318).
Mar 14 00:16:15.518672 sshd[6683]: Accepted publickey for core from 68.220.241.50 port 55318 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:15.521549 sshd[6683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:15.529395 systemd-logind[1988]: New session 17 of user core.
Mar 14 00:16:15.542128 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:16:16.291140 sshd[6683]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:16.298754 systemd[1]: sshd@16-172.31.26.39:22-68.220.241.50:55318.service: Deactivated successfully.
Mar 14 00:16:16.304275 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:16:16.307760 systemd-logind[1988]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:16:16.309979 systemd-logind[1988]: Removed session 17.
Mar 14 00:16:16.377347 systemd[1]: Started sshd@17-172.31.26.39:22-68.220.241.50:55324.service - OpenSSH per-connection server daemon (68.220.241.50:55324).
Mar 14 00:16:16.884159 sshd[6695]: Accepted publickey for core from 68.220.241.50 port 55324 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:16.885969 sshd[6695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:16.895636 systemd-logind[1988]: New session 18 of user core.
Mar 14 00:16:16.903119 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:16:17.355396 sshd[6695]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:17.361771 systemd[1]: sshd@17-172.31.26.39:22-68.220.241.50:55324.service: Deactivated successfully.
Mar 14 00:16:17.366460 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:16:17.368171 systemd-logind[1988]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:16:17.370393 systemd-logind[1988]: Removed session 18.
Mar 14 00:16:22.467376 systemd[1]: Started sshd@18-172.31.26.39:22-68.220.241.50:46894.service - OpenSSH per-connection server daemon (68.220.241.50:46894).
Mar 14 00:16:23.010396 sshd[6732]: Accepted publickey for core from 68.220.241.50 port 46894 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:23.012971 sshd[6732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:23.027371 systemd-logind[1988]: New session 19 of user core.
Mar 14 00:16:23.032986 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:16:23.507749 sshd[6732]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:23.515432 systemd-logind[1988]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:16:23.515866 systemd[1]: sshd@18-172.31.26.39:22-68.220.241.50:46894.service: Deactivated successfully.
Mar 14 00:16:23.520520 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:16:23.522721 systemd-logind[1988]: Removed session 19.
Mar 14 00:16:28.599475 systemd[1]: Started sshd@19-172.31.26.39:22-68.220.241.50:46898.service - OpenSSH per-connection server daemon (68.220.241.50:46898).
Mar 14 00:16:29.106881 sshd[6785]: Accepted publickey for core from 68.220.241.50 port 46898 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:29.108642 sshd[6785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:29.117465 systemd-logind[1988]: New session 20 of user core.
Mar 14 00:16:29.124097 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:16:29.587133 sshd[6785]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:29.593857 systemd[1]: sshd@19-172.31.26.39:22-68.220.241.50:46898.service: Deactivated successfully.
Mar 14 00:16:29.599590 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:16:29.602313 systemd-logind[1988]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:16:29.604704 systemd-logind[1988]: Removed session 20.
Mar 14 00:16:34.689452 systemd[1]: Started sshd@20-172.31.26.39:22-68.220.241.50:55612.service - OpenSSH per-connection server daemon (68.220.241.50:55612).
Mar 14 00:16:35.194148 sshd[6798]: Accepted publickey for core from 68.220.241.50 port 55612 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:35.197011 sshd[6798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:35.206927 systemd-logind[1988]: New session 21 of user core.
Mar 14 00:16:35.212145 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:16:35.679080 sshd[6798]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:35.687190 systemd-logind[1988]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:16:35.688466 systemd[1]: sshd@20-172.31.26.39:22-68.220.241.50:55612.service: Deactivated successfully.
Mar 14 00:16:35.694009 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:16:35.698948 systemd-logind[1988]: Removed session 21.
Mar 14 00:16:40.778001 systemd[1]: Started sshd@21-172.31.26.39:22-68.220.241.50:55628.service - OpenSSH per-connection server daemon (68.220.241.50:55628).
Mar 14 00:16:41.303880 sshd[6852]: Accepted publickey for core from 68.220.241.50 port 55628 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:41.306942 sshd[6852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:41.319941 systemd-logind[1988]: New session 22 of user core.
Mar 14 00:16:41.329158 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:16:41.827669 sshd[6852]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:41.836744 systemd-logind[1988]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:16:41.837573 systemd[1]: sshd@21-172.31.26.39:22-68.220.241.50:55628.service: Deactivated successfully.
Mar 14 00:16:41.848667 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:16:41.857852 systemd-logind[1988]: Removed session 22.
Mar 14 00:16:46.923365 systemd[1]: Started sshd@22-172.31.26.39:22-68.220.241.50:60156.service - OpenSSH per-connection server daemon (68.220.241.50:60156).
Mar 14 00:16:47.439563 sshd[6864]: Accepted publickey for core from 68.220.241.50 port 60156 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:47.443351 sshd[6864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:47.451539 systemd-logind[1988]: New session 23 of user core.
Mar 14 00:16:47.462075 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:16:47.904628 sshd[6864]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:47.916617 systemd-logind[1988]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:16:47.917347 systemd[1]: sshd@22-172.31.26.39:22-68.220.241.50:60156.service: Deactivated successfully.
Mar 14 00:16:47.923927 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:16:47.925956 systemd-logind[1988]: Removed session 23.
Mar 14 00:17:03.473071 systemd[1]: cri-containerd-7abc7397f798da1ec9c4cdc9f7341859406905abda1343d94e0e94126c9577c0.scope: Deactivated successfully.
Mar 14 00:17:03.473541 systemd[1]: cri-containerd-7abc7397f798da1ec9c4cdc9f7341859406905abda1343d94e0e94126c9577c0.scope: Consumed 25.946s CPU time.
Mar 14 00:17:03.514295 containerd[2011]: time="2026-03-14T00:17:03.514069712Z" level=info msg="shim disconnected" id=7abc7397f798da1ec9c4cdc9f7341859406905abda1343d94e0e94126c9577c0 namespace=k8s.io
Mar 14 00:17:03.514295 containerd[2011]: time="2026-03-14T00:17:03.514154660Z" level=warning msg="cleaning up after shim disconnected" id=7abc7397f798da1ec9c4cdc9f7341859406905abda1343d94e0e94126c9577c0 namespace=k8s.io
Mar 14 00:17:03.514295 containerd[2011]: time="2026-03-14T00:17:03.514175276Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:03.522049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7abc7397f798da1ec9c4cdc9f7341859406905abda1343d94e0e94126c9577c0-rootfs.mount: Deactivated successfully.
Mar 14 00:17:03.542688 containerd[2011]: time="2026-03-14T00:17:03.542524028Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:17:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:17:04.065671 kubelet[3430]: I0314 00:17:04.065571 3430 scope.go:117] "RemoveContainer" containerID="7abc7397f798da1ec9c4cdc9f7341859406905abda1343d94e0e94126c9577c0"
Mar 14 00:17:04.073411 containerd[2011]: time="2026-03-14T00:17:04.073311823Z" level=info msg="CreateContainer within sandbox \"5d16732c52a5449d80b62a4c983d04b585f139be73a5c56da762c6f5392b5c8a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 14 00:17:04.091250 containerd[2011]: time="2026-03-14T00:17:04.091173571Z" level=info msg="CreateContainer within sandbox \"5d16732c52a5449d80b62a4c983d04b585f139be73a5c56da762c6f5392b5c8a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e68bb53134602f81f8b25125960a6726bef9ace441c8f0d544a162c2189b847d\""
Mar 14 00:17:04.093858 containerd[2011]: time="2026-03-14T00:17:04.092431819Z" level=info msg="StartContainer for \"e68bb53134602f81f8b25125960a6726bef9ace441c8f0d544a162c2189b847d\""
Mar 14 00:17:04.157285 systemd[1]: Started cri-containerd-e68bb53134602f81f8b25125960a6726bef9ace441c8f0d544a162c2189b847d.scope - libcontainer container e68bb53134602f81f8b25125960a6726bef9ace441c8f0d544a162c2189b847d.
Mar 14 00:17:04.207376 containerd[2011]: time="2026-03-14T00:17:04.207310796Z" level=info msg="StartContainer for \"e68bb53134602f81f8b25125960a6726bef9ace441c8f0d544a162c2189b847d\" returns successfully"
Mar 14 00:17:05.142664 systemd[1]: cri-containerd-34c1e0d8c9f5dea538cd329847d41b302487e9371999b8d1b04fa9956b2dadcd.scope: Deactivated successfully.
Mar 14 00:17:05.146700 systemd[1]: cri-containerd-34c1e0d8c9f5dea538cd329847d41b302487e9371999b8d1b04fa9956b2dadcd.scope: Consumed 4.488s CPU time, 18.1M memory peak, 0B memory swap peak.
Mar 14 00:17:05.197000 containerd[2011]: time="2026-03-14T00:17:05.196411113Z" level=info msg="shim disconnected" id=34c1e0d8c9f5dea538cd329847d41b302487e9371999b8d1b04fa9956b2dadcd namespace=k8s.io
Mar 14 00:17:05.198556 containerd[2011]: time="2026-03-14T00:17:05.198274809Z" level=warning msg="cleaning up after shim disconnected" id=34c1e0d8c9f5dea538cd329847d41b302487e9371999b8d1b04fa9956b2dadcd namespace=k8s.io
Mar 14 00:17:05.198556 containerd[2011]: time="2026-03-14T00:17:05.198343245Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:05.203137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34c1e0d8c9f5dea538cd329847d41b302487e9371999b8d1b04fa9956b2dadcd-rootfs.mount: Deactivated successfully.
Mar 14 00:17:06.084460 kubelet[3430]: I0314 00:17:06.084095 3430 scope.go:117] "RemoveContainer" containerID="34c1e0d8c9f5dea538cd329847d41b302487e9371999b8d1b04fa9956b2dadcd"
Mar 14 00:17:06.088627 containerd[2011]: time="2026-03-14T00:17:06.088554945Z" level=info msg="CreateContainer within sandbox \"3698f47381b6873df810b8895352917d0a04b4fef6bcbc4d6c5d21683e7ffdbe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 14 00:17:06.111439 containerd[2011]: time="2026-03-14T00:17:06.111170157Z" level=info msg="CreateContainer within sandbox \"3698f47381b6873df810b8895352917d0a04b4fef6bcbc4d6c5d21683e7ffdbe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d4c7498d19b85013b410e41dce0f095c3f9153ad457bfafe98e35efbb740522f\""
Mar 14 00:17:06.112692 containerd[2011]: time="2026-03-14T00:17:06.112611501Z" level=info msg="StartContainer for \"d4c7498d19b85013b410e41dce0f095c3f9153ad457bfafe98e35efbb740522f\""
Mar 14 00:17:06.170149 systemd[1]: Started cri-containerd-d4c7498d19b85013b410e41dce0f095c3f9153ad457bfafe98e35efbb740522f.scope - libcontainer container d4c7498d19b85013b410e41dce0f095c3f9153ad457bfafe98e35efbb740522f.
Mar 14 00:17:06.244370 containerd[2011]: time="2026-03-14T00:17:06.244195750Z" level=info msg="StartContainer for \"d4c7498d19b85013b410e41dce0f095c3f9153ad457bfafe98e35efbb740522f\" returns successfully"
Mar 14 00:17:09.364373 systemd[1]: cri-containerd-1e35d4e6564137ae7bd2dfc51b1c53e9c24c8d64cf7a2a994b278746671b25b9.scope: Deactivated successfully.
Mar 14 00:17:09.365022 systemd[1]: cri-containerd-1e35d4e6564137ae7bd2dfc51b1c53e9c24c8d64cf7a2a994b278746671b25b9.scope: Consumed 4.810s CPU time, 16.0M memory peak, 0B memory swap peak.
Mar 14 00:17:09.422761 containerd[2011]: time="2026-03-14T00:17:09.422675954Z" level=info msg="shim disconnected" id=1e35d4e6564137ae7bd2dfc51b1c53e9c24c8d64cf7a2a994b278746671b25b9 namespace=k8s.io
Mar 14 00:17:09.426346 containerd[2011]: time="2026-03-14T00:17:09.423740954Z" level=warning msg="cleaning up after shim disconnected" id=1e35d4e6564137ae7bd2dfc51b1c53e9c24c8d64cf7a2a994b278746671b25b9 namespace=k8s.io
Mar 14 00:17:09.426346 containerd[2011]: time="2026-03-14T00:17:09.423864014Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:09.427304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e35d4e6564137ae7bd2dfc51b1c53e9c24c8d64cf7a2a994b278746671b25b9-rootfs.mount: Deactivated successfully.
Mar 14 00:17:10.110184 kubelet[3430]: I0314 00:17:10.109790 3430 scope.go:117] "RemoveContainer" containerID="1e35d4e6564137ae7bd2dfc51b1c53e9c24c8d64cf7a2a994b278746671b25b9"
Mar 14 00:17:10.117785 containerd[2011]: time="2026-03-14T00:17:10.116896861Z" level=info msg="CreateContainer within sandbox \"30d514da4094fbb7be2717eda97d6003ddd8db84d4edb428635896628d619d29\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 14 00:17:10.152469 containerd[2011]: time="2026-03-14T00:17:10.152278477Z" level=info msg="CreateContainer within sandbox \"30d514da4094fbb7be2717eda97d6003ddd8db84d4edb428635896628d619d29\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7bf3cfb79d2d81db1ecbb6de1f9dfb432b6f6ff80bd45abba7405f8762da687e\""
Mar 14 00:17:10.154976 containerd[2011]: time="2026-03-14T00:17:10.153753289Z" level=info msg="StartContainer for \"7bf3cfb79d2d81db1ecbb6de1f9dfb432b6f6ff80bd45abba7405f8762da687e\""
Mar 14 00:17:10.213746 systemd[1]: Started cri-containerd-7bf3cfb79d2d81db1ecbb6de1f9dfb432b6f6ff80bd45abba7405f8762da687e.scope - libcontainer container 7bf3cfb79d2d81db1ecbb6de1f9dfb432b6f6ff80bd45abba7405f8762da687e.
Mar 14 00:17:10.293320 containerd[2011]: time="2026-03-14T00:17:10.292925150Z" level=info msg="StartContainer for \"7bf3cfb79d2d81db1ecbb6de1f9dfb432b6f6ff80bd45abba7405f8762da687e\" returns successfully"
Mar 14 00:17:10.661029 kubelet[3430]: E0314 00:17:10.660591 3430 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-39?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"