Apr 24 23:36:24.237950 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 24 23:36:24.238000 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Apr 24 22:19:35 -00 2026
Apr 24 23:36:24.238029 kernel: KASLR disabled due to lack of seed
Apr 24 23:36:24.238048 kernel: efi: EFI v2.7 by EDK II
Apr 24 23:36:24.238065 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Apr 24 23:36:24.238082 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:36:24.238102 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 24 23:36:24.238157 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 24 23:36:24.238180 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 24 23:36:24.238197 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 24 23:36:24.238224 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 24 23:36:24.238241 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 24 23:36:24.238257 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 24 23:36:24.238274 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 24 23:36:24.238294 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 24 23:36:24.238315 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 24 23:36:24.238334 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 24 23:36:24.238352 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 24 23:36:24.238369 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 24 23:36:24.238387 kernel: printk: bootconsole [uart0] enabled
Apr 24 23:36:24.238404 kernel: NUMA: Failed to initialise from firmware
Apr 24 23:36:24.238422 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 24 23:36:24.238440 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 24 23:36:24.238457 kernel: Zone ranges:
Apr 24 23:36:24.238474 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 24 23:36:24.238491 kernel: DMA32 empty
Apr 24 23:36:24.238513 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 24 23:36:24.238531 kernel: Movable zone start for each node
Apr 24 23:36:24.238548 kernel: Early memory node ranges
Apr 24 23:36:24.238565 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 24 23:36:24.238582 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 24 23:36:24.238599 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 24 23:36:24.238618 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 24 23:36:24.238636 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 24 23:36:24.238653 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 24 23:36:24.238670 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 24 23:36:24.238687 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 24 23:36:24.238704 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 24 23:36:24.238729 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 24 23:36:24.238747 kernel: psci: probing for conduit method from ACPI.
Apr 24 23:36:24.238772 kernel: psci: PSCIv1.0 detected in firmware.
Apr 24 23:36:24.238791 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 24 23:36:24.238809 kernel: psci: Trusted OS migration not required
Apr 24 23:36:24.238831 kernel: psci: SMC Calling Convention v1.1
Apr 24 23:36:24.238850 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Apr 24 23:36:24.238868 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 24 23:36:24.238886 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 24 23:36:24.238905 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 24 23:36:24.238923 kernel: Detected PIPT I-cache on CPU0
Apr 24 23:36:24.238941 kernel: CPU features: detected: GIC system register CPU interface
Apr 24 23:36:24.238959 kernel: CPU features: detected: Spectre-v2
Apr 24 23:36:24.238977 kernel: CPU features: detected: Spectre-v3a
Apr 24 23:36:24.238995 kernel: CPU features: detected: Spectre-BHB
Apr 24 23:36:24.239013 kernel: CPU features: detected: ARM erratum 1742098
Apr 24 23:36:24.239036 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 24 23:36:24.239055 kernel: alternatives: applying boot alternatives
Apr 24 23:36:24.239075 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 24 23:36:24.239095 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 23:36:24.239994 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 23:36:24.240168 kernel: Fallback order for Node 0: 0
Apr 24 23:36:24.240192 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 24 23:36:24.240213 kernel: Policy zone: Normal
Apr 24 23:36:24.240234 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:36:24.240255 kernel: software IO TLB: area num 2.
Apr 24 23:36:24.240273 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 24 23:36:24.240308 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Apr 24 23:36:24.240329 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 24 23:36:24.240347 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:36:24.240367 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:36:24.240386 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 23:36:24.240405 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:36:24.240424 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:36:24.240443 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:36:24.240461 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 24 23:36:24.240480 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 24 23:36:24.240499 kernel: GICv3: 96 SPIs implemented
Apr 24 23:36:24.240524 kernel: GICv3: 0 Extended SPIs implemented
Apr 24 23:36:24.240543 kernel: Root IRQ handler: gic_handle_irq
Apr 24 23:36:24.240561 kernel: GICv3: GICv3 features: 16 PPIs
Apr 24 23:36:24.240579 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 24 23:36:24.240597 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 24 23:36:24.240616 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 24 23:36:24.240636 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 24 23:36:24.240655 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 24 23:36:24.240672 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 24 23:36:24.240690 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 24 23:36:24.240709 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:36:24.240727 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 24 23:36:24.240751 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 24 23:36:24.240769 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 24 23:36:24.240788 kernel: Console: colour dummy device 80x25
Apr 24 23:36:24.240806 kernel: printk: console [tty1] enabled
Apr 24 23:36:24.240824 kernel: ACPI: Core revision 20230628
Apr 24 23:36:24.240843 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 24 23:36:24.240861 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:36:24.240880 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:36:24.240898 kernel: landlock: Up and running.
Apr 24 23:36:24.240924 kernel: SELinux: Initializing.
Apr 24 23:36:24.240972 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:36:24.240992 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:36:24.241011 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:36:24.241030 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:36:24.241049 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:36:24.241084 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:36:24.241105 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 24 23:36:24.241166 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 24 23:36:24.241236 kernel: Remapping and enabling EFI services.
Apr 24 23:36:24.241258 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:36:24.241277 kernel: Detected PIPT I-cache on CPU1
Apr 24 23:36:24.241295 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 24 23:36:24.241314 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 24 23:36:24.241332 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 24 23:36:24.241350 kernel: smp: Brought up 1 node, 2 CPUs
Apr 24 23:36:24.241368 kernel: SMP: Total of 2 processors activated.
Apr 24 23:36:24.241386 kernel: CPU features: detected: 32-bit EL0 Support
Apr 24 23:36:24.241412 kernel: CPU features: detected: 32-bit EL1 Support
Apr 24 23:36:24.241432 kernel: CPU features: detected: CRC32 instructions
Apr 24 23:36:24.241451 kernel: CPU: All CPU(s) started at EL1
Apr 24 23:36:24.241481 kernel: alternatives: applying system-wide alternatives
Apr 24 23:36:24.241504 kernel: devtmpfs: initialized
Apr 24 23:36:24.241524 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:36:24.241543 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 24 23:36:24.241562 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:36:24.241581 kernel: SMBIOS 3.0.0 present.
Apr 24 23:36:24.241604 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 24 23:36:24.241623 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:36:24.241642 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 24 23:36:24.241662 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 24 23:36:24.241681 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 24 23:36:24.241700 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:36:24.241719 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1
Apr 24 23:36:24.241738 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:36:24.241762 kernel: cpuidle: using governor menu
Apr 24 23:36:24.241781 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 24 23:36:24.241800 kernel: ASID allocator initialised with 65536 entries
Apr 24 23:36:24.241819 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:36:24.241839 kernel: Serial: AMBA PL011 UART driver
Apr 24 23:36:24.241859 kernel: Modules: 17488 pages in range for non-PLT usage
Apr 24 23:36:24.241879 kernel: Modules: 509008 pages in range for PLT usage
Apr 24 23:36:24.241898 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:36:24.241917 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:36:24.241942 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 24 23:36:24.241962 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 24 23:36:24.241981 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:36:24.242001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:36:24.242020 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 24 23:36:24.242039 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 24 23:36:24.242058 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:36:24.242079 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:36:24.242099 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:36:24.244806 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:36:24.244841 kernel: ACPI: Interpreter enabled
Apr 24 23:36:24.244860 kernel: ACPI: Using GIC for interrupt routing
Apr 24 23:36:24.244880 kernel: ACPI: MCFG table detected, 1 entries
Apr 24 23:36:24.244900 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Apr 24 23:36:24.245404 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 23:36:24.245711 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 24 23:36:24.245972 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 24 23:36:24.246396 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Apr 24 23:36:24.246667 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Apr 24 23:36:24.246703 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 24 23:36:24.246725 kernel: acpiphp: Slot [1] registered
Apr 24 23:36:24.246746 kernel: acpiphp: Slot [2] registered
Apr 24 23:36:24.246767 kernel: acpiphp: Slot [3] registered
Apr 24 23:36:24.246790 kernel: acpiphp: Slot [4] registered
Apr 24 23:36:24.246812 kernel: acpiphp: Slot [5] registered
Apr 24 23:36:24.246846 kernel: acpiphp: Slot [6] registered
Apr 24 23:36:24.246867 kernel: acpiphp: Slot [7] registered
Apr 24 23:36:24.246887 kernel: acpiphp: Slot [8] registered
Apr 24 23:36:24.246907 kernel: acpiphp: Slot [9] registered
Apr 24 23:36:24.246927 kernel: acpiphp: Slot [10] registered
Apr 24 23:36:24.246947 kernel: acpiphp: Slot [11] registered
Apr 24 23:36:24.246967 kernel: acpiphp: Slot [12] registered
Apr 24 23:36:24.246985 kernel: acpiphp: Slot [13] registered
Apr 24 23:36:24.247005 kernel: acpiphp: Slot [14] registered
Apr 24 23:36:24.247026 kernel: acpiphp: Slot [15] registered
Apr 24 23:36:24.247054 kernel: acpiphp: Slot [16] registered
Apr 24 23:36:24.247074 kernel: acpiphp: Slot [17] registered
Apr 24 23:36:24.247094 kernel: acpiphp: Slot [18] registered
Apr 24 23:36:24.257161 kernel: acpiphp: Slot [19] registered
Apr 24 23:36:24.257225 kernel: acpiphp: Slot [20] registered
Apr 24 23:36:24.257247 kernel: acpiphp: Slot [21] registered
Apr 24 23:36:24.257267 kernel: acpiphp: Slot [22] registered
Apr 24 23:36:24.257288 kernel: acpiphp: Slot [23] registered
Apr 24 23:36:24.257333 kernel: acpiphp: Slot [24] registered
Apr 24 23:36:24.257365 kernel: acpiphp: Slot [25] registered
Apr 24 23:36:24.257386 kernel: acpiphp: Slot [26] registered
Apr 24 23:36:24.257405 kernel: acpiphp: Slot [27] registered
Apr 24 23:36:24.257424 kernel: acpiphp: Slot [28] registered
Apr 24 23:36:24.257444 kernel: acpiphp: Slot [29] registered
Apr 24 23:36:24.257463 kernel: acpiphp: Slot [30] registered
Apr 24 23:36:24.257482 kernel: acpiphp: Slot [31] registered
Apr 24 23:36:24.257502 kernel: PCI host bridge to bus 0000:00
Apr 24 23:36:24.257805 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 24 23:36:24.258041 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 24 23:36:24.258337 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 24 23:36:24.258549 kernel: pci_bus 0000:00: root bus resource [bus 00]
Apr 24 23:36:24.258818 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 24 23:36:24.259081 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 24 23:36:24.259587 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 24 23:36:24.259861 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 24 23:36:24.260098 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 24 23:36:24.260442 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 24 23:36:24.260742 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 24 23:36:24.261037 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 24 23:36:24.263640 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 24 23:36:24.263934 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 24 23:36:24.264248 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 24 23:36:24.264497 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 24 23:36:24.264730 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 24 23:36:24.264996 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 24 23:36:24.265041 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 24 23:36:24.265063 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 24 23:36:24.265088 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 24 23:36:24.265111 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 24 23:36:24.265386 kernel: iommu: Default domain type: Translated
Apr 24 23:36:24.265407 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 24 23:36:24.265427 kernel: efivars: Registered efivars operations
Apr 24 23:36:24.265447 kernel: vgaarb: loaded
Apr 24 23:36:24.265469 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 24 23:36:24.265488 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:36:24.265508 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:36:24.265527 kernel: pnp: PnP ACPI init
Apr 24 23:36:24.265852 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 24 23:36:24.265896 kernel: pnp: PnP ACPI: found 1 devices
Apr 24 23:36:24.265916 kernel: NET: Registered PF_INET protocol family
Apr 24 23:36:24.265936 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:36:24.265955 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 23:36:24.265975 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:36:24.265994 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 23:36:24.266014 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 23:36:24.266033 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 23:36:24.266058 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:36:24.266079 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:36:24.266099 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:36:24.266149 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:36:24.266173 kernel: kvm [1]: HYP mode not available
Apr 24 23:36:24.266193 kernel: Initialise system trusted keyrings
Apr 24 23:36:24.266213 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 23:36:24.266232 kernel: Key type asymmetric registered
Apr 24 23:36:24.266251 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:36:24.266278 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 24 23:36:24.266297 kernel: io scheduler mq-deadline registered
Apr 24 23:36:24.266317 kernel: io scheduler kyber registered
Apr 24 23:36:24.266335 kernel: io scheduler bfq registered
Apr 24 23:36:24.266622 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 24 23:36:24.266661 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 24 23:36:24.266681 kernel: ACPI: button: Power Button [PWRB]
Apr 24 23:36:24.266702 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 24 23:36:24.266722 kernel: ACPI: button: Sleep Button [SLPB]
Apr 24 23:36:24.266751 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:36:24.266772 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 24 23:36:24.267021 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 24 23:36:24.267055 kernel: printk: console [ttyS0] disabled
Apr 24 23:36:24.267076 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 24 23:36:24.267096 kernel: printk: console [ttyS0] enabled
Apr 24 23:36:24.267147 kernel: printk: bootconsole [uart0] disabled
Apr 24 23:36:24.267174 kernel: thunder_xcv, ver 1.0
Apr 24 23:36:24.267193 kernel: thunder_bgx, ver 1.0
Apr 24 23:36:24.267225 kernel: nicpf, ver 1.0
Apr 24 23:36:24.267244 kernel: nicvf, ver 1.0
Apr 24 23:36:24.267514 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 24 23:36:24.267734 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-24T23:36:23 UTC (1777073783)
Apr 24 23:36:24.267762 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 24 23:36:24.267782 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 24 23:36:24.267801 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 24 23:36:24.267820 kernel: watchdog: Hard watchdog permanently disabled
Apr 24 23:36:24.267848 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:36:24.267867 kernel: Segment Routing with IPv6
Apr 24 23:36:24.267886 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:36:24.267905 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:36:24.267924 kernel: Key type dns_resolver registered
Apr 24 23:36:24.267944 kernel: registered taskstats version 1
Apr 24 23:36:24.267962 kernel: Loading compiled-in X.509 certificates
Apr 24 23:36:24.267982 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 96a6e7da7ac9a3ef656057ccd8e13f251b310c24'
Apr 24 23:36:24.268002 kernel: Key type .fscrypt registered
Apr 24 23:36:24.268027 kernel: Key type fscrypt-provisioning registered
Apr 24 23:36:24.268046 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:36:24.268065 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:36:24.268086 kernel: ima: No architecture policies found
Apr 24 23:36:24.268106 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 24 23:36:24.268267 kernel: clk: Disabling unused clocks
Apr 24 23:36:24.268290 kernel: Freeing unused kernel memory: 39424K
Apr 24 23:36:24.268309 kernel: Run /init as init process
Apr 24 23:36:24.268328 kernel: with arguments:
Apr 24 23:36:24.268355 kernel: /init
Apr 24 23:36:24.268374 kernel: with environment:
Apr 24 23:36:24.268393 kernel: HOME=/
Apr 24 23:36:24.268412 kernel: TERM=linux
Apr 24 23:36:24.268436 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:36:24.268461 systemd[1]: Detected virtualization amazon.
Apr 24 23:36:24.268481 systemd[1]: Detected architecture arm64.
Apr 24 23:36:24.268501 systemd[1]: Running in initrd.
Apr 24 23:36:24.268527 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:36:24.268548 systemd[1]: Hostname set to .
Apr 24 23:36:24.268569 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:36:24.268590 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:36:24.268611 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:36:24.268633 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:36:24.268655 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:36:24.268679 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:36:24.268705 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:36:24.268727 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:36:24.268750 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:36:24.268772 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:36:24.268793 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:36:24.268815 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:36:24.268841 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:36:24.268862 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:36:24.268883 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:36:24.268904 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:36:24.268949 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:36:24.268983 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:36:24.269008 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:36:24.269032 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:36:24.269055 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:36:24.269087 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:36:24.269109 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:36:24.269167 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:36:24.269191 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:36:24.269213 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:36:24.269234 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:36:24.269255 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:36:24.269276 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:36:24.269298 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:36:24.269329 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:36:24.269350 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:36:24.269439 systemd-journald[252]: Collecting audit messages is disabled.
Apr 24 23:36:24.269489 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:36:24.269521 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:36:24.272282 systemd-journald[252]: Journal started
Apr 24 23:36:24.272345 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2630b4b5332c9fd1c8f96e65758771) is 8.0M, max 75.3M, 67.3M free.
Apr 24 23:36:24.275446 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:36:24.242921 systemd-modules-load[253]: Inserted module 'overlay'
Apr 24 23:36:24.291399 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:36:24.305074 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:36:24.309216 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:24.312344 kernel: Bridge firewalling registered
Apr 24 23:36:24.312199 systemd-modules-load[253]: Inserted module 'br_netfilter'
Apr 24 23:36:24.328583 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:36:24.338999 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:36:24.345409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:36:24.356393 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:36:24.372527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:36:24.381483 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:36:24.420556 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:36:24.427821 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:36:24.435430 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:36:24.446626 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:36:24.466701 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:36:24.484459 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:36:24.498635 dracut-cmdline[285]: dracut-dracut-053
Apr 24 23:36:24.506079 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 24 23:36:24.565289 systemd-resolved[294]: Positive Trust Anchors:
Apr 24 23:36:24.565332 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:36:24.565398 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:36:24.691172 kernel: SCSI subsystem initialized
Apr 24 23:36:24.698169 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:36:24.712178 kernel: iscsi: registered transport (tcp)
Apr 24 23:36:24.736872 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:36:24.736969 kernel: QLogic iSCSI HBA Driver
Apr 24 23:36:24.808172 kernel: random: crng init done
Apr 24 23:36:24.808453 systemd-resolved[294]: Defaulting to hostname 'linux'.
Apr 24 23:36:24.812662 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:36:24.817484 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:36:24.843209 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:36:24.857414 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:36:24.887170 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:36:24.887248 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:36:24.890162 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:36:24.957177 kernel: raid6: neonx8 gen() 6720 MB/s
Apr 24 23:36:24.974161 kernel: raid6: neonx4 gen() 6515 MB/s
Apr 24 23:36:24.991164 kernel: raid6: neonx2 gen() 5428 MB/s
Apr 24 23:36:25.008158 kernel: raid6: neonx1 gen() 3940 MB/s
Apr 24 23:36:25.025161 kernel: raid6: int64x8 gen() 3798 MB/s
Apr 24 23:36:25.042162 kernel: raid6: int64x4 gen() 3669 MB/s
Apr 24 23:36:25.059163 kernel: raid6: int64x2 gen() 3587 MB/s
Apr 24 23:36:25.077240 kernel: raid6: int64x1 gen() 2733 MB/s
Apr 24 23:36:25.077287 kernel: raid6: using algorithm neonx8 gen() 6720 MB/s
Apr 24 23:36:25.096168 kernel: raid6: .... xor() 4784 MB/s, rmw enabled
Apr 24 23:36:25.096226 kernel: raid6: using neon recovery algorithm
Apr 24 23:36:25.104156 kernel: xor: measuring software checksum speed
Apr 24 23:36:25.106562 kernel: 8regs           : 10256 MB/sec
Apr 24 23:36:25.106598 kernel: 32regs          : 11914 MB/sec
Apr 24 23:36:25.107881 kernel: arm64_neon      : 9568 MB/sec
Apr 24 23:36:25.107925 kernel: xor: using function: 32regs (11914 MB/sec)
Apr 24 23:36:25.194502 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:36:25.213769 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:36:25.225420 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:36:25.271462 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Apr 24 23:36:25.279711 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:36:25.311368 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:36:25.335456 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation
Apr 24 23:36:25.395636 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:36:25.402702 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:36:25.521651 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:36:25.537781 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:36:25.568299 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:36:25.571500 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:36:25.574479 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:36:25.579332 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:36:25.603715 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:36:25.634574 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:36:25.730001 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 24 23:36:25.730086 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 24 23:36:25.733906 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 24 23:36:25.734310 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 24 23:36:25.735683 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:36:25.739771 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:36:25.751995 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:36:25.764244 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 24 23:36:25.764283 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 24 23:36:25.761337 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:36:25.761626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:25.776367 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:36:25.782689 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:87:d9:ca:92:df
Apr 24 23:36:25.783001 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 24 23:36:25.787793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:36:25.798169 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:36:25.798228 kernel: GPT:9289727 != 33554431
Apr 24 23:36:25.800152 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:36:25.803191 kernel: GPT:9289727 != 33554431
Apr 24 23:36:25.803256 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:36:25.805530 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:25.811520 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:36:25.821904 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:25.832363 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:36:25.883927 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:36:25.931157 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (551)
Apr 24 23:36:25.956163 kernel: BTRFS: device fsid 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (520)
Apr 24 23:36:26.009793 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 24 23:36:26.035967 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 24 23:36:26.055695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 24 23:36:26.095003 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 24 23:36:26.097820 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 24 23:36:26.118490 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:36:26.132967 disk-uuid[662]: Primary Header is updated.
Apr 24 23:36:26.132967 disk-uuid[662]: Secondary Entries is updated.
Apr 24 23:36:26.132967 disk-uuid[662]: Secondary Header is updated.
Apr 24 23:36:26.144194 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:26.153181 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:26.163170 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:27.172148 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:27.172820 disk-uuid[663]: The operation has completed successfully.
Apr 24 23:36:27.359844 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:36:27.361077 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:36:27.417387 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:36:27.425735 sh[1005]: Success
Apr 24 23:36:27.452204 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 24 23:36:27.559617 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:36:27.577327 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:36:27.581405 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:36:27.630775 kernel: BTRFS info (device dm-0): first mount of filesystem 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e
Apr 24 23:36:27.630840 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:36:27.630867 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:36:27.632751 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:36:27.634197 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:36:27.720155 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 24 23:36:27.744268 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:36:27.748485 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:36:27.761364 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:36:27.769541 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:36:27.793975 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:27.794049 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:36:27.795792 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:36:27.811159 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:36:27.829204 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:36:27.832779 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:27.842640 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:36:27.854567 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:36:27.958937 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:36:27.982422 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:36:28.036593 systemd-networkd[1199]: lo: Link UP
Apr 24 23:36:28.037094 systemd-networkd[1199]: lo: Gained carrier
Apr 24 23:36:28.040269 systemd-networkd[1199]: Enumeration completed
Apr 24 23:36:28.041272 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:36:28.041458 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:36:28.041465 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:36:28.054192 systemd[1]: Reached target network.target - Network.
Apr 24 23:36:28.057249 systemd-networkd[1199]: eth0: Link UP
Apr 24 23:36:28.057258 systemd-networkd[1199]: eth0: Gained carrier
Apr 24 23:36:28.057275 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:36:28.080224 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.28.13/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 24 23:36:28.261010 ignition[1110]: Ignition 2.19.0
Apr 24 23:36:28.261037 ignition[1110]: Stage: fetch-offline
Apr 24 23:36:28.265467 ignition[1110]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:28.265507 ignition[1110]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:28.270352 ignition[1110]: Ignition finished successfully
Apr 24 23:36:28.273430 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:36:28.284514 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:36:28.312577 ignition[1208]: Ignition 2.19.0
Apr 24 23:36:28.312597 ignition[1208]: Stage: fetch
Apr 24 23:36:28.313767 ignition[1208]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:28.313792 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:28.313951 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:28.345090 ignition[1208]: PUT result: OK
Apr 24 23:36:28.348722 ignition[1208]: parsed url from cmdline: ""
Apr 24 23:36:28.348737 ignition[1208]: no config URL provided
Apr 24 23:36:28.348752 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:36:28.348777 ignition[1208]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:36:28.348809 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:28.353251 ignition[1208]: PUT result: OK
Apr 24 23:36:28.354346 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 24 23:36:28.358027 ignition[1208]: GET result: OK
Apr 24 23:36:28.363459 ignition[1208]: parsing config with SHA512: 8968afeb7e7fb9c5d63f236cc2f177ec307244b72fcd081624816d43958504128643772e31ddf1300680d5696059279b88d306d3975a690e0ac02f845da4101e
Apr 24 23:36:28.371600 unknown[1208]: fetched base config from "system"
Apr 24 23:36:28.371625 unknown[1208]: fetched base config from "system"
Apr 24 23:36:28.371639 unknown[1208]: fetched user config from "aws"
Apr 24 23:36:28.381214 ignition[1208]: fetch: fetch complete
Apr 24 23:36:28.381235 ignition[1208]: fetch: fetch passed
Apr 24 23:36:28.381348 ignition[1208]: Ignition finished successfully
Apr 24 23:36:28.387367 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 23:36:28.403363 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:36:28.429439 ignition[1215]: Ignition 2.19.0
Apr 24 23:36:28.429465 ignition[1215]: Stage: kargs
Apr 24 23:36:28.431373 ignition[1215]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:28.431400 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:28.431638 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:28.437726 ignition[1215]: PUT result: OK
Apr 24 23:36:28.450044 ignition[1215]: kargs: kargs passed
Apr 24 23:36:28.450185 ignition[1215]: Ignition finished successfully
Apr 24 23:36:28.457378 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:36:28.468435 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:36:28.494339 ignition[1221]: Ignition 2.19.0
Apr 24 23:36:28.494366 ignition[1221]: Stage: disks
Apr 24 23:36:28.497035 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:28.497062 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:28.497259 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:28.499268 ignition[1221]: PUT result: OK
Apr 24 23:36:28.509062 ignition[1221]: disks: disks passed
Apr 24 23:36:28.509426 ignition[1221]: Ignition finished successfully
Apr 24 23:36:28.516850 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:36:28.523930 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:36:28.526694 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:36:28.529679 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:36:28.532093 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:36:28.534557 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:36:28.549915 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:36:28.587184 systemd-fsck[1229]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 24 23:36:28.593957 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:36:28.608308 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:36:28.688159 kernel: EXT4-fs (nvme0n1p9): mounted filesystem edaa698b-3baa-4242-8691-64cb9f35f18f r/w with ordered data mode. Quota mode: none.
Apr 24 23:36:28.689105 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:36:28.693402 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:36:28.712296 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:36:28.720495 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:36:28.727507 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 23:36:28.727616 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:36:28.727668 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:36:28.745712 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1248)
Apr 24 23:36:28.750081 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:28.750173 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:36:28.750203 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:36:28.758014 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:36:28.769157 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:36:28.770522 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:36:28.773111 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:36:29.171740 initrd-setup-root[1272]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:36:29.191526 initrd-setup-root[1279]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:36:29.201432 initrd-setup-root[1286]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:36:29.210516 initrd-setup-root[1293]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:36:29.552538 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:36:29.564436 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:36:29.570742 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:36:29.573710 systemd-networkd[1199]: eth0: Gained IPv6LL
Apr 24 23:36:29.597378 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:36:29.599472 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:29.642253 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:36:29.662676 ignition[1361]: INFO : Ignition 2.19.0
Apr 24 23:36:29.662676 ignition[1361]: INFO : Stage: mount
Apr 24 23:36:29.668217 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:29.668217 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:29.668217 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:29.668217 ignition[1361]: INFO : PUT result: OK
Apr 24 23:36:29.692991 ignition[1361]: INFO : mount: mount passed
Apr 24 23:36:29.692991 ignition[1361]: INFO : Ignition finished successfully
Apr 24 23:36:29.681223 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:36:29.700341 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:36:29.731068 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:36:29.754154 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1373)
Apr 24 23:36:29.754215 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:29.757437 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:36:29.757485 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:36:29.764151 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:36:29.768387 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:36:29.805487 ignition[1390]: INFO : Ignition 2.19.0
Apr 24 23:36:29.805487 ignition[1390]: INFO : Stage: files
Apr 24 23:36:29.809899 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:29.809899 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:29.809899 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:29.817800 ignition[1390]: INFO : PUT result: OK
Apr 24 23:36:29.824060 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:36:29.829919 ignition[1390]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Apr 24 23:36:29.829919 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:36:29.870933 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:36:29.874309 ignition[1390]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Apr 24 23:36:29.877738 unknown[1390]: wrote ssh authorized keys file for user: core
Apr 24 23:36:29.880325 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:36:29.884105 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 24 23:36:29.888215 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 24 23:36:29.888215 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 24 23:36:29.888215 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 24 23:36:29.984611 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 24 23:36:30.149190 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 24 23:36:30.149190 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Apr 24 23:36:30.149190 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:36:30.149190 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:36:30.149190 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:36:30.149190 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:36:30.149190 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:36:30.149190 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:36:30.149190 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:36:30.149190 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:36:30.149190 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:36:30.194831 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:36:30.194831 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:36:30.194831 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:36:30.194831 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 24 23:36:30.651207 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 24 23:36:31.031062 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:36:31.031062 ignition[1390]: INFO : files: op(c): [started]  processing unit "containerd.service"
Apr 24 23:36:31.039143 ignition[1390]: INFO : files: op(c): op(d): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 24 23:36:31.039143 ignition[1390]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 24 23:36:31.039143 ignition[1390]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 24 23:36:31.039143 ignition[1390]: INFO : files: op(e): [started]  processing unit "prepare-helm.service"
Apr 24 23:36:31.039143 ignition[1390]: INFO : files: op(e): op(f): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:36:31.039143 ignition[1390]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:36:31.039143 ignition[1390]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 24 23:36:31.039143 ignition[1390]: INFO : files: op(10): [started]  setting preset to enabled for "prepare-helm.service"
Apr 24 23:36:31.039143 ignition[1390]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:36:31.039143 ignition[1390]: INFO : files: createResultFile: createFiles: op(11): [started]  writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:36:31.039143 ignition[1390]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:36:31.039143 ignition[1390]: INFO : files: files passed
Apr 24 23:36:31.039143 ignition[1390]: INFO : Ignition finished successfully
Apr 24 23:36:31.048164 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:36:31.086543 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:36:31.100897 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:36:31.122467 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:36:31.123651 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:36:31.141901 initrd-setup-root-after-ignition[1418]: grep:
Apr 24 23:36:31.144046 initrd-setup-root-after-ignition[1422]: grep:
Apr 24 23:36:31.144046 initrd-setup-root-after-ignition[1418]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:36:31.144046 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:36:31.152677 initrd-setup-root-after-ignition[1422]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:36:31.159252 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:36:31.159767 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:36:31.172563 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:36:31.225040 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:36:31.225355 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:36:31.233277 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:36:31.237235 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:36:31.241771 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:36:31.255400 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:36:31.284578 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:36:31.300574 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:36:31.331150 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:36:31.332269 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:36:31.342365 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:36:31.345037 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:36:31.356185 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:36:31.358273 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:36:31.358387 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:36:31.361368 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:36:31.372491 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:36:31.376509 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:36:31.378945 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:36:31.381510 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:36:31.384085 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:36:31.395422 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:36:31.398268 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:36:31.400516 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:36:31.403887 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:36:31.407912 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:36:31.408031 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:36:31.410665 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:36:31.426655 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:36:31.429349 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:36:31.431859 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:36:31.440514 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:36:31.440628 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:36:31.443306 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:36:31.443396 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:36:31.446289 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:36:31.446371 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:36:31.467372 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:36:31.474094 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:36:31.482802 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:36:31.482935 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:36:31.488320 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:36:31.488431 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:36:31.508570 ignition[1443]: INFO : Ignition 2.19.0
Apr 24 23:36:31.508570 ignition[1443]: INFO : Stage: umount
Apr 24 23:36:31.508570 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:31.508570 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:31.508570 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:31.523755 ignition[1443]: INFO : PUT result: OK
Apr 24 23:36:31.529184 ignition[1443]: INFO : umount: umount passed
Apr 24 23:36:31.529184 ignition[1443]: INFO : Ignition finished successfully
Apr 24 23:36:31.544596 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:36:31.544873 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:36:31.552261 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:36:31.552363 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:36:31.554799 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 24 23:36:31.554904 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 24 23:36:31.557510 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 24 23:36:31.557612 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 24 23:36:31.560029 systemd[1]: Stopped target network.target - Network. Apr 24 23:36:31.563652 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 24 23:36:31.563769 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 23:36:31.576387 systemd[1]: Stopped target paths.target - Path Units. Apr 24 23:36:31.591321 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 24 23:36:31.593871 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:36:31.599949 systemd[1]: Stopped target slices.target - Slice Units. Apr 24 23:36:31.605289 systemd[1]: Stopped target sockets.target - Socket Units. Apr 24 23:36:31.613566 systemd[1]: iscsid.socket: Deactivated successfully. Apr 24 23:36:31.613651 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 23:36:31.616570 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 24 23:36:31.616659 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 24 23:36:31.619152 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 24 23:36:31.619263 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 24 23:36:31.622095 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 24 23:36:31.622198 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 24 23:36:31.625060 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 24 23:36:31.627843 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Apr 24 23:36:31.638292 systemd-networkd[1199]: eth0: DHCPv6 lease lost Apr 24 23:36:31.639738 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 24 23:36:31.643106 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 24 23:36:31.645281 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 24 23:36:31.650412 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 24 23:36:31.659710 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 24 23:36:31.678656 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 24 23:36:31.680419 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 24 23:36:31.687887 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 24 23:36:31.688015 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:36:31.698351 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 24 23:36:31.698472 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 24 23:36:31.713298 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 24 23:36:31.715443 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 24 23:36:31.715565 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 23:36:31.718572 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 24 23:36:31.718666 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:36:31.721470 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 24 23:36:31.721553 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 24 23:36:31.724270 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 24 23:36:31.724349 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 24 23:36:31.728158 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:36:31.771295 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 24 23:36:31.773022 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:36:31.781411 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 24 23:36:31.781583 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 24 23:36:31.784315 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 24 23:36:31.784396 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:36:31.788081 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 24 23:36:31.788207 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 24 23:36:31.795314 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 24 23:36:31.795433 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 24 23:36:31.811510 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 24 23:36:31.811617 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:36:31.819670 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 24 23:36:31.827242 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 24 23:36:31.829778 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:36:31.836462 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 24 23:36:31.836734 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:36:31.845593 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 24 23:36:31.846216 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Apr 24 23:36:31.854288 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:36:31.855251 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:36:31.866002 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 24 23:36:31.866863 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 24 23:36:31.884491 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 24 23:36:31.886305 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 24 23:36:31.893455 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 24 23:36:31.909134 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 24 23:36:31.980618 systemd[1]: Switching root. Apr 24 23:36:32.017168 systemd-journald[252]: Journal stopped Apr 24 23:36:35.245761 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Apr 24 23:36:35.245893 kernel: SELinux: policy capability network_peer_controls=1 Apr 24 23:36:35.245939 kernel: SELinux: policy capability open_perms=1 Apr 24 23:36:35.245971 kernel: SELinux: policy capability extended_socket_class=1 Apr 24 23:36:35.246003 kernel: SELinux: policy capability always_check_network=0 Apr 24 23:36:35.246033 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 24 23:36:35.246070 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 24 23:36:35.246100 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 24 23:36:35.255976 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 24 23:36:35.256031 kernel: audit: type=1403 audit(1777073793.390:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 24 23:36:35.256066 systemd[1]: Successfully loaded SELinux policy in 64.594ms. Apr 24 23:36:35.256250 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.164ms. 
Apr 24 23:36:35.256293 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 24 23:36:35.256325 systemd[1]: Detected virtualization amazon. Apr 24 23:36:35.256368 systemd[1]: Detected architecture arm64. Apr 24 23:36:35.256402 systemd[1]: Detected first boot. Apr 24 23:36:35.256453 systemd[1]: Initializing machine ID from VM UUID. Apr 24 23:36:35.256488 zram_generator::config[1502]: No configuration found. Apr 24 23:36:35.256524 systemd[1]: Populated /etc with preset unit settings. Apr 24 23:36:35.256559 systemd[1]: Queued start job for default target multi-user.target. Apr 24 23:36:35.256595 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Apr 24 23:36:35.256627 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 24 23:36:35.256664 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 24 23:36:35.256699 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 24 23:36:35.256735 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 24 23:36:35.256767 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 24 23:36:35.256797 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 24 23:36:35.256832 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 24 23:36:35.256862 systemd[1]: Created slice user.slice - User and Session Slice. Apr 24 23:36:35.256912 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Apr 24 23:36:35.256949 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:36:35.256991 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 24 23:36:35.257024 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 24 23:36:35.257055 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 24 23:36:35.257103 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 24 23:36:35.263251 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 24 23:36:35.263297 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:36:35.263330 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 24 23:36:35.263361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 23:36:35.263396 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 24 23:36:35.263435 systemd[1]: Reached target slices.target - Slice Units. Apr 24 23:36:35.263468 systemd[1]: Reached target swap.target - Swaps. Apr 24 23:36:35.263498 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 24 23:36:35.263530 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 24 23:36:35.263561 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 24 23:36:35.263591 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 24 23:36:35.263633 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:36:35.263672 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 24 23:36:35.263708 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 24 23:36:35.263739 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 24 23:36:35.263769 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 24 23:36:35.263801 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 24 23:36:35.263832 systemd[1]: Mounting media.mount - External Media Directory... Apr 24 23:36:35.263862 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 24 23:36:35.263894 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 24 23:36:35.263923 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 24 23:36:35.263957 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 24 23:36:35.263992 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:36:35.264024 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 24 23:36:35.264056 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 24 23:36:35.264088 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:36:35.264510 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:36:35.264555 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:36:35.264589 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 24 23:36:35.264621 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:36:35.264664 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 24 23:36:35.264700 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Apr 24 23:36:35.264734 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 24 23:36:35.264765 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 24 23:36:35.264794 kernel: fuse: init (API version 7.39) Apr 24 23:36:35.264823 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 24 23:36:35.264852 kernel: loop: module loaded Apr 24 23:36:35.264910 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 24 23:36:35.264947 kernel: ACPI: bus type drm_connector registered Apr 24 23:36:35.264984 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 24 23:36:35.265015 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 24 23:36:35.265047 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 24 23:36:35.265076 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 24 23:36:35.265109 systemd[1]: Mounted media.mount - External Media Directory. Apr 24 23:36:35.272491 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 24 23:36:35.272530 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 24 23:36:35.272612 systemd-journald[1612]: Collecting audit messages is disabled. Apr 24 23:36:35.272683 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 24 23:36:35.272718 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 24 23:36:35.272750 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:36:35.272780 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 24 23:36:35.272814 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 24 23:36:35.272844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Apr 24 23:36:35.272876 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:36:35.272924 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:36:35.272956 systemd-journald[1612]: Journal started Apr 24 23:36:35.273010 systemd-journald[1612]: Runtime Journal (/run/log/journal/ec2630b4b5332c9fd1c8f96e65758771) is 8.0M, max 75.3M, 67.3M free. Apr 24 23:36:35.281261 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:36:35.289193 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 23:36:35.290301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:36:35.291044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:36:35.294660 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 24 23:36:35.295008 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 24 23:36:35.298636 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:36:35.299448 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:36:35.302895 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 24 23:36:35.308505 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 24 23:36:35.312628 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 24 23:36:35.337272 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 24 23:36:35.346403 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 24 23:36:35.359413 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 24 23:36:35.362299 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Apr 24 23:36:35.382570 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 24 23:36:35.397537 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 24 23:36:35.400376 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:36:35.405724 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 24 23:36:35.408350 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:36:35.425565 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:36:35.445354 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 23:36:35.454652 systemd-journald[1612]: Time spent on flushing to /var/log/journal/ec2630b4b5332c9fd1c8f96e65758771 is 66.992ms for 889 entries. Apr 24 23:36:35.454652 systemd-journald[1612]: System Journal (/var/log/journal/ec2630b4b5332c9fd1c8f96e65758771) is 8.0M, max 195.6M, 187.6M free. Apr 24 23:36:35.543434 systemd-journald[1612]: Received client request to flush runtime journal. Apr 24 23:36:35.455035 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:36:35.460618 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 24 23:36:35.463751 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 24 23:36:35.489173 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 24 23:36:35.504957 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 24 23:36:35.509496 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 23:36:35.550403 udevadm[1658]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 24 23:36:35.554975 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 24 23:36:35.578907 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:36:35.592284 systemd-tmpfiles[1654]: ACLs are not supported, ignoring. Apr 24 23:36:35.592326 systemd-tmpfiles[1654]: ACLs are not supported, ignoring. Apr 24 23:36:35.601352 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:36:35.622414 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 24 23:36:35.672190 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 24 23:36:35.683499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 23:36:35.716337 systemd-tmpfiles[1675]: ACLs are not supported, ignoring. Apr 24 23:36:35.716377 systemd-tmpfiles[1675]: ACLs are not supported, ignoring. Apr 24 23:36:35.727899 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:36:36.404341 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 24 23:36:36.415602 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:36:36.481592 systemd-udevd[1681]: Using default interface naming scheme 'v255'. Apr 24 23:36:36.515225 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:36:36.526441 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 23:36:36.573415 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 24 23:36:36.645987 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 24 23:36:36.661331 (udev-worker)[1683]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:36:36.736663 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 24 23:36:36.883754 systemd-networkd[1685]: lo: Link UP Apr 24 23:36:36.884489 systemd-networkd[1685]: lo: Gained carrier Apr 24 23:36:36.888399 systemd-networkd[1685]: Enumeration completed Apr 24 23:36:36.888792 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 23:36:36.892502 systemd-networkd[1685]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:36:36.892517 systemd-networkd[1685]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 23:36:36.895347 systemd-networkd[1685]: eth0: Link UP Apr 24 23:36:36.895708 systemd-networkd[1685]: eth0: Gained carrier Apr 24 23:36:36.895740 systemd-networkd[1685]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:36:36.903357 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 24 23:36:36.910249 systemd-networkd[1685]: eth0: DHCPv4 address 172.31.28.13/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 24 23:36:36.938187 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1696) Apr 24 23:36:37.106111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:36:37.206817 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 24 23:36:37.225718 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 24 23:36:37.249608 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 24 23:36:37.262595 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:37.303369 lvm[1807]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:36:37.341970 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 24 23:36:37.347662 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:36:37.363603 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 24 23:36:37.376139 lvm[1813]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:36:37.411069 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 24 23:36:37.415146 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 24 23:36:37.418384 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 24 23:36:37.418432 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 23:36:37.420926 systemd[1]: Reached target machines.target - Containers. Apr 24 23:36:37.425474 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 24 23:36:37.438620 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 24 23:36:37.447348 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 24 23:36:37.450545 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:36:37.459376 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 23:36:37.468414 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 24 23:36:37.494606 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 23:36:37.502748 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 24 23:36:37.509537 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 24 23:36:37.531308 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 24 23:36:37.532751 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 24 23:36:37.555154 kernel: loop0: detected capacity change from 0 to 114432 Apr 24 23:36:37.652171 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 24 23:36:37.679214 kernel: loop1: detected capacity change from 0 to 114328 Apr 24 23:36:37.791166 kernel: loop2: detected capacity change from 0 to 52536 Apr 24 23:36:37.879526 kernel: loop3: detected capacity change from 0 to 209336 Apr 24 23:36:37.985178 kernel: loop4: detected capacity change from 0 to 114432 Apr 24 23:36:38.005156 kernel: loop5: detected capacity change from 0 to 114328 Apr 24 23:36:38.025163 kernel: loop6: detected capacity change from 0 to 52536 Apr 24 23:36:38.044179 kernel: loop7: detected capacity change from 0 to 209336 Apr 24 23:36:38.067393 (sd-merge)[1834]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Apr 24 23:36:38.069189 (sd-merge)[1834]: Merged extensions into '/usr'. Apr 24 23:36:38.075657 systemd[1]: Reloading requested from client PID 1822 ('systemd-sysext') (unit systemd-sysext.service)... Apr 24 23:36:38.075688 systemd[1]: Reloading... Apr 24 23:36:38.221162 zram_generator::config[1865]: No configuration found. Apr 24 23:36:38.512902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:36:38.670049 systemd[1]: Reloading finished in 593 ms. 
Apr 24 23:36:38.701771 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 24 23:36:38.718685 systemd[1]: Starting ensure-sysext.service... Apr 24 23:36:38.729528 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 23:36:38.748965 systemd[1]: Reloading requested from client PID 1919 ('systemctl') (unit ensure-sysext.service)... Apr 24 23:36:38.749004 systemd[1]: Reloading... Apr 24 23:36:38.795648 systemd-tmpfiles[1920]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 24 23:36:38.797395 systemd-tmpfiles[1920]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 24 23:36:38.799872 ldconfig[1817]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 24 23:36:38.801288 systemd-tmpfiles[1920]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 24 23:36:38.801985 systemd-tmpfiles[1920]: ACLs are not supported, ignoring. Apr 24 23:36:38.802200 systemd-tmpfiles[1920]: ACLs are not supported, ignoring. Apr 24 23:36:38.817813 systemd-tmpfiles[1920]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 23:36:38.817845 systemd-tmpfiles[1920]: Skipping /boot Apr 24 23:36:38.854414 systemd-networkd[1685]: eth0: Gained IPv6LL Apr 24 23:36:38.873668 systemd-tmpfiles[1920]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 23:36:38.873701 systemd-tmpfiles[1920]: Skipping /boot Apr 24 23:36:38.956182 zram_generator::config[1955]: No configuration found. Apr 24 23:36:39.210444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:36:39.367346 systemd[1]: Reloading finished in 616 ms. 
Apr 24 23:36:39.393619 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 23:36:39.401994 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 24 23:36:39.410224 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:36:39.446394 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:36:39.455423 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 24 23:36:39.470426 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 24 23:36:39.484665 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 23:36:39.503396 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 24 23:36:39.524901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:36:39.528739 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:36:39.555410 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:36:39.574648 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:36:39.582238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:36:39.587443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:36:39.588045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:36:39.593639 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:36:39.594024 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:36:39.626425 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Apr 24 23:36:39.643958 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 24 23:36:39.651104 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:36:39.652803 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:36:39.679174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:36:39.686171 augenrules[2049]: No rules Apr 24 23:36:39.692229 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:36:39.708869 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:36:39.718678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:36:39.748455 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:36:39.753513 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:36:39.753884 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 23:36:39.771900 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 23:36:39.782824 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:36:39.791759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:36:39.799989 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:36:39.808991 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:36:39.811727 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:36:39.821677 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:36:39.822062 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:36:39.826536 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 24 23:36:39.828461 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:36:39.846884 systemd[1]: Finished ensure-sysext.service.
Apr 24 23:36:39.860634 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 24 23:36:39.870315 systemd-resolved[2021]: Positive Trust Anchors:
Apr 24 23:36:39.870350 systemd-resolved[2021]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:36:39.870415 systemd-resolved[2021]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:36:39.874876 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:36:39.875086 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:36:39.879229 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 24 23:36:39.880549 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 24 23:36:39.894457 systemd-resolved[2021]: Defaulting to hostname 'linux'.
Apr 24 23:36:39.897980 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:36:39.901074 systemd[1]: Reached target network.target - Network.
Apr 24 23:36:39.903428 systemd[1]: Reached target network-online.target - Network is Online.
Apr 24 23:36:39.905967 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:36:39.908690 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:36:39.911264 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 24 23:36:39.914550 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 24 23:36:39.917740 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 24 23:36:39.920398 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 24 23:36:39.923244 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 24 23:36:39.926567 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 24 23:36:39.926619 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:36:39.928898 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:36:39.932448 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 24 23:36:39.938153 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 24 23:36:39.943655 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 24 23:36:39.953098 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 24 23:36:39.955960 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:36:39.958417 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:36:39.961061 systemd[1]: System is tainted: cgroupsv1
Apr 24 23:36:39.961395 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:36:39.961562 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:36:39.971356 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 24 23:36:39.979530 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 24 23:36:39.997091 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 24 23:36:40.006860 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 24 23:36:40.013073 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 24 23:36:40.017293 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 24 23:36:40.022720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:36:40.038415 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 24 23:36:40.050655 systemd[1]: Started ntpd.service - Network Time Service.
Apr 24 23:36:40.073855 jq[2083]: false
Apr 24 23:36:40.095242 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 24 23:36:40.130298 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 24 23:36:40.151527 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 24 23:36:40.163462 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 24 23:36:40.177048 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 24 23:36:40.211176 dbus-daemon[2082]: [system] SELinux support is enabled
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.219 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.219 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.219 INFO Fetch successful
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.219 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.219 INFO Fetch successful
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.220 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.220 INFO Fetch successful
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.220 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.220 INFO Fetch successful
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.220 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.220 INFO Fetch failed with 404: resource not found
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.220 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.220 INFO Fetch successful
Apr 24 23:36:40.220597 coreos-metadata[2080]: Apr 24 23:36:40.220 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 24 23:36:40.228695 coreos-metadata[2080]: Apr 24 23:36:40.220 INFO Fetch successful
Apr 24 23:36:40.228695 coreos-metadata[2080]: Apr 24 23:36:40.223 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 24 23:36:40.228695 coreos-metadata[2080]: Apr 24 23:36:40.223 INFO Fetch successful
Apr 24 23:36:40.228695 coreos-metadata[2080]: Apr 24 23:36:40.223 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 24 23:36:40.224467 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 24 23:36:40.231847 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 24 23:36:40.236277 ntpd[2087]: ntpd 4.2.8p17@1.4004-o Fri Apr 24 21:50:58 UTC 2026 (1): Starting
Apr 24 23:36:40.237221 ntpd[2087]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 24 23:36:40.239368 coreos-metadata[2080]: Apr 24 23:36:40.236 INFO Fetch successful
Apr 24 23:36:40.239368 coreos-metadata[2080]: Apr 24 23:36:40.236 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 24 23:36:40.239368 coreos-metadata[2080]: Apr 24 23:36:40.236 INFO Fetch successful
Apr 24 23:36:40.239562 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: ntpd 4.2.8p17@1.4004-o Fri Apr 24 21:50:58 UTC 2026 (1): Starting
Apr 24 23:36:40.239562 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 24 23:36:40.239562 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: ----------------------------------------------------
Apr 24 23:36:40.239562 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: ntp-4 is maintained by Network Time Foundation,
Apr 24 23:36:40.239562 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 24 23:36:40.239562 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: corporation. Support and training for ntp-4 are
Apr 24 23:36:40.239562 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: available at https://www.nwtime.org/support
Apr 24 23:36:40.239562 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: ----------------------------------------------------
Apr 24 23:36:40.237242 ntpd[2087]: ----------------------------------------------------
Apr 24 23:36:40.237278 ntpd[2087]: ntp-4 is maintained by Network Time Foundation,
Apr 24 23:36:40.237300 ntpd[2087]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 24 23:36:40.237320 ntpd[2087]: corporation. Support and training for ntp-4 are
Apr 24 23:36:40.237338 ntpd[2087]: available at https://www.nwtime.org/support
Apr 24 23:36:40.250357 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: proto: precision = 0.096 usec (-23)
Apr 24 23:36:40.250357 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: basedate set to 2026-04-12
Apr 24 23:36:40.250357 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: gps base set to 2026-04-12 (week 2414)
Apr 24 23:36:40.250357 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: Listen and drop on 0 v6wildcard [::]:123
Apr 24 23:36:40.250357 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 24 23:36:40.250357 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: Listen normally on 2 lo 127.0.0.1:123
Apr 24 23:36:40.250357 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: Listen normally on 3 eth0 172.31.28.13:123
Apr 24 23:36:40.250357 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: Listen normally on 4 lo [::1]:123
Apr 24 23:36:40.250357 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: Listen normally on 5 eth0 [fe80::487:d9ff:feca:92df%2]:123
Apr 24 23:36:40.250357 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: Listening on routing socket on fd #22 for interface updates
Apr 24 23:36:40.237356 ntpd[2087]: ----------------------------------------------------
Apr 24 23:36:40.243003 ntpd[2087]: proto: precision = 0.096 usec (-23)
Apr 24 23:36:40.243967 ntpd[2087]: basedate set to 2026-04-12
Apr 24 23:36:40.243999 ntpd[2087]: gps base set to 2026-04-12 (week 2414)
Apr 24 23:36:40.246631 ntpd[2087]: Listen and drop on 0 v6wildcard [::]:123
Apr 24 23:36:40.246710 ntpd[2087]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 24 23:36:40.247001 ntpd[2087]: Listen normally on 2 lo 127.0.0.1:123
Apr 24 23:36:40.247077 ntpd[2087]: Listen normally on 3 eth0 172.31.28.13:123
Apr 24 23:36:40.247185 ntpd[2087]: Listen normally on 4 lo [::1]:123
Apr 24 23:36:40.247274 ntpd[2087]: Listen normally on 5 eth0 [fe80::487:d9ff:feca:92df%2]:123
Apr 24 23:36:40.247339 ntpd[2087]: Listening on routing socket on fd #22 for interface updates
Apr 24 23:36:40.247626 dbus-daemon[2082]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1685 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 24 23:36:40.254189 ntpd[2087]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 24 23:36:40.262707 systemd[1]: Starting update-engine.service - Update Engine...
Apr 24 23:36:40.274022 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 24 23:36:40.274022 ntpd[2087]: 24 Apr 23:36:40 ntpd[2087]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 24 23:36:40.254250 ntpd[2087]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 24 23:36:40.283320 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 24 23:36:40.305098 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 24 23:36:40.327150 jq[2114]: true
Apr 24 23:36:40.330823 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 24 23:36:40.338208 extend-filesystems[2084]: Found loop4
Apr 24 23:36:40.338208 extend-filesystems[2084]: Found loop5
Apr 24 23:36:40.338208 extend-filesystems[2084]: Found loop6
Apr 24 23:36:40.338208 extend-filesystems[2084]: Found loop7
Apr 24 23:36:40.338208 extend-filesystems[2084]: Found nvme0n1
Apr 24 23:36:40.338208 extend-filesystems[2084]: Found nvme0n1p1
Apr 24 23:36:40.338208 extend-filesystems[2084]: Found nvme0n1p2
Apr 24 23:36:40.338208 extend-filesystems[2084]: Found nvme0n1p3
Apr 24 23:36:40.391267 extend-filesystems[2084]: Found usr
Apr 24 23:36:40.391267 extend-filesystems[2084]: Found nvme0n1p4
Apr 24 23:36:40.391267 extend-filesystems[2084]: Found nvme0n1p6
Apr 24 23:36:40.391267 extend-filesystems[2084]: Found nvme0n1p7
Apr 24 23:36:40.391267 extend-filesystems[2084]: Found nvme0n1p9
Apr 24 23:36:40.391267 extend-filesystems[2084]: Checking size of /dev/nvme0n1p9
Apr 24 23:36:40.339361 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 24 23:36:40.363958 systemd[1]: motdgen.service: Deactivated successfully.
Apr 24 23:36:40.370591 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 24 23:36:40.392751 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 24 23:36:40.398391 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 24 23:36:40.467110 update_engine[2110]: I20260424 23:36:40.465584 2110 main.cc:92] Flatcar Update Engine starting
Apr 24 23:36:40.479450 update_engine[2110]: I20260424 23:36:40.476745 2110 update_check_scheduler.cc:74] Next update check in 2m53s
Apr 24 23:36:40.501047 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 24 23:36:40.502845 (ntainerd)[2134]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 24 23:36:40.518996 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 24 23:36:40.540903 extend-filesystems[2084]: Resized partition /dev/nvme0n1p9
Apr 24 23:36:40.582213 jq[2132]: true
Apr 24 23:36:40.593002 extend-filesystems[2150]: resize2fs 1.47.1 (20-May-2024)
Apr 24 23:36:40.616652 dbus-daemon[2082]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 24 23:36:40.617758 systemd[1]: Started update-engine.service - Update Engine.
Apr 24 23:36:40.629630 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 24 23:36:40.635572 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 24 23:36:40.636986 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 24 23:36:40.637084 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 24 23:36:40.643092 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 24 23:36:40.644185 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 24 23:36:40.646153 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 24 23:36:40.652164 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 24 23:36:40.656770 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 24 23:36:40.664084 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 24 23:36:40.675035 tar[2126]: linux-arm64/LICENSE
Apr 24 23:36:40.675035 tar[2126]: linux-arm64/helm
Apr 24 23:36:40.694831 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 24 23:36:40.709826 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 24 23:36:40.805156 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 24 23:36:40.816255 extend-filesystems[2150]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 24 23:36:40.816255 extend-filesystems[2150]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 24 23:36:40.816255 extend-filesystems[2150]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 24 23:36:40.829996 extend-filesystems[2084]: Resized filesystem in /dev/nvme0n1p9
Apr 24 23:36:40.849495 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 24 23:36:40.850020 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 24 23:36:40.856169 bash[2182]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 23:36:40.866887 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 24 23:36:40.884680 systemd[1]: Starting sshkeys.service...
Apr 24 23:36:40.907536 systemd-logind[2104]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 24 23:36:40.907593 systemd-logind[2104]: Watching system buttons on /dev/input/event1 (Sleep Button)
Apr 24 23:36:40.912209 systemd-logind[2104]: New seat seat0.
Apr 24 23:36:40.919184 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 24 23:36:40.943686 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 24 23:36:40.949658 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 24 23:36:41.174028 amazon-ssm-agent[2171]: Initializing new seelog logger
Apr 24 23:36:41.174028 amazon-ssm-agent[2171]: New Seelog Logger Creation Complete
Apr 24 23:36:41.174028 amazon-ssm-agent[2171]: 2026/04/24 23:36:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:41.174028 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:41.174028 amazon-ssm-agent[2171]: 2026/04/24 23:36:41 processing appconfig overrides
Apr 24 23:36:41.174028 amazon-ssm-agent[2171]: 2026/04/24 23:36:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:41.174028 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:41.179147 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO Proxy environment variables:
Apr 24 23:36:41.180320 amazon-ssm-agent[2171]: 2026/04/24 23:36:41 processing appconfig overrides
Apr 24 23:36:41.180981 amazon-ssm-agent[2171]: 2026/04/24 23:36:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:41.180981 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:41.180981 amazon-ssm-agent[2171]: 2026/04/24 23:36:41 processing appconfig overrides
Apr 24 23:36:41.197149 amazon-ssm-agent[2171]: 2026/04/24 23:36:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:41.197149 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:41.197149 amazon-ssm-agent[2171]: 2026/04/24 23:36:41 processing appconfig overrides
Apr 24 23:36:41.275509 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO https_proxy:
Apr 24 23:36:41.312336 coreos-metadata[2189]: Apr 24 23:36:41.311 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 24 23:36:41.313560 coreos-metadata[2189]: Apr 24 23:36:41.313 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 24 23:36:41.314107 coreos-metadata[2189]: Apr 24 23:36:41.313 INFO Fetch successful
Apr 24 23:36:41.314385 coreos-metadata[2189]: Apr 24 23:36:41.314 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 24 23:36:41.315774 coreos-metadata[2189]: Apr 24 23:36:41.315 INFO Fetch successful
Apr 24 23:36:41.317764 unknown[2189]: wrote ssh authorized keys file for user: core
Apr 24 23:36:41.377153 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO http_proxy:
Apr 24 23:36:41.386143 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2204)
Apr 24 23:36:41.394252 update-ssh-keys[2223]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 23:36:41.395402 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 24 23:36:41.415584 systemd[1]: Finished sshkeys.service.
Apr 24 23:36:41.482156 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO no_proxy:
Apr 24 23:36:41.588305 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO Checking if agent identity type OnPrem can be assumed
Apr 24 23:36:41.606277 dbus-daemon[2082]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 24 23:36:41.607283 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 24 23:36:41.607821 dbus-daemon[2082]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2172 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 24 23:36:41.619363 locksmithd[2165]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 24 23:36:41.649586 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 24 23:36:41.695180 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO Checking if agent identity type EC2 can be assumed
Apr 24 23:36:41.772567 polkitd[2248]: Started polkitd version 121
Apr 24 23:36:41.796201 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO Agent will take identity from EC2
Apr 24 23:36:41.830101 polkitd[2248]: Loading rules from directory /etc/polkit-1/rules.d
Apr 24 23:36:41.830250 polkitd[2248]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 24 23:36:41.831015 containerd[2134]: time="2026-04-24T23:36:41.830886975Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 24 23:36:41.836171 polkitd[2248]: Finished loading, compiling and executing 2 rules
Apr 24 23:36:41.848370 dbus-daemon[2082]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 24 23:36:41.848667 systemd[1]: Started polkit.service - Authorization Manager.
Apr 24 23:36:41.856670 polkitd[2248]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 24 23:36:41.896955 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 24 23:36:41.945361 systemd-hostnamed[2172]: Hostname set to (transient)
Apr 24 23:36:41.948183 systemd-resolved[2021]: System hostname changed to 'ip-172-31-28-13'.
Apr 24 23:36:41.998455 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 24 23:36:42.011410 containerd[2134]: time="2026-04-24T23:36:42.010040975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:42.032641 containerd[2134]: time="2026-04-24T23:36:42.031714092Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:36:42.032641 containerd[2134]: time="2026-04-24T23:36:42.031780224Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 24 23:36:42.032641 containerd[2134]: time="2026-04-24T23:36:42.031815372Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 24 23:36:42.032641 containerd[2134]: time="2026-04-24T23:36:42.032164932Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 24 23:36:42.032641 containerd[2134]: time="2026-04-24T23:36:42.032207508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:42.032641 containerd[2134]: time="2026-04-24T23:36:42.032334048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:36:42.032641 containerd[2134]: time="2026-04-24T23:36:42.032366844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:42.033023 containerd[2134]: time="2026-04-24T23:36:42.032761656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:36:42.033023 containerd[2134]: time="2026-04-24T23:36:42.032801184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:42.033023 containerd[2134]: time="2026-04-24T23:36:42.032842668Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:36:42.033023 containerd[2134]: time="2026-04-24T23:36:42.032894340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:42.033231 containerd[2134]: time="2026-04-24T23:36:42.033077976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:42.035581 containerd[2134]: time="2026-04-24T23:36:42.033531444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:42.035581 containerd[2134]: time="2026-04-24T23:36:42.033852948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:36:42.035581 containerd[2134]: time="2026-04-24T23:36:42.033887892Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 24 23:36:42.035581 containerd[2134]: time="2026-04-24T23:36:42.034075740Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 24 23:36:42.035581 containerd[2134]: time="2026-04-24T23:36:42.034214904Z" level=info msg="metadata content store policy set" policy=shared
Apr 24 23:36:42.043212 containerd[2134]: time="2026-04-24T23:36:42.043033980Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.043407120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.043455396Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.043492548Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.043528656Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.043796640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.044415060Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.044618916Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.044650824Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.044685804Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.044718768Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.044749428Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.044778756Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.044811792Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 24 23:36:42.046174 containerd[2134]: time="2026-04-24T23:36:42.044843448Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 24 23:36:42.046826 containerd[2134]: time="2026-04-24T23:36:42.044902740Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 24 23:36:42.046826 containerd[2134]: time="2026-04-24T23:36:42.044937732Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 24 23:36:42.046826 containerd[2134]: time="2026-04-24T23:36:42.044965356Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 24 23:36:42.046826 containerd[2134]: time="2026-04-24T23:36:42.045004920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.046826 containerd[2134]: time="2026-04-24T23:36:42.045036336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.046826 containerd[2134]: time="2026-04-24T23:36:42.045065016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.046826 containerd[2134]: time="2026-04-24T23:36:42.045095124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.046826 containerd[2134]: time="2026-04-24T23:36:42.046764384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.046826 containerd[2134]: time="2026-04-24T23:36:42.046824120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.047246 containerd[2134]: time="2026-04-24T23:36:42.046854372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.047246 containerd[2134]: time="2026-04-24T23:36:42.046889280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.047246 containerd[2134]: time="2026-04-24T23:36:42.046919964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.047246 containerd[2134]: time="2026-04-24T23:36:42.046954332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.047246 containerd[2134]: time="2026-04-24T23:36:42.046983612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.047246 containerd[2134]: time="2026-04-24T23:36:42.047012508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.047246 containerd[2134]: time="2026-04-24T23:36:42.047041440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.047246 containerd[2134]: time="2026-04-24T23:36:42.047076084Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 24 23:36:42.047246 containerd[2134]: time="2026-04-24T23:36:42.047149044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.047246 containerd[2134]: time="2026-04-24T23:36:42.047185704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.047246 containerd[2134]: time="2026-04-24T23:36:42.047214408Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 24 23:36:42.047707 containerd[2134]: time="2026-04-24T23:36:42.047445912Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 24 23:36:42.047707 containerd[2134]: time="2026-04-24T23:36:42.047488812Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 24 23:36:42.047707 containerd[2134]: time="2026-04-24T23:36:42.047515188Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 24 23:36:42.047707 containerd[2134]: time="2026-04-24T23:36:42.047543844Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 24 23:36:42.047707 containerd[2134]: time="2026-04-24T23:36:42.047567208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.047707 containerd[2134]: time="2026-04-24T23:36:42.047600940Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 24 23:36:42.047707 containerd[2134]: time="2026-04-24T23:36:42.047627844Z" level=info msg="NRI interface is disabled by configuration."
Apr 24 23:36:42.047707 containerd[2134]: time="2026-04-24T23:36:42.047653332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 24 23:36:42.056355 containerd[2134]: time="2026-04-24T23:36:42.050637372Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 24 23:36:42.056355 containerd[2134]: time="2026-04-24T23:36:42.050774232Z" level=info msg="Connect containerd service" Apr 24 23:36:42.056355 containerd[2134]: time="2026-04-24T23:36:42.050853660Z" level=info msg="using legacy CRI server" Apr 24 23:36:42.056355 containerd[2134]: time="2026-04-24T23:36:42.050874924Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 24 23:36:42.056355 containerd[2134]: time="2026-04-24T23:36:42.051028008Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 24 23:36:42.056355 containerd[2134]: time="2026-04-24T23:36:42.054841440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:36:42.060517 containerd[2134]: time="2026-04-24T23:36:42.056383536Z" level=info msg="Start subscribing containerd event" Apr 24 
23:36:42.060517 containerd[2134]: time="2026-04-24T23:36:42.056473956Z" level=info msg="Start recovering state" Apr 24 23:36:42.060517 containerd[2134]: time="2026-04-24T23:36:42.056598900Z" level=info msg="Start event monitor" Apr 24 23:36:42.060517 containerd[2134]: time="2026-04-24T23:36:42.056629056Z" level=info msg="Start snapshots syncer" Apr 24 23:36:42.060517 containerd[2134]: time="2026-04-24T23:36:42.056649816Z" level=info msg="Start cni network conf syncer for default" Apr 24 23:36:42.060517 containerd[2134]: time="2026-04-24T23:36:42.056668416Z" level=info msg="Start streaming server" Apr 24 23:36:42.060888 containerd[2134]: time="2026-04-24T23:36:42.060594336Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 23:36:42.060888 containerd[2134]: time="2026-04-24T23:36:42.060708684Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 23:36:42.060977 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 23:36:42.100537 containerd[2134]: time="2026-04-24T23:36:42.095226012Z" level=info msg="containerd successfully booted in 0.271068s" Apr 24 23:36:42.104629 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 24 23:36:42.203857 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 24 23:36:42.304723 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 24 23:36:42.404948 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO [amazon-ssm-agent] Starting Core Agent Apr 24 23:36:42.504822 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Apr 24 23:36:42.602973 sshd_keygen[2129]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 24 23:36:42.606922 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO [Registrar] Starting registrar module Apr 24 23:36:42.702961 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 24 23:36:42.708483 amazon-ssm-agent[2171]: 2026-04-24 23:36:41 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 24 23:36:42.717254 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 24 23:36:42.729626 systemd[1]: Started sshd@0-172.31.28.13:22-20.229.252.112:55830.service - OpenSSH per-connection server daemon (20.229.252.112:55830). Apr 24 23:36:42.770670 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 23:36:42.771289 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 24 23:36:42.783633 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 24 23:36:42.847913 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 24 23:36:42.861689 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 24 23:36:42.876776 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 24 23:36:42.882251 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 23:36:43.000504 tar[2126]: linux-arm64/README.md Apr 24 23:36:43.045602 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 24 23:36:43.686546 amazon-ssm-agent[2171]: 2026-04-24 23:36:43 INFO [EC2Identity] EC2 registration was successful. Apr 24 23:36:43.714650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:36:43.721673 (kubelet)[2369]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:36:43.721978 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 24 23:36:43.726158 amazon-ssm-agent[2171]: 2026-04-24 23:36:43 INFO [CredentialRefresher] credentialRefresher has started Apr 24 23:36:43.726158 amazon-ssm-agent[2171]: 2026-04-24 23:36:43 INFO [CredentialRefresher] Starting credentials refresher loop Apr 24 23:36:43.726324 amazon-ssm-agent[2171]: 2026-04-24 23:36:43 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 24 23:36:43.731274 systemd[1]: Startup finished in 10.826s (kernel) + 10.403s (userspace) = 21.230s. Apr 24 23:36:43.772927 sshd[2345]: Accepted publickey for core from 20.229.252.112 port 55830 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:36:43.779305 sshd[2345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:36:43.786654 amazon-ssm-agent[2171]: 2026-04-24 23:36:43 INFO [CredentialRefresher] Next credential rotation will be in 30.266658845466665 minutes Apr 24 23:36:43.803501 systemd-logind[2104]: New session 1 of user core. Apr 24 23:36:43.805397 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 24 23:36:43.813219 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 24 23:36:43.849498 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 23:36:43.863432 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 23:36:43.880798 (systemd)[2378]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 23:36:44.114087 systemd[2378]: Queued start job for default target default.target. Apr 24 23:36:44.114841 systemd[2378]: Created slice app.slice - User Application Slice. Apr 24 23:36:44.114883 systemd[2378]: Reached target paths.target - Paths. Apr 24 23:36:44.114914 systemd[2378]: Reached target timers.target - Timers. Apr 24 23:36:44.125412 systemd[2378]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Apr 24 23:36:44.141676 systemd[2378]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 23:36:44.141944 systemd[2378]: Reached target sockets.target - Sockets. Apr 24 23:36:44.142147 systemd[2378]: Reached target basic.target - Basic System. Apr 24 23:36:44.142356 systemd[2378]: Reached target default.target - Main User Target. Apr 24 23:36:44.142528 systemd[2378]: Startup finished in 249ms. Apr 24 23:36:44.142952 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 23:36:44.155688 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 24 23:36:44.755100 amazon-ssm-agent[2171]: 2026-04-24 23:36:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 24 23:36:44.848701 systemd[1]: Started sshd@1-172.31.28.13:22-20.229.252.112:55832.service - OpenSSH per-connection server daemon (20.229.252.112:55832). Apr 24 23:36:44.856217 amazon-ssm-agent[2171]: 2026-04-24 23:36:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2395) started Apr 24 23:36:44.911524 kubelet[2369]: E0424 23:36:44.911387 2369 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:36:44.917038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:36:44.918236 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 24 23:36:44.956927 amazon-ssm-agent[2171]: 2026-04-24 23:36:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 24 23:36:45.846158 sshd[2400]: Accepted publickey for core from 20.229.252.112 port 55832 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:36:45.848680 sshd[2400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:36:45.857624 systemd-logind[2104]: New session 2 of user core. Apr 24 23:36:45.870701 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 24 23:36:46.533463 sshd[2400]: pam_unix(sshd:session): session closed for user core Apr 24 23:36:46.538043 systemd[1]: sshd@1-172.31.28.13:22-20.229.252.112:55832.service: Deactivated successfully. Apr 24 23:36:46.543547 systemd-logind[2104]: Session 2 logged out. Waiting for processes to exit. Apr 24 23:36:46.547215 systemd[1]: session-2.scope: Deactivated successfully. Apr 24 23:36:46.549229 systemd-logind[2104]: Removed session 2. Apr 24 23:36:46.720563 systemd[1]: Started sshd@2-172.31.28.13:22-20.229.252.112:33580.service - OpenSSH per-connection server daemon (20.229.252.112:33580). Apr 24 23:36:47.586170 systemd-resolved[2021]: Clock change detected. Flushing caches. Apr 24 23:36:48.098027 sshd[2415]: Accepted publickey for core from 20.229.252.112 port 33580 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:36:48.099744 sshd[2415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:36:48.108384 systemd-logind[2104]: New session 3 of user core. Apr 24 23:36:48.119486 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 23:36:48.802362 sshd[2415]: pam_unix(sshd:session): session closed for user core Apr 24 23:36:48.809597 systemd[1]: sshd@2-172.31.28.13:22-20.229.252.112:33580.service: Deactivated successfully. Apr 24 23:36:48.817305 systemd[1]: session-3.scope: Deactivated successfully. 
Apr 24 23:36:48.818750 systemd-logind[2104]: Session 3 logged out. Waiting for processes to exit. Apr 24 23:36:48.820955 systemd-logind[2104]: Removed session 3. Apr 24 23:36:48.971615 systemd[1]: Started sshd@3-172.31.28.13:22-20.229.252.112:33594.service - OpenSSH per-connection server daemon (20.229.252.112:33594). Apr 24 23:36:50.002029 sshd[2423]: Accepted publickey for core from 20.229.252.112 port 33594 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:36:50.003786 sshd[2423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:36:50.012230 systemd-logind[2104]: New session 4 of user core. Apr 24 23:36:50.019509 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 24 23:36:50.708277 sshd[2423]: pam_unix(sshd:session): session closed for user core Apr 24 23:36:50.715864 systemd[1]: sshd@3-172.31.28.13:22-20.229.252.112:33594.service: Deactivated successfully. Apr 24 23:36:50.717284 systemd-logind[2104]: Session 4 logged out. Waiting for processes to exit. Apr 24 23:36:50.722247 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 23:36:50.723766 systemd-logind[2104]: Removed session 4. Apr 24 23:36:50.885460 systemd[1]: Started sshd@4-172.31.28.13:22-20.229.252.112:33602.service - OpenSSH per-connection server daemon (20.229.252.112:33602). Apr 24 23:36:51.903026 sshd[2431]: Accepted publickey for core from 20.229.252.112 port 33602 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:36:51.905045 sshd[2431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:36:51.914085 systemd-logind[2104]: New session 5 of user core. Apr 24 23:36:51.923468 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 24 23:36:52.477379 sudo[2435]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 23:36:52.478027 sudo[2435]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:36:52.497119 sudo[2435]: pam_unix(sudo:session): session closed for user root Apr 24 23:36:52.663427 sshd[2431]: pam_unix(sshd:session): session closed for user core Apr 24 23:36:52.670768 systemd[1]: sshd@4-172.31.28.13:22-20.229.252.112:33602.service: Deactivated successfully. Apr 24 23:36:52.672096 systemd-logind[2104]: Session 5 logged out. Waiting for processes to exit. Apr 24 23:36:52.678290 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 23:36:52.680506 systemd-logind[2104]: Removed session 5. Apr 24 23:36:52.823476 systemd[1]: Started sshd@5-172.31.28.13:22-20.229.252.112:33616.service - OpenSSH per-connection server daemon (20.229.252.112:33616). Apr 24 23:36:53.814047 sshd[2440]: Accepted publickey for core from 20.229.252.112 port 33616 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:36:53.816642 sshd[2440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:36:53.825797 systemd-logind[2104]: New session 6 of user core. Apr 24 23:36:53.831797 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 24 23:36:54.339474 sudo[2445]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 23:36:54.340134 sudo[2445]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:36:54.346535 sudo[2445]: pam_unix(sudo:session): session closed for user root Apr 24 23:36:54.356392 sudo[2444]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 24 23:36:54.357105 sudo[2444]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:36:54.382920 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 24 23:36:54.387224 auditctl[2448]: No rules Apr 24 23:36:54.388048 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 23:36:54.388545 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 24 23:36:54.405697 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:36:54.448014 augenrules[2467]: No rules Apr 24 23:36:54.449711 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:36:54.455301 sudo[2444]: pam_unix(sudo:session): session closed for user root Apr 24 23:36:54.617266 sshd[2440]: pam_unix(sshd:session): session closed for user core Apr 24 23:36:54.625296 systemd-logind[2104]: Session 6 logged out. Waiting for processes to exit. Apr 24 23:36:54.626664 systemd[1]: sshd@5-172.31.28.13:22-20.229.252.112:33616.service: Deactivated successfully. Apr 24 23:36:54.632089 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 23:36:54.633968 systemd-logind[2104]: Removed session 6. Apr 24 23:36:54.803468 systemd[1]: Started sshd@6-172.31.28.13:22-20.229.252.112:33630.service - OpenSSH per-connection server daemon (20.229.252.112:33630). Apr 24 23:36:55.304741 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 24 23:36:55.318312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:36:55.642502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:36:55.657596 (kubelet)[2491]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:36:55.732892 kubelet[2491]: E0424 23:36:55.732756 2491 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:36:55.742287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:36:55.742692 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:36:55.821301 sshd[2477]: Accepted publickey for core from 20.229.252.112 port 33630 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:36:55.823746 sshd[2477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:36:55.834192 systemd-logind[2104]: New session 7 of user core. Apr 24 23:36:55.841339 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 24 23:36:56.362166 sudo[2501]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 23:36:56.362825 sudo[2501]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:36:56.952393 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 24 23:36:56.953528 (dockerd)[2516]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 23:36:57.450780 dockerd[2516]: time="2026-04-24T23:36:57.450684872Z" level=info msg="Starting up" Apr 24 23:36:57.748649 dockerd[2516]: time="2026-04-24T23:36:57.748487290Z" level=info msg="Loading containers: start." Apr 24 23:36:57.937014 kernel: Initializing XFRM netlink socket Apr 24 23:36:57.978935 (udev-worker)[2538]: Network interface NamePolicy= disabled on kernel command line. Apr 24 23:36:58.067083 systemd-networkd[1685]: docker0: Link UP Apr 24 23:36:58.094545 dockerd[2516]: time="2026-04-24T23:36:58.094470968Z" level=info msg="Loading containers: done." Apr 24 23:36:58.119159 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2733039281-merged.mount: Deactivated successfully. Apr 24 23:36:58.125252 dockerd[2516]: time="2026-04-24T23:36:58.125149520Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 23:36:58.125419 dockerd[2516]: time="2026-04-24T23:36:58.125329688Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 24 23:36:58.125573 dockerd[2516]: time="2026-04-24T23:36:58.125516804Z" level=info msg="Daemon has completed initialization" Apr 24 23:36:58.198389 dockerd[2516]: time="2026-04-24T23:36:58.198072812Z" level=info msg="API listen on /run/docker.sock" Apr 24 23:36:58.200375 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 24 23:36:59.172644 containerd[2134]: time="2026-04-24T23:36:59.172237233Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 24 23:36:59.851680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971376572.mount: Deactivated successfully. Apr 24 23:37:01.301395 containerd[2134]: time="2026-04-24T23:37:01.301307004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:01.303674 containerd[2134]: time="2026-04-24T23:37:01.303601764Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008787" Apr 24 23:37:01.307012 containerd[2134]: time="2026-04-24T23:37:01.305728956Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:01.312225 containerd[2134]: time="2026-04-24T23:37:01.312167124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:01.314917 containerd[2134]: time="2026-04-24T23:37:01.314847744Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 2.142546551s" Apr 24 23:37:01.315207 containerd[2134]: time="2026-04-24T23:37:01.315170220Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\"" Apr 24 23:37:01.316557 containerd[2134]: 
time="2026-04-24T23:37:01.316513872Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 24 23:37:02.742070 containerd[2134]: time="2026-04-24T23:37:02.741115131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:02.745256 containerd[2134]: time="2026-04-24T23:37:02.744810159Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297774" Apr 24 23:37:02.748016 containerd[2134]: time="2026-04-24T23:37:02.747228567Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:02.754032 containerd[2134]: time="2026-04-24T23:37:02.753638451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:02.756384 containerd[2134]: time="2026-04-24T23:37:02.756055383Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.439207851s" Apr 24 23:37:02.756384 containerd[2134]: time="2026-04-24T23:37:02.756116799Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\"" Apr 24 23:37:02.757403 containerd[2134]: time="2026-04-24T23:37:02.757120491Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 24 
23:37:04.027443 containerd[2134]: time="2026-04-24T23:37:04.027360829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:04.030452 containerd[2134]: time="2026-04-24T23:37:04.030390661Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141358" Apr 24 23:37:04.032302 containerd[2134]: time="2026-04-24T23:37:04.032249461Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:04.038436 containerd[2134]: time="2026-04-24T23:37:04.038342029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:04.041056 containerd[2134]: time="2026-04-24T23:37:04.040807225Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.283633718s" Apr 24 23:37:04.041056 containerd[2134]: time="2026-04-24T23:37:04.040867309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\"" Apr 24 23:37:04.042116 containerd[2134]: time="2026-04-24T23:37:04.041776357Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 24 23:37:05.354160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount240827935.mount: Deactivated successfully. 
Apr 24 23:37:05.804896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 24 23:37:05.813431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:37:06.102434 containerd[2134]: time="2026-04-24T23:37:06.100559055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:06.104711 containerd[2134]: time="2026-04-24T23:37:06.104632527Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040508" Apr 24 23:37:06.106898 containerd[2134]: time="2026-04-24T23:37:06.106828347Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:06.116703 containerd[2134]: time="2026-04-24T23:37:06.114742756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:06.117034 containerd[2134]: time="2026-04-24T23:37:06.116935936Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 2.075098235s" Apr 24 23:37:06.117207 containerd[2134]: time="2026-04-24T23:37:06.117170200Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\"" Apr 24 23:37:06.119258 containerd[2134]: time="2026-04-24T23:37:06.119203648Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 24 
23:37:06.260335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:37:06.281544 (kubelet)[2740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:37:06.361123 kubelet[2740]: E0424 23:37:06.360439 2740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:37:06.367359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:37:06.367802 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:37:06.642258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254823449.mount: Deactivated successfully. Apr 24 23:37:07.862168 containerd[2134]: time="2026-04-24T23:37:07.862069388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:07.864570 containerd[2134]: time="2026-04-24T23:37:07.864494528Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Apr 24 23:37:07.867807 containerd[2134]: time="2026-04-24T23:37:07.866791136Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:07.875961 containerd[2134]: time="2026-04-24T23:37:07.875887916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:07.878643 containerd[2134]: time="2026-04-24T23:37:07.878576264Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.75912418s" Apr 24 23:37:07.878808 containerd[2134]: time="2026-04-24T23:37:07.878640788Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Apr 24 23:37:07.879684 containerd[2134]: time="2026-04-24T23:37:07.879616892Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 24 23:37:08.400243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222881629.mount: Deactivated successfully. Apr 24 23:37:08.413822 containerd[2134]: time="2026-04-24T23:37:08.413732599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:08.416736 containerd[2134]: time="2026-04-24T23:37:08.416622583Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Apr 24 23:37:08.419299 containerd[2134]: time="2026-04-24T23:37:08.419218603Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:08.426679 containerd[2134]: time="2026-04-24T23:37:08.426594331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:08.429619 containerd[2134]: time="2026-04-24T23:37:08.429384559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 549.689175ms" Apr 24 23:37:08.429619 containerd[2134]: time="2026-04-24T23:37:08.429445879Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 24 23:37:08.430609 containerd[2134]: time="2026-04-24T23:37:08.430531603Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 24 23:37:09.026907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611690877.mount: Deactivated successfully. Apr 24 23:37:11.326365 containerd[2134]: time="2026-04-24T23:37:11.326276733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:11.328741 containerd[2134]: time="2026-04-24T23:37:11.328664733Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886366" Apr 24 23:37:11.331821 containerd[2134]: time="2026-04-24T23:37:11.330751041Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:11.337617 containerd[2134]: time="2026-04-24T23:37:11.337558581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:11.340213 containerd[2134]: time="2026-04-24T23:37:11.340103769Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.90950283s" Apr 24 23:37:11.340450 containerd[2134]: time="2026-04-24T23:37:11.340411401Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Apr 24 23:37:12.329346 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 24 23:37:16.554849 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 24 23:37:16.565901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:37:16.979384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:37:16.997767 (kubelet)[2905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:37:17.084586 kubelet[2905]: E0424 23:37:17.084522 2905 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:37:17.095327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:37:17.096489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:37:17.204910 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:37:17.219573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:37:17.296481 systemd[1]: Reloading requested from client PID 2921 ('systemctl') (unit session-7.scope)... Apr 24 23:37:17.296792 systemd[1]: Reloading... Apr 24 23:37:17.556201 zram_generator::config[2964]: No configuration found. 
Apr 24 23:37:17.849680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:37:18.041801 systemd[1]: Reloading finished in 744 ms. Apr 24 23:37:18.129645 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 24 23:37:18.129876 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 24 23:37:18.130550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:37:18.142919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:37:18.525443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:37:18.541862 (kubelet)[3034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:37:18.626534 kubelet[3034]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:37:18.626534 kubelet[3034]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:37:18.626534 kubelet[3034]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 23:37:18.627255 kubelet[3034]: I0424 23:37:18.626588 3034 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:37:20.964279 kubelet[3034]: I0424 23:37:20.964190 3034 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:37:20.964279 kubelet[3034]: I0424 23:37:20.964251 3034 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:37:20.965150 kubelet[3034]: I0424 23:37:20.964666 3034 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:37:21.009798 kubelet[3034]: E0424 23:37:21.009735 3034 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:37:21.012050 kubelet[3034]: I0424 23:37:21.011774 3034 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:37:21.032044 kubelet[3034]: E0424 23:37:21.031553 3034 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:37:21.032044 kubelet[3034]: I0424 23:37:21.031614 3034 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 24 23:37:21.038750 kubelet[3034]: I0424 23:37:21.038692 3034 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 24 23:37:21.040062 kubelet[3034]: I0424 23:37:21.039954 3034 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:37:21.040514 kubelet[3034]: I0424 23:37:21.040244 3034 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-13","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 24 23:37:21.041061 kubelet[3034]: I0424 23:37:21.040776 3034 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 
23:37:21.041061 kubelet[3034]: I0424 23:37:21.040810 3034 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:37:21.041280 kubelet[3034]: I0424 23:37:21.041233 3034 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:37:21.048010 kubelet[3034]: I0424 23:37:21.047681 3034 kubelet.go:480] "Attempting to sync node with API server" Apr 24 23:37:21.048186 kubelet[3034]: I0424 23:37:21.048018 3034 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:37:21.049722 kubelet[3034]: I0424 23:37:21.049643 3034 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:37:21.052352 kubelet[3034]: I0424 23:37:21.051951 3034 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:37:21.060103 kubelet[3034]: I0424 23:37:21.059812 3034 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:37:21.061408 kubelet[3034]: I0424 23:37:21.061355 3034 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:37:21.063080 kubelet[3034]: W0424 23:37:21.061849 3034 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 24 23:37:21.069653 kubelet[3034]: I0424 23:37:21.069615 3034 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:37:21.069909 kubelet[3034]: I0424 23:37:21.069882 3034 server.go:1289] "Started kubelet" Apr 24 23:37:21.070439 kubelet[3034]: E0424 23:37:21.070385 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-13&limit=500&resourceVersion=0\": dial tcp 172.31.28.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:37:21.073591 kubelet[3034]: E0424 23:37:21.073511 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:37:21.073856 kubelet[3034]: I0424 23:37:21.073770 3034 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:37:21.076678 kubelet[3034]: I0424 23:37:21.076546 3034 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:37:21.077510 kubelet[3034]: I0424 23:37:21.077466 3034 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:37:21.079605 kubelet[3034]: I0424 23:37:21.079540 3034 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:37:21.087502 kubelet[3034]: E0424 23:37:21.085198 3034 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.13:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-13.18a96f3e3c9b264a default 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-13,UID:ip-172-31-28-13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-13,},FirstTimestamp:2026-04-24 23:37:21.069827658 +0000 UTC m=+2.519385422,LastTimestamp:2026-04-24 23:37:21.069827658 +0000 UTC m=+2.519385422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-13,}" Apr 24 23:37:21.094014 kubelet[3034]: I0424 23:37:21.093104 3034 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:37:21.094326 kubelet[3034]: I0424 23:37:21.094292 3034 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:37:21.094952 kubelet[3034]: E0424 23:37:21.094899 3034 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-13\" not found" Apr 24 23:37:21.097127 kubelet[3034]: I0424 23:37:21.096329 3034 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:37:21.099263 kubelet[3034]: I0424 23:37:21.096397 3034 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:37:21.102276 kubelet[3034]: I0424 23:37:21.096421 3034 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:37:21.102276 kubelet[3034]: E0424 23:37:21.097785 3034 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:37:21.102276 kubelet[3034]: I0424 23:37:21.098142 3034 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:37:21.102803 kubelet[3034]: I0424 23:37:21.102472 3034 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:37:21.105210 kubelet[3034]: E0424 23:37:21.098369 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:37:21.105210 kubelet[3034]: E0424 23:37:21.098487 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-13?timeout=10s\": dial tcp 172.31.28.13:6443: connect: connection refused" interval="200ms" Apr 24 23:37:21.109043 kubelet[3034]: I0424 23:37:21.108864 3034 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:37:21.148548 kubelet[3034]: I0424 23:37:21.148396 3034 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:37:21.153888 kubelet[3034]: I0424 23:37:21.153293 3034 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 24 23:37:21.153888 kubelet[3034]: I0424 23:37:21.153342 3034 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:37:21.153888 kubelet[3034]: I0424 23:37:21.153380 3034 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 24 23:37:21.153888 kubelet[3034]: I0424 23:37:21.153396 3034 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:37:21.153888 kubelet[3034]: E0424 23:37:21.153469 3034 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:37:21.156087 kubelet[3034]: E0424 23:37:21.155961 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:37:21.185140 kubelet[3034]: I0424 23:37:21.185096 3034 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:37:21.185729 kubelet[3034]: I0424 23:37:21.185349 3034 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:37:21.185729 kubelet[3034]: I0424 23:37:21.185393 3034 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:37:21.189700 kubelet[3034]: I0424 23:37:21.189659 3034 policy_none.go:49] "None policy: Start" Apr 24 23:37:21.189917 kubelet[3034]: I0424 23:37:21.189892 3034 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:37:21.190062 kubelet[3034]: I0424 23:37:21.190041 3034 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:37:21.200867 kubelet[3034]: E0424 23:37:21.200804 3034 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:37:21.201483 kubelet[3034]: I0424 23:37:21.201190 3034 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:37:21.201483 kubelet[3034]: I0424 23:37:21.201233 3034 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:37:21.205216 kubelet[3034]: I0424 
23:37:21.205095 3034 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:37:21.208875 kubelet[3034]: E0424 23:37:21.208623 3034 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:37:21.208875 kubelet[3034]: E0424 23:37:21.208703 3034 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-13\" not found" Apr 24 23:37:21.272163 kubelet[3034]: E0424 23:37:21.271173 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-13\" not found" node="ip-172-31-28-13" Apr 24 23:37:21.282061 kubelet[3034]: E0424 23:37:21.281967 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-13\" not found" node="ip-172-31-28-13" Apr 24 23:37:21.287633 kubelet[3034]: E0424 23:37:21.287577 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-13\" not found" node="ip-172-31-28-13" Apr 24 23:37:21.303725 kubelet[3034]: I0424 23:37:21.303662 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0565166f08fe9860ff12f12d62915211-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-13\" (UID: \"0565166f08fe9860ff12f12d62915211\") " pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:21.303895 kubelet[3034]: I0424 23:37:21.303745 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0565166f08fe9860ff12f12d62915211-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-13\" (UID: \"0565166f08fe9860ff12f12d62915211\") " 
pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:21.303895 kubelet[3034]: I0424 23:37:21.303793 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0565166f08fe9860ff12f12d62915211-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-13\" (UID: \"0565166f08fe9860ff12f12d62915211\") " pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:21.303895 kubelet[3034]: I0424 23:37:21.303837 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f25c14caf52b956ac62a340d3daefc9b-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-13\" (UID: \"f25c14caf52b956ac62a340d3daefc9b\") " pod="kube-system/kube-scheduler-ip-172-31-28-13" Apr 24 23:37:21.304146 kubelet[3034]: I0424 23:37:21.303878 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf8c8c7ff6a2a75fd43b43b0bbe07ee9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-13\" (UID: \"bf8c8c7ff6a2a75fd43b43b0bbe07ee9\") " pod="kube-system/kube-apiserver-ip-172-31-28-13" Apr 24 23:37:21.304146 kubelet[3034]: I0424 23:37:21.303941 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0565166f08fe9860ff12f12d62915211-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-13\" (UID: \"0565166f08fe9860ff12f12d62915211\") " pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:21.304146 kubelet[3034]: I0424 23:37:21.304010 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf8c8c7ff6a2a75fd43b43b0bbe07ee9-ca-certs\") pod 
\"kube-apiserver-ip-172-31-28-13\" (UID: \"bf8c8c7ff6a2a75fd43b43b0bbe07ee9\") " pod="kube-system/kube-apiserver-ip-172-31-28-13" Apr 24 23:37:21.304146 kubelet[3034]: I0424 23:37:21.304057 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf8c8c7ff6a2a75fd43b43b0bbe07ee9-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-13\" (UID: \"bf8c8c7ff6a2a75fd43b43b0bbe07ee9\") " pod="kube-system/kube-apiserver-ip-172-31-28-13" Apr 24 23:37:21.304146 kubelet[3034]: I0424 23:37:21.304095 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0565166f08fe9860ff12f12d62915211-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-13\" (UID: \"0565166f08fe9860ff12f12d62915211\") " pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:21.305919 kubelet[3034]: E0424 23:37:21.305824 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-13?timeout=10s\": dial tcp 172.31.28.13:6443: connect: connection refused" interval="400ms" Apr 24 23:37:21.306206 kubelet[3034]: I0424 23:37:21.306158 3034 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-13" Apr 24 23:37:21.307030 kubelet[3034]: E0424 23:37:21.306927 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.13:6443/api/v1/nodes\": dial tcp 172.31.28.13:6443: connect: connection refused" node="ip-172-31-28-13" Apr 24 23:37:21.324929 kubelet[3034]: E0424 23:37:21.324750 3034 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.13:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-13.18a96f3e3c9b264a 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-13,UID:ip-172-31-28-13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-13,},FirstTimestamp:2026-04-24 23:37:21.069827658 +0000 UTC m=+2.519385422,LastTimestamp:2026-04-24 23:37:21.069827658 +0000 UTC m=+2.519385422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-13,}" Apr 24 23:37:21.509485 kubelet[3034]: I0424 23:37:21.509415 3034 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-13" Apr 24 23:37:21.510022 kubelet[3034]: E0424 23:37:21.509907 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.13:6443/api/v1/nodes\": dial tcp 172.31.28.13:6443: connect: connection refused" node="ip-172-31-28-13" Apr 24 23:37:21.573911 containerd[2134]: time="2026-04-24T23:37:21.573818180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-13,Uid:bf8c8c7ff6a2a75fd43b43b0bbe07ee9,Namespace:kube-system,Attempt:0,}" Apr 24 23:37:21.584456 containerd[2134]: time="2026-04-24T23:37:21.584363984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-13,Uid:0565166f08fe9860ff12f12d62915211,Namespace:kube-system,Attempt:0,}" Apr 24 23:37:21.590497 containerd[2134]: time="2026-04-24T23:37:21.590267120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-13,Uid:f25c14caf52b956ac62a340d3daefc9b,Namespace:kube-system,Attempt:0,}" Apr 24 23:37:21.707061 kubelet[3034]: E0424 23:37:21.706832 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-13?timeout=10s\": 
dial tcp 172.31.28.13:6443: connect: connection refused" interval="800ms" Apr 24 23:37:21.913517 kubelet[3034]: I0424 23:37:21.913334 3034 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-13" Apr 24 23:37:21.914734 kubelet[3034]: E0424 23:37:21.914666 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.13:6443/api/v1/nodes\": dial tcp 172.31.28.13:6443: connect: connection refused" node="ip-172-31-28-13" Apr 24 23:37:22.097349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4265227799.mount: Deactivated successfully. Apr 24 23:37:22.115070 containerd[2134]: time="2026-04-24T23:37:22.114581059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:37:22.116956 containerd[2134]: time="2026-04-24T23:37:22.116854183Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:37:22.119043 containerd[2134]: time="2026-04-24T23:37:22.118788175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 24 23:37:22.121562 containerd[2134]: time="2026-04-24T23:37:22.121107679Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:37:22.123489 containerd[2134]: time="2026-04-24T23:37:22.123406975Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:37:22.127039 containerd[2134]: time="2026-04-24T23:37:22.126466723Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:37:22.127976 containerd[2134]: time="2026-04-24T23:37:22.127900615Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:37:22.132586 containerd[2134]: time="2026-04-24T23:37:22.132515083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:37:22.138727 containerd[2134]: time="2026-04-24T23:37:22.138618595Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.662079ms" Apr 24 23:37:22.155538 containerd[2134]: time="2026-04-24T23:37:22.155154727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 570.645147ms" Apr 24 23:37:22.157621 containerd[2134]: time="2026-04-24T23:37:22.157550743Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.134943ms" Apr 24 23:37:22.244534 kubelet[3034]: E0424 23:37:22.244361 3034 reflector.go:200] "Failed to watch" err="failed to list 
*v1.Service: Get \"https://172.31.28.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:37:22.350488 containerd[2134]: time="2026-04-24T23:37:22.349065248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:22.350488 containerd[2134]: time="2026-04-24T23:37:22.349176920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:22.350488 containerd[2134]: time="2026-04-24T23:37:22.349214384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:22.350488 containerd[2134]: time="2026-04-24T23:37:22.349380692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:22.367613 containerd[2134]: time="2026-04-24T23:37:22.366839240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:22.367613 containerd[2134]: time="2026-04-24T23:37:22.366947432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:22.367613 containerd[2134]: time="2026-04-24T23:37:22.367036028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:22.367613 containerd[2134]: time="2026-04-24T23:37:22.367233728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:22.374029 containerd[2134]: time="2026-04-24T23:37:22.373651088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:22.374029 containerd[2134]: time="2026-04-24T23:37:22.373766828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:22.374029 containerd[2134]: time="2026-04-24T23:37:22.373830140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:22.376019 containerd[2134]: time="2026-04-24T23:37:22.374704640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:22.504486 containerd[2134]: time="2026-04-24T23:37:22.503731761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-13,Uid:0565166f08fe9860ff12f12d62915211,Namespace:kube-system,Attempt:0,} returns sandbox id \"f387571d81e39d5747b6ddbd26feb4d8d5b7b457e6262129b72383c2ba62c4ad\"" Apr 24 23:37:22.507576 kubelet[3034]: E0424 23:37:22.507499 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-13?timeout=10s\": dial tcp 172.31.28.13:6443: connect: connection refused" interval="1.6s" Apr 24 23:37:22.521313 containerd[2134]: time="2026-04-24T23:37:22.520713009Z" level=info msg="CreateContainer within sandbox \"f387571d81e39d5747b6ddbd26feb4d8d5b7b457e6262129b72383c2ba62c4ad\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 23:37:22.553845 containerd[2134]: time="2026-04-24T23:37:22.553784133Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-13,Uid:bf8c8c7ff6a2a75fd43b43b0bbe07ee9,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f323911945d845cc8650a722d2da38d67c732e5460e52128d46afa53009dcc5\"" Apr 24 23:37:22.559875 containerd[2134]: time="2026-04-24T23:37:22.559204725Z" level=info msg="CreateContainer within sandbox \"f387571d81e39d5747b6ddbd26feb4d8d5b7b457e6262129b72383c2ba62c4ad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"54e35e40d2c3770b4dc5f31de09d647defcb9872e422fe73a802d53977097cf2\"" Apr 24 23:37:22.563127 containerd[2134]: time="2026-04-24T23:37:22.563057361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-13,Uid:f25c14caf52b956ac62a340d3daefc9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"03707ac5e32539a00ef064afb3ec89f0888981912a8ecee2841bc9b853c629c7\"" Apr 24 23:37:22.566031 containerd[2134]: time="2026-04-24T23:37:22.565371273Z" level=info msg="StartContainer for \"54e35e40d2c3770b4dc5f31de09d647defcb9872e422fe73a802d53977097cf2\"" Apr 24 23:37:22.570459 containerd[2134]: time="2026-04-24T23:37:22.570381633Z" level=info msg="CreateContainer within sandbox \"3f323911945d845cc8650a722d2da38d67c732e5460e52128d46afa53009dcc5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 23:37:22.577292 kubelet[3034]: E0424 23:37:22.577233 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:37:22.585971 containerd[2134]: time="2026-04-24T23:37:22.585781797Z" level=info msg="CreateContainer within sandbox \"03707ac5e32539a00ef064afb3ec89f0888981912a8ecee2841bc9b853c629c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" 
Apr 24 23:37:22.613564 containerd[2134]: time="2026-04-24T23:37:22.612680349Z" level=info msg="CreateContainer within sandbox \"3f323911945d845cc8650a722d2da38d67c732e5460e52128d46afa53009dcc5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bd61183410526ea372251f04f6f1760da275c9ccd2e10f85eb48c9f77866a7bb\"" Apr 24 23:37:22.615810 containerd[2134]: time="2026-04-24T23:37:22.614875473Z" level=info msg="StartContainer for \"bd61183410526ea372251f04f6f1760da275c9ccd2e10f85eb48c9f77866a7bb\"" Apr 24 23:37:22.621458 kubelet[3034]: E0424 23:37:22.621405 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-13&limit=500&resourceVersion=0\": dial tcp 172.31.28.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:37:22.632212 containerd[2134]: time="2026-04-24T23:37:22.632094790Z" level=info msg="CreateContainer within sandbox \"03707ac5e32539a00ef064afb3ec89f0888981912a8ecee2841bc9b853c629c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b677b4eda1abfd69bf5b4e42dae89887d2266e5d9932aad42c36d593f7373580\"" Apr 24 23:37:22.634288 containerd[2134]: time="2026-04-24T23:37:22.634068430Z" level=info msg="StartContainer for \"b677b4eda1abfd69bf5b4e42dae89887d2266e5d9932aad42c36d593f7373580\"" Apr 24 23:37:22.670339 kubelet[3034]: E0424 23:37:22.670286 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:37:22.722527 kubelet[3034]: I0424 23:37:22.722483 3034 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-13" Apr 24 
23:37:22.723967 kubelet[3034]: E0424 23:37:22.723911 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.13:6443/api/v1/nodes\": dial tcp 172.31.28.13:6443: connect: connection refused" node="ip-172-31-28-13" Apr 24 23:37:22.759271 containerd[2134]: time="2026-04-24T23:37:22.758163478Z" level=info msg="StartContainer for \"54e35e40d2c3770b4dc5f31de09d647defcb9872e422fe73a802d53977097cf2\" returns successfully" Apr 24 23:37:22.857542 containerd[2134]: time="2026-04-24T23:37:22.854689103Z" level=info msg="StartContainer for \"b677b4eda1abfd69bf5b4e42dae89887d2266e5d9932aad42c36d593f7373580\" returns successfully" Apr 24 23:37:22.873182 containerd[2134]: time="2026-04-24T23:37:22.872340899Z" level=info msg="StartContainer for \"bd61183410526ea372251f04f6f1760da275c9ccd2e10f85eb48c9f77866a7bb\" returns successfully" Apr 24 23:37:23.175143 kubelet[3034]: E0424 23:37:23.174781 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-13\" not found" node="ip-172-31-28-13" Apr 24 23:37:23.185297 kubelet[3034]: E0424 23:37:23.185213 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-13\" not found" node="ip-172-31-28-13" Apr 24 23:37:23.191024 kubelet[3034]: E0424 23:37:23.189976 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-13\" not found" node="ip-172-31-28-13" Apr 24 23:37:24.194061 kubelet[3034]: E0424 23:37:24.193879 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-13\" not found" node="ip-172-31-28-13" Apr 24 23:37:24.197659 kubelet[3034]: E0424 23:37:24.196088 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-13\" not found" 
node="ip-172-31-28-13" Apr 24 23:37:24.333481 kubelet[3034]: I0424 23:37:24.332314 3034 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-13" Apr 24 23:37:25.198164 kubelet[3034]: E0424 23:37:25.197012 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-13\" not found" node="ip-172-31-28-13" Apr 24 23:37:25.710689 kubelet[3034]: E0424 23:37:25.710641 3034 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-13\" not found" node="ip-172-31-28-13" Apr 24 23:37:25.863145 update_engine[2110]: I20260424 23:37:25.863063 2110 update_attempter.cc:509] Updating boot flags... Apr 24 23:37:25.903528 kubelet[3034]: I0424 23:37:25.899809 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-13" Apr 24 23:37:25.903528 kubelet[3034]: I0424 23:37:25.899945 3034 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-13" Apr 24 23:37:25.968060 kubelet[3034]: I0424 23:37:25.966405 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:26.026424 kubelet[3034]: E0424 23:37:26.026377 3034 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-13\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-13" Apr 24 23:37:26.029026 kubelet[3034]: I0424 23:37:26.026574 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:26.029848 kubelet[3034]: E0424 23:37:26.029697 3034 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-13\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 
23:37:26.043045 kubelet[3034]: E0424 23:37:26.040203 3034 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-13\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:26.043045 kubelet[3034]: I0424 23:37:26.040267 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-13" Apr 24 23:37:26.045115 kubelet[3034]: E0424 23:37:26.044076 3034 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-13\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-13" Apr 24 23:37:26.077320 kubelet[3034]: I0424 23:37:26.076839 3034 apiserver.go:52] "Watching apiserver" Apr 24 23:37:26.100123 kubelet[3034]: I0424 23:37:26.099976 3034 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:37:26.132377 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3333) Apr 24 23:37:26.432051 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3337) Apr 24 23:37:28.043559 systemd[1]: Reloading requested from client PID 3502 ('systemctl') (unit session-7.scope)... Apr 24 23:37:28.043592 systemd[1]: Reloading... Apr 24 23:37:28.248035 zram_generator::config[3548]: No configuration found. Apr 24 23:37:28.521101 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:37:28.722502 systemd[1]: Reloading finished in 678 ms. Apr 24 23:37:28.790896 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:37:28.816514 systemd[1]: kubelet.service: Deactivated successfully. 
Apr 24 23:37:28.817196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:37:28.826886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:37:29.201669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:37:29.232802 (kubelet)[3612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:37:29.333449 kubelet[3612]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:37:29.334816 kubelet[3612]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:37:29.334816 kubelet[3612]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 23:37:29.334816 kubelet[3612]: I0424 23:37:29.334275 3612 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:37:29.349083 kubelet[3612]: I0424 23:37:29.348971 3612 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:37:29.349083 kubelet[3612]: I0424 23:37:29.349078 3612 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:37:29.350301 kubelet[3612]: I0424 23:37:29.350247 3612 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:37:29.355327 kubelet[3612]: I0424 23:37:29.355277 3612 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 23:37:29.361445 kubelet[3612]: I0424 23:37:29.361393 3612 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:37:29.369710 kubelet[3612]: E0424 23:37:29.369633 3612 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:37:29.369710 kubelet[3612]: I0424 23:37:29.369689 3612 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 24 23:37:29.377272 kubelet[3612]: I0424 23:37:29.377213 3612 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 24 23:37:29.381938 kubelet[3612]: I0424 23:37:29.379970 3612 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:37:29.382204 kubelet[3612]: I0424 23:37:29.380074 3612 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-13","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 24 23:37:29.383024 kubelet[3612]: I0424 23:37:29.382961 3612 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 
23:37:29.383160 kubelet[3612]: I0424 23:37:29.383142 3612 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:37:29.383851 kubelet[3612]: I0424 23:37:29.383812 3612 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:37:29.386690 kubelet[3612]: I0424 23:37:29.386632 3612 kubelet.go:480] "Attempting to sync node with API server" Apr 24 23:37:29.386690 kubelet[3612]: I0424 23:37:29.386690 3612 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:37:29.386898 kubelet[3612]: I0424 23:37:29.386759 3612 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:37:29.386898 kubelet[3612]: I0424 23:37:29.386793 3612 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:37:29.391908 kubelet[3612]: I0424 23:37:29.391827 3612 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:37:29.394850 kubelet[3612]: I0424 23:37:29.393841 3612 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:37:29.408162 kubelet[3612]: I0424 23:37:29.408128 3612 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:37:29.408659 kubelet[3612]: I0424 23:37:29.408478 3612 server.go:1289] "Started kubelet" Apr 24 23:37:29.425778 kubelet[3612]: I0424 23:37:29.422977 3612 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:37:29.437644 kubelet[3612]: I0424 23:37:29.436473 3612 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:37:29.437644 kubelet[3612]: I0424 23:37:29.408679 3612 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:37:29.437644 kubelet[3612]: I0424 23:37:29.437262 3612 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 
23:37:29.442376 kubelet[3612]: I0424 23:37:29.442332 3612 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:37:29.446231 kubelet[3612]: I0424 23:37:29.445462 3612 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:37:29.449478 kubelet[3612]: I0424 23:37:29.448160 3612 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:37:29.449478 kubelet[3612]: E0424 23:37:29.448508 3612 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-13\" not found" Apr 24 23:37:29.450492 kubelet[3612]: I0424 23:37:29.450407 3612 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:37:29.450747 kubelet[3612]: I0424 23:37:29.450687 3612 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:37:29.523463 kubelet[3612]: I0424 23:37:29.521432 3612 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:37:29.523463 kubelet[3612]: I0424 23:37:29.521466 3612 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:37:29.523463 kubelet[3612]: I0424 23:37:29.522095 3612 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:37:29.552345 kubelet[3612]: I0424 23:37:29.550852 3612 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:37:29.552345 kubelet[3612]: E0424 23:37:29.550953 3612 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:37:29.572124 kubelet[3612]: I0424 23:37:29.572054 3612 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 24 23:37:29.572124 kubelet[3612]: I0424 23:37:29.572134 3612 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:37:29.572124 kubelet[3612]: I0424 23:37:29.572167 3612 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 23:37:29.572124 kubelet[3612]: I0424 23:37:29.572291 3612 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:37:29.572124 kubelet[3612]: E0424 23:37:29.572507 3612 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:37:29.672708 kubelet[3612]: E0424 23:37:29.672655 3612 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 24 23:37:29.705977 kubelet[3612]: I0424 23:37:29.705117 3612 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:37:29.705977 kubelet[3612]: I0424 23:37:29.705145 3612 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:37:29.705977 kubelet[3612]: I0424 23:37:29.705186 3612 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:37:29.705977 kubelet[3612]: I0424 23:37:29.705400 3612 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 23:37:29.705977 kubelet[3612]: I0424 23:37:29.705420 3612 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 23:37:29.705977 kubelet[3612]: I0424 23:37:29.705449 3612 policy_none.go:49] "None policy: Start" Apr 24 23:37:29.705977 kubelet[3612]: I0424 23:37:29.705467 3612 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:37:29.705977 kubelet[3612]: I0424 23:37:29.705490 3612 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:37:29.705977 kubelet[3612]: I0424 23:37:29.705662 3612 state_mem.go:75] "Updated machine memory state" Apr 24 23:37:29.712308 kubelet[3612]: E0424 23:37:29.711192 
3612 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:37:29.712308 kubelet[3612]: I0424 23:37:29.711482 3612 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:37:29.712308 kubelet[3612]: I0424 23:37:29.711502 3612 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:37:29.716802 kubelet[3612]: I0424 23:37:29.715558 3612 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:37:29.720874 kubelet[3612]: E0424 23:37:29.720835 3612 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:37:29.828390 kubelet[3612]: I0424 23:37:29.828314 3612 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-13" Apr 24 23:37:29.841314 kubelet[3612]: I0424 23:37:29.839520 3612 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-13" Apr 24 23:37:29.841314 kubelet[3612]: I0424 23:37:29.839713 3612 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-13" Apr 24 23:37:29.880020 kubelet[3612]: I0424 23:37:29.874471 3612 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-13" Apr 24 23:37:29.880020 kubelet[3612]: I0424 23:37:29.875235 3612 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-13" Apr 24 23:37:29.882828 kubelet[3612]: I0424 23:37:29.882752 3612 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:29.952574 kubelet[3612]: I0424 23:37:29.952516 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf8c8c7ff6a2a75fd43b43b0bbe07ee9-ca-certs\") pod 
\"kube-apiserver-ip-172-31-28-13\" (UID: \"bf8c8c7ff6a2a75fd43b43b0bbe07ee9\") " pod="kube-system/kube-apiserver-ip-172-31-28-13" Apr 24 23:37:29.952870 kubelet[3612]: I0424 23:37:29.952839 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf8c8c7ff6a2a75fd43b43b0bbe07ee9-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-13\" (UID: \"bf8c8c7ff6a2a75fd43b43b0bbe07ee9\") " pod="kube-system/kube-apiserver-ip-172-31-28-13" Apr 24 23:37:29.954292 kubelet[3612]: I0424 23:37:29.953334 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0565166f08fe9860ff12f12d62915211-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-13\" (UID: \"0565166f08fe9860ff12f12d62915211\") " pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:29.954541 kubelet[3612]: I0424 23:37:29.954505 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0565166f08fe9860ff12f12d62915211-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-13\" (UID: \"0565166f08fe9860ff12f12d62915211\") " pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:29.954703 kubelet[3612]: I0424 23:37:29.954677 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0565166f08fe9860ff12f12d62915211-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-13\" (UID: \"0565166f08fe9860ff12f12d62915211\") " pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:29.954832 kubelet[3612]: I0424 23:37:29.954810 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f25c14caf52b956ac62a340d3daefc9b-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-13\" (UID: \"f25c14caf52b956ac62a340d3daefc9b\") " pod="kube-system/kube-scheduler-ip-172-31-28-13" Apr 24 23:37:29.956030 kubelet[3612]: I0424 23:37:29.955498 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf8c8c7ff6a2a75fd43b43b0bbe07ee9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-13\" (UID: \"bf8c8c7ff6a2a75fd43b43b0bbe07ee9\") " pod="kube-system/kube-apiserver-ip-172-31-28-13" Apr 24 23:37:29.956030 kubelet[3612]: I0424 23:37:29.955549 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0565166f08fe9860ff12f12d62915211-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-13\" (UID: \"0565166f08fe9860ff12f12d62915211\") " pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:29.956559 kubelet[3612]: I0424 23:37:29.955584 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0565166f08fe9860ff12f12d62915211-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-13\" (UID: \"0565166f08fe9860ff12f12d62915211\") " pod="kube-system/kube-controller-manager-ip-172-31-28-13" Apr 24 23:37:30.390677 kubelet[3612]: I0424 23:37:30.390408 3612 apiserver.go:52] "Watching apiserver" Apr 24 23:37:30.450652 kubelet[3612]: I0424 23:37:30.450579 3612 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:37:30.637481 kubelet[3612]: I0424 23:37:30.636866 3612 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-13" Apr 24 23:37:30.651045 kubelet[3612]: E0424 23:37:30.650775 3612 kubelet.go:3311] "Failed creating a mirror pod" 
err="pods \"kube-apiserver-ip-172-31-28-13\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-13" Apr 24 23:37:30.736606 kubelet[3612]: I0424 23:37:30.735049 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-13" podStartSLOduration=1.73502607 podStartE2EDuration="1.73502607s" podCreationTimestamp="2026-04-24 23:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:37:30.699413658 +0000 UTC m=+1.457144901" watchObservedRunningTime="2026-04-24 23:37:30.73502607 +0000 UTC m=+1.492757277" Apr 24 23:37:30.768059 kubelet[3612]: I0424 23:37:30.767210 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-13" podStartSLOduration=1.767187078 podStartE2EDuration="1.767187078s" podCreationTimestamp="2026-04-24 23:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:37:30.738167754 +0000 UTC m=+1.495899273" watchObservedRunningTime="2026-04-24 23:37:30.767187078 +0000 UTC m=+1.524918309" Apr 24 23:37:30.805408 kubelet[3612]: I0424 23:37:30.805306 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-13" podStartSLOduration=1.80528307 podStartE2EDuration="1.80528307s" podCreationTimestamp="2026-04-24 23:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:37:30.768247842 +0000 UTC m=+1.525979073" watchObservedRunningTime="2026-04-24 23:37:30.80528307 +0000 UTC m=+1.563014289" Apr 24 23:37:32.700517 kubelet[3612]: I0424 23:37:32.700479 3612 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 23:37:32.702313 
containerd[2134]: time="2026-04-24T23:37:32.702243812Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 23:37:32.704022 kubelet[3612]: I0424 23:37:32.702792 3612 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 23:37:33.683022 kubelet[3612]: I0424 23:37:33.682184 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddfb68b7-cc14-4893-aa6d-7dff8fe31124-xtables-lock\") pod \"kube-proxy-qg27j\" (UID: \"ddfb68b7-cc14-4893-aa6d-7dff8fe31124\") " pod="kube-system/kube-proxy-qg27j" Apr 24 23:37:33.683022 kubelet[3612]: I0424 23:37:33.682273 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ddfb68b7-cc14-4893-aa6d-7dff8fe31124-kube-proxy\") pod \"kube-proxy-qg27j\" (UID: \"ddfb68b7-cc14-4893-aa6d-7dff8fe31124\") " pod="kube-system/kube-proxy-qg27j" Apr 24 23:37:33.683022 kubelet[3612]: I0424 23:37:33.682328 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddfb68b7-cc14-4893-aa6d-7dff8fe31124-lib-modules\") pod \"kube-proxy-qg27j\" (UID: \"ddfb68b7-cc14-4893-aa6d-7dff8fe31124\") " pod="kube-system/kube-proxy-qg27j" Apr 24 23:37:33.683022 kubelet[3612]: I0424 23:37:33.682386 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9ptq\" (UniqueName: \"kubernetes.io/projected/ddfb68b7-cc14-4893-aa6d-7dff8fe31124-kube-api-access-p9ptq\") pod \"kube-proxy-qg27j\" (UID: \"ddfb68b7-cc14-4893-aa6d-7dff8fe31124\") " pod="kube-system/kube-proxy-qg27j" Apr 24 23:37:33.961745 containerd[2134]: time="2026-04-24T23:37:33.961612774Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-qg27j,Uid:ddfb68b7-cc14-4893-aa6d-7dff8fe31124,Namespace:kube-system,Attempt:0,}" Apr 24 23:37:33.984112 kubelet[3612]: I0424 23:37:33.983909 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5e1596f-2102-4568-8bf0-0f0223a2bf50-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-hznk6\" (UID: \"b5e1596f-2102-4568-8bf0-0f0223a2bf50\") " pod="tigera-operator/tigera-operator-6bf85f8dd-hznk6" Apr 24 23:37:33.984112 kubelet[3612]: I0424 23:37:33.984006 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpzbx\" (UniqueName: \"kubernetes.io/projected/b5e1596f-2102-4568-8bf0-0f0223a2bf50-kube-api-access-lpzbx\") pod \"tigera-operator-6bf85f8dd-hznk6\" (UID: \"b5e1596f-2102-4568-8bf0-0f0223a2bf50\") " pod="tigera-operator/tigera-operator-6bf85f8dd-hznk6" Apr 24 23:37:34.021165 containerd[2134]: time="2026-04-24T23:37:34.020927466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:34.021577 containerd[2134]: time="2026-04-24T23:37:34.021434946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:34.021577 containerd[2134]: time="2026-04-24T23:37:34.021516186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:34.021772 containerd[2134]: time="2026-04-24T23:37:34.021718254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:34.101445 containerd[2134]: time="2026-04-24T23:37:34.101249275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qg27j,Uid:ddfb68b7-cc14-4893-aa6d-7dff8fe31124,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdb745544556ebe18a7d01b51609b48138ff29a080c2ef2b6edf05d9bec9d163\"" Apr 24 23:37:34.118337 containerd[2134]: time="2026-04-24T23:37:34.118249471Z" level=info msg="CreateContainer within sandbox \"cdb745544556ebe18a7d01b51609b48138ff29a080c2ef2b6edf05d9bec9d163\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 23:37:34.145786 containerd[2134]: time="2026-04-24T23:37:34.145706539Z" level=info msg="CreateContainer within sandbox \"cdb745544556ebe18a7d01b51609b48138ff29a080c2ef2b6edf05d9bec9d163\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"39ffec7d8e224096a9d18bff42e5d5be5fb8fa2190552a9b0c05c59e77b77e7a\"" Apr 24 23:37:34.147891 containerd[2134]: time="2026-04-24T23:37:34.147850639Z" level=info msg="StartContainer for \"39ffec7d8e224096a9d18bff42e5d5be5fb8fa2190552a9b0c05c59e77b77e7a\"" Apr 24 23:37:34.201028 containerd[2134]: time="2026-04-24T23:37:34.200924791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-hznk6,Uid:b5e1596f-2102-4568-8bf0-0f0223a2bf50,Namespace:tigera-operator,Attempt:0,}" Apr 24 23:37:34.267820 containerd[2134]: time="2026-04-24T23:37:34.266278687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:34.268356 containerd[2134]: time="2026-04-24T23:37:34.268061779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:34.268356 containerd[2134]: time="2026-04-24T23:37:34.268145467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:34.269081 containerd[2134]: time="2026-04-24T23:37:34.268868971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:34.280343 containerd[2134]: time="2026-04-24T23:37:34.280194103Z" level=info msg="StartContainer for \"39ffec7d8e224096a9d18bff42e5d5be5fb8fa2190552a9b0c05c59e77b77e7a\" returns successfully" Apr 24 23:37:34.378649 containerd[2134]: time="2026-04-24T23:37:34.378465068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-hznk6,Uid:b5e1596f-2102-4568-8bf0-0f0223a2bf50,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"617bd55722e4edf92125c67656f87910c1479e361bec00ea887957b34b40e939\"" Apr 24 23:37:34.385325 containerd[2134]: time="2026-04-24T23:37:34.385253408Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 24 23:37:34.663348 kubelet[3612]: I0424 23:37:34.663232 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qg27j" podStartSLOduration=1.663187425 podStartE2EDuration="1.663187425s" podCreationTimestamp="2026-04-24 23:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:37:34.662554677 +0000 UTC m=+5.420285908" watchObservedRunningTime="2026-04-24 23:37:34.663187425 +0000 UTC m=+5.420918656" Apr 24 23:37:35.552070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3669220383.mount: Deactivated successfully. 
Apr 24 23:37:36.842480 containerd[2134]: time="2026-04-24T23:37:36.842400492Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:36.844967 containerd[2134]: time="2026-04-24T23:37:36.844900476Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 24 23:37:36.846313 containerd[2134]: time="2026-04-24T23:37:36.846227256Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:36.850127 containerd[2134]: time="2026-04-24T23:37:36.850050516Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:36.852035 containerd[2134]: time="2026-04-24T23:37:36.851757852Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.466441288s" Apr 24 23:37:36.852035 containerd[2134]: time="2026-04-24T23:37:36.851820432Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 24 23:37:36.866892 containerd[2134]: time="2026-04-24T23:37:36.866565864Z" level=info msg="CreateContainer within sandbox \"617bd55722e4edf92125c67656f87910c1479e361bec00ea887957b34b40e939\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 24 23:37:36.891601 containerd[2134]: time="2026-04-24T23:37:36.891533316Z" level=info msg="CreateContainer within sandbox 
\"617bd55722e4edf92125c67656f87910c1479e361bec00ea887957b34b40e939\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0921298a45e4760a23e77bb18298f72e78ed634f79bd3797a4a7ea9ea5637ab0\"" Apr 24 23:37:36.892592 containerd[2134]: time="2026-04-24T23:37:36.892508076Z" level=info msg="StartContainer for \"0921298a45e4760a23e77bb18298f72e78ed634f79bd3797a4a7ea9ea5637ab0\"" Apr 24 23:37:36.994006 containerd[2134]: time="2026-04-24T23:37:36.993906973Z" level=info msg="StartContainer for \"0921298a45e4760a23e77bb18298f72e78ed634f79bd3797a4a7ea9ea5637ab0\" returns successfully" Apr 24 23:37:37.697017 kubelet[3612]: I0424 23:37:37.695308 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-hznk6" podStartSLOduration=2.2213773039999998 podStartE2EDuration="4.69525756s" podCreationTimestamp="2026-04-24 23:37:33 +0000 UTC" firstStartedPulling="2026-04-24 23:37:34.382310384 +0000 UTC m=+5.140041603" lastFinishedPulling="2026-04-24 23:37:36.85619064 +0000 UTC m=+7.613921859" observedRunningTime="2026-04-24 23:37:37.694133112 +0000 UTC m=+8.451864331" watchObservedRunningTime="2026-04-24 23:37:37.69525756 +0000 UTC m=+8.452988779" Apr 24 23:37:45.938333 sudo[2501]: pam_unix(sudo:session): session closed for user root Apr 24 23:37:46.104083 sshd[2477]: pam_unix(sshd:session): session closed for user core Apr 24 23:37:46.124726 systemd[1]: sshd@6-172.31.28.13:22-20.229.252.112:33630.service: Deactivated successfully. Apr 24 23:37:46.141706 systemd[1]: session-7.scope: Deactivated successfully. Apr 24 23:37:46.150787 systemd-logind[2104]: Session 7 logged out. Waiting for processes to exit. Apr 24 23:37:46.163113 systemd-logind[2104]: Removed session 7. 
Apr 24 23:37:56.054707 kubelet[3612]: I0424 23:37:56.054565 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6ec51516-7dd3-4654-bf40-c77f3b95bc70-typha-certs\") pod \"calico-typha-57c95c7f7f-v64w8\" (UID: \"6ec51516-7dd3-4654-bf40-c77f3b95bc70\") " pod="calico-system/calico-typha-57c95c7f7f-v64w8" Apr 24 23:37:56.054707 kubelet[3612]: I0424 23:37:56.054687 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkfxc\" (UniqueName: \"kubernetes.io/projected/6ec51516-7dd3-4654-bf40-c77f3b95bc70-kube-api-access-gkfxc\") pod \"calico-typha-57c95c7f7f-v64w8\" (UID: \"6ec51516-7dd3-4654-bf40-c77f3b95bc70\") " pod="calico-system/calico-typha-57c95c7f7f-v64w8" Apr 24 23:37:56.055672 kubelet[3612]: I0424 23:37:56.054757 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ec51516-7dd3-4654-bf40-c77f3b95bc70-tigera-ca-bundle\") pod \"calico-typha-57c95c7f7f-v64w8\" (UID: \"6ec51516-7dd3-4654-bf40-c77f3b95bc70\") " pod="calico-system/calico-typha-57c95c7f7f-v64w8" Apr 24 23:37:56.332352 containerd[2134]: time="2026-04-24T23:37:56.332114561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57c95c7f7f-v64w8,Uid:6ec51516-7dd3-4654-bf40-c77f3b95bc70,Namespace:calico-system,Attempt:0,}" Apr 24 23:37:56.357252 kubelet[3612]: I0424 23:37:56.357157 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0a0253e2-2e87-465e-9010-94b595528eec-node-certs\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.357252 kubelet[3612]: I0424 23:37:56.357253 3612 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/0a0253e2-2e87-465e-9010-94b595528eec-nodeproc\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.357507 kubelet[3612]: I0424 23:37:56.357348 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0a0253e2-2e87-465e-9010-94b595528eec-sys-fs\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.357507 kubelet[3612]: I0424 23:37:56.357398 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0a0253e2-2e87-465e-9010-94b595528eec-policysync\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.357507 kubelet[3612]: I0424 23:37:56.357438 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0a0253e2-2e87-465e-9010-94b595528eec-var-run-calico\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.357507 kubelet[3612]: I0424 23:37:56.357476 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a0253e2-2e87-465e-9010-94b595528eec-xtables-lock\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.357778 kubelet[3612]: I0424 23:37:56.357515 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: 
\"kubernetes.io/host-path/0a0253e2-2e87-465e-9010-94b595528eec-bpffs\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.357778 kubelet[3612]: I0424 23:37:56.357553 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a0253e2-2e87-465e-9010-94b595528eec-var-lib-calico\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.357778 kubelet[3612]: I0424 23:37:56.357591 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvkh2\" (UniqueName: \"kubernetes.io/projected/0a0253e2-2e87-465e-9010-94b595528eec-kube-api-access-wvkh2\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.357778 kubelet[3612]: I0424 23:37:56.357653 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0a0253e2-2e87-465e-9010-94b595528eec-cni-net-dir\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.357778 kubelet[3612]: I0424 23:37:56.357693 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0a0253e2-2e87-465e-9010-94b595528eec-cni-bin-dir\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.358111 kubelet[3612]: I0424 23:37:56.357733 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/0a0253e2-2e87-465e-9010-94b595528eec-flexvol-driver-host\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.358111 kubelet[3612]: I0424 23:37:56.357777 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0a0253e2-2e87-465e-9010-94b595528eec-cni-log-dir\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.358111 kubelet[3612]: I0424 23:37:56.357812 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a0253e2-2e87-465e-9010-94b595528eec-lib-modules\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.358111 kubelet[3612]: I0424 23:37:56.357851 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a0253e2-2e87-465e-9010-94b595528eec-tigera-ca-bundle\") pod \"calico-node-skkl6\" (UID: \"0a0253e2-2e87-465e-9010-94b595528eec\") " pod="calico-system/calico-node-skkl6" Apr 24 23:37:56.495154 kubelet[3612]: E0424 23:37:56.490343 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.496422 kubelet[3612]: W0424 23:37:56.496069 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.497661 kubelet[3612]: E0424 23:37:56.496302 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.543889 kubelet[3612]: E0424 23:37:56.537770 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.543889 kubelet[3612]: W0424 23:37:56.537808 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.543889 kubelet[3612]: E0424 23:37:56.537844 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.546383 kubelet[3612]: E0424 23:37:56.546298 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.546668 kubelet[3612]: W0424 23:37:56.546337 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.546668 kubelet[3612]: E0424 23:37:56.546620 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.549272 containerd[2134]: time="2026-04-24T23:37:56.541169178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:56.549272 containerd[2134]: time="2026-04-24T23:37:56.543204474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:56.549272 containerd[2134]: time="2026-04-24T23:37:56.543258786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:56.549272 containerd[2134]: time="2026-04-24T23:37:56.546315474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:56.624665 kubelet[3612]: E0424 23:37:56.622509 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zvsb9" podUID="70f656aa-464e-42e4-84a2-cf156c522759" Apr 24 23:37:56.629059 containerd[2134]: time="2026-04-24T23:37:56.627151854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-skkl6,Uid:0a0253e2-2e87-465e-9010-94b595528eec,Namespace:calico-system,Attempt:0,}" Apr 24 23:37:56.642602 kubelet[3612]: E0424 23:37:56.641509 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.642602 kubelet[3612]: W0424 23:37:56.641555 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.642602 kubelet[3612]: E0424 23:37:56.642266 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.649036 kubelet[3612]: E0424 23:37:56.647291 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.650570 kubelet[3612]: W0424 23:37:56.649265 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.650570 kubelet[3612]: E0424 23:37:56.649365 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.655092 kubelet[3612]: E0424 23:37:56.653224 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.655092 kubelet[3612]: W0424 23:37:56.653253 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.655092 kubelet[3612]: E0424 23:37:56.653288 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.661041 kubelet[3612]: E0424 23:37:56.658551 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.661041 kubelet[3612]: W0424 23:37:56.658591 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.661041 kubelet[3612]: E0424 23:37:56.658630 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.662061 kubelet[3612]: E0424 23:37:56.661952 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.662234 kubelet[3612]: W0424 23:37:56.662126 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.662234 kubelet[3612]: E0424 23:37:56.662174 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.665492 kubelet[3612]: E0424 23:37:56.663037 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.665492 kubelet[3612]: W0424 23:37:56.663080 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.665492 kubelet[3612]: E0424 23:37:56.663119 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.665492 kubelet[3612]: E0424 23:37:56.664560 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.665492 kubelet[3612]: W0424 23:37:56.664773 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.665492 kubelet[3612]: E0424 23:37:56.664812 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.670124 kubelet[3612]: E0424 23:37:56.667225 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.670124 kubelet[3612]: W0424 23:37:56.667268 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.670124 kubelet[3612]: E0424 23:37:56.667326 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.677101 kubelet[3612]: E0424 23:37:56.674355 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.677272 kubelet[3612]: W0424 23:37:56.677124 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.677272 kubelet[3612]: E0424 23:37:56.677166 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.685864 kubelet[3612]: E0424 23:37:56.684761 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.685864 kubelet[3612]: W0424 23:37:56.684813 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.685864 kubelet[3612]: E0424 23:37:56.684852 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.691812 kubelet[3612]: E0424 23:37:56.690197 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.691812 kubelet[3612]: W0424 23:37:56.690290 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.691812 kubelet[3612]: E0424 23:37:56.690329 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.691812 kubelet[3612]: E0424 23:37:56.691023 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.691812 kubelet[3612]: W0424 23:37:56.691055 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.691812 kubelet[3612]: E0424 23:37:56.691124 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.692293 kubelet[3612]: E0424 23:37:56.692043 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.692293 kubelet[3612]: W0424 23:37:56.692071 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.692293 kubelet[3612]: E0424 23:37:56.692135 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.694152 kubelet[3612]: E0424 23:37:56.692858 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.696932 kubelet[3612]: W0424 23:37:56.696415 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.698440 kubelet[3612]: E0424 23:37:56.697123 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.699102 kubelet[3612]: E0424 23:37:56.698579 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.699102 kubelet[3612]: W0424 23:37:56.698620 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.699102 kubelet[3612]: E0424 23:37:56.698654 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.703355 kubelet[3612]: E0424 23:37:56.702588 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.703355 kubelet[3612]: W0424 23:37:56.702633 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.703355 kubelet[3612]: E0424 23:37:56.702670 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.710551 kubelet[3612]: E0424 23:37:56.708831 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.710551 kubelet[3612]: W0424 23:37:56.708872 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.710551 kubelet[3612]: E0424 23:37:56.708906 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.722550 kubelet[3612]: E0424 23:37:56.722308 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.725110 kubelet[3612]: W0424 23:37:56.723669 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.725110 kubelet[3612]: E0424 23:37:56.723729 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.729259 kubelet[3612]: E0424 23:37:56.728101 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.729259 kubelet[3612]: W0424 23:37:56.728181 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.729259 kubelet[3612]: E0424 23:37:56.729173 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.736063 kubelet[3612]: E0424 23:37:56.734110 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.736063 kubelet[3612]: W0424 23:37:56.734170 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.736063 kubelet[3612]: E0424 23:37:56.734206 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.743738 kubelet[3612]: E0424 23:37:56.743164 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.743738 kubelet[3612]: W0424 23:37:56.743204 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.743738 kubelet[3612]: E0424 23:37:56.743239 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.743738 kubelet[3612]: I0424 23:37:56.743306 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/70f656aa-464e-42e4-84a2-cf156c522759-varrun\") pod \"csi-node-driver-zvsb9\" (UID: \"70f656aa-464e-42e4-84a2-cf156c522759\") " pod="calico-system/csi-node-driver-zvsb9" Apr 24 23:37:56.747953 kubelet[3612]: E0424 23:37:56.747257 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.747953 kubelet[3612]: W0424 23:37:56.747698 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.748694 kubelet[3612]: E0424 23:37:56.748131 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.750579 kubelet[3612]: I0424 23:37:56.750090 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70f656aa-464e-42e4-84a2-cf156c522759-kubelet-dir\") pod \"csi-node-driver-zvsb9\" (UID: \"70f656aa-464e-42e4-84a2-cf156c522759\") " pod="calico-system/csi-node-driver-zvsb9" Apr 24 23:37:56.750579 kubelet[3612]: E0424 23:37:56.750322 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.750579 kubelet[3612]: W0424 23:37:56.750344 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.750579 kubelet[3612]: E0424 23:37:56.750376 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.757011 kubelet[3612]: E0424 23:37:56.755742 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.757011 kubelet[3612]: W0424 23:37:56.755851 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.757011 kubelet[3612]: E0424 23:37:56.755894 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.762315 kubelet[3612]: E0424 23:37:56.762162 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.762315 kubelet[3612]: W0424 23:37:56.762207 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.762315 kubelet[3612]: E0424 23:37:56.762243 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.764445 kubelet[3612]: I0424 23:37:56.763229 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/70f656aa-464e-42e4-84a2-cf156c522759-registration-dir\") pod \"csi-node-driver-zvsb9\" (UID: \"70f656aa-464e-42e4-84a2-cf156c522759\") " pod="calico-system/csi-node-driver-zvsb9" Apr 24 23:37:56.764445 kubelet[3612]: E0424 23:37:56.764328 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.764445 kubelet[3612]: W0424 23:37:56.764397 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.764445 kubelet[3612]: E0424 23:37:56.764435 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.772915 kubelet[3612]: E0424 23:37:56.771344 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.772915 kubelet[3612]: W0424 23:37:56.771389 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.772915 kubelet[3612]: E0424 23:37:56.771427 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.777120 kubelet[3612]: E0424 23:37:56.773659 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.777120 kubelet[3612]: W0424 23:37:56.773706 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.777120 kubelet[3612]: E0424 23:37:56.773746 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.777120 kubelet[3612]: I0424 23:37:56.776176 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/70f656aa-464e-42e4-84a2-cf156c522759-socket-dir\") pod \"csi-node-driver-zvsb9\" (UID: \"70f656aa-464e-42e4-84a2-cf156c522759\") " pod="calico-system/csi-node-driver-zvsb9" Apr 24 23:37:56.780016 kubelet[3612]: E0424 23:37:56.778712 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.780016 kubelet[3612]: W0424 23:37:56.778765 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.780016 kubelet[3612]: E0424 23:37:56.778799 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.782502 kubelet[3612]: E0424 23:37:56.782437 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.782502 kubelet[3612]: W0424 23:37:56.782483 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.784411 kubelet[3612]: E0424 23:37:56.782520 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.786526 kubelet[3612]: E0424 23:37:56.785256 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.788313 kubelet[3612]: W0424 23:37:56.787677 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.788313 kubelet[3612]: E0424 23:37:56.787746 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.788313 kubelet[3612]: I0424 23:37:56.787802 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkzkj\" (UniqueName: \"kubernetes.io/projected/70f656aa-464e-42e4-84a2-cf156c522759-kube-api-access-tkzkj\") pod \"csi-node-driver-zvsb9\" (UID: \"70f656aa-464e-42e4-84a2-cf156c522759\") " pod="calico-system/csi-node-driver-zvsb9" Apr 24 23:37:56.794073 kubelet[3612]: E0424 23:37:56.793919 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.794073 kubelet[3612]: W0424 23:37:56.793967 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.794073 kubelet[3612]: E0424 23:37:56.794036 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.797790 kubelet[3612]: E0424 23:37:56.797749 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.798656 kubelet[3612]: W0424 23:37:56.798601 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.798908 kubelet[3612]: E0424 23:37:56.798878 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.803389 kubelet[3612]: E0424 23:37:56.803341 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.803759 kubelet[3612]: W0424 23:37:56.803721 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.804741 kubelet[3612]: E0424 23:37:56.804101 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.807810 kubelet[3612]: E0424 23:37:56.807570 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.808922 kubelet[3612]: W0424 23:37:56.807978 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.808922 kubelet[3612]: E0424 23:37:56.808074 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.814052 containerd[2134]: time="2026-04-24T23:37:56.813741799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:56.814052 containerd[2134]: time="2026-04-24T23:37:56.813887575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:56.814052 containerd[2134]: time="2026-04-24T23:37:56.813930019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:56.815211 containerd[2134]: time="2026-04-24T23:37:56.814616971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:56.891964 kubelet[3612]: E0424 23:37:56.890177 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.891964 kubelet[3612]: W0424 23:37:56.891154 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.891964 kubelet[3612]: E0424 23:37:56.891205 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.894302 kubelet[3612]: E0424 23:37:56.894128 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.894639 kubelet[3612]: W0424 23:37:56.894166 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.894639 kubelet[3612]: E0424 23:37:56.894544 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.903762 kubelet[3612]: E0424 23:37:56.902254 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.903762 kubelet[3612]: W0424 23:37:56.902701 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.905831 kubelet[3612]: E0424 23:37:56.902785 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.910613 kubelet[3612]: E0424 23:37:56.909670 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.910613 kubelet[3612]: W0424 23:37:56.909714 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.910613 kubelet[3612]: E0424 23:37:56.909751 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.919886 kubelet[3612]: E0424 23:37:56.919279 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.919886 kubelet[3612]: W0424 23:37:56.919327 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.919886 kubelet[3612]: E0424 23:37:56.919596 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.920263 containerd[2134]: time="2026-04-24T23:37:56.919723604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57c95c7f7f-v64w8,Uid:6ec51516-7dd3-4654-bf40-c77f3b95bc70,Namespace:calico-system,Attempt:0,} returns sandbox id \"96f8d26a9e14f9c7d685845d6cbb4318a45a9f6cc146e0147e464bbb1c5a9e80\"" Apr 24 23:37:56.921373 kubelet[3612]: E0424 23:37:56.921190 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.921373 kubelet[3612]: W0424 23:37:56.921232 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.921373 kubelet[3612]: E0424 23:37:56.921289 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.923808 kubelet[3612]: E0424 23:37:56.923767 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.924318 kubelet[3612]: W0424 23:37:56.924035 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.924318 kubelet[3612]: E0424 23:37:56.924080 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.925281 kubelet[3612]: E0424 23:37:56.925110 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.925281 kubelet[3612]: W0424 23:37:56.925144 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.925658 kubelet[3612]: E0424 23:37:56.925522 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.927680 containerd[2134]: time="2026-04-24T23:37:56.927341684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 24 23:37:56.928499 kubelet[3612]: E0424 23:37:56.927475 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.929393 kubelet[3612]: W0424 23:37:56.928436 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.929681 kubelet[3612]: E0424 23:37:56.929612 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.931781 kubelet[3612]: E0424 23:37:56.931696 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.932419 kubelet[3612]: W0424 23:37:56.932142 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.932419 kubelet[3612]: E0424 23:37:56.932209 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.935921 kubelet[3612]: E0424 23:37:56.935878 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.936446 kubelet[3612]: W0424 23:37:56.936188 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.936446 kubelet[3612]: E0424 23:37:56.936242 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.938379 kubelet[3612]: E0424 23:37:56.938211 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.938379 kubelet[3612]: W0424 23:37:56.938292 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.938379 kubelet[3612]: E0424 23:37:56.938330 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.942049 kubelet[3612]: E0424 23:37:56.941284 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.942049 kubelet[3612]: W0424 23:37:56.941353 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.942049 kubelet[3612]: E0424 23:37:56.941646 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.943066 containerd[2134]: time="2026-04-24T23:37:56.942788408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-skkl6,Uid:0a0253e2-2e87-465e-9010-94b595528eec,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f7bfcdafae24b64b8e218368edd59246f115e1c469d341e5174e4344883cd1d\"" Apr 24 23:37:56.943918 kubelet[3612]: E0424 23:37:56.943875 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.943918 kubelet[3612]: W0424 23:37:56.943914 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.944412 kubelet[3612]: E0424 23:37:56.943951 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.946243 kubelet[3612]: E0424 23:37:56.946168 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.946243 kubelet[3612]: W0424 23:37:56.946215 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.946471 kubelet[3612]: E0424 23:37:56.946280 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.948139 kubelet[3612]: E0424 23:37:56.947416 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.948139 kubelet[3612]: W0424 23:37:56.947578 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.948139 kubelet[3612]: E0424 23:37:56.947619 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.950329 kubelet[3612]: E0424 23:37:56.950073 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.950329 kubelet[3612]: W0424 23:37:56.950136 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.950329 kubelet[3612]: E0424 23:37:56.950172 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.951116 kubelet[3612]: E0424 23:37:56.951073 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.951116 kubelet[3612]: W0424 23:37:56.951115 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.951429 kubelet[3612]: E0424 23:37:56.951149 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.952729 kubelet[3612]: E0424 23:37:56.952523 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.952729 kubelet[3612]: W0424 23:37:56.952572 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.952729 kubelet[3612]: E0424 23:37:56.952609 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.953621 kubelet[3612]: E0424 23:37:56.953173 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.953621 kubelet[3612]: W0424 23:37:56.953215 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.953621 kubelet[3612]: E0424 23:37:56.953247 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.953890 kubelet[3612]: E0424 23:37:56.953639 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.953890 kubelet[3612]: W0424 23:37:56.953661 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.953890 kubelet[3612]: E0424 23:37:56.953688 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.954612 kubelet[3612]: E0424 23:37:56.954151 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.954612 kubelet[3612]: W0424 23:37:56.954191 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.954612 kubelet[3612]: E0424 23:37:56.954225 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.956312 kubelet[3612]: E0424 23:37:56.955022 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.956312 kubelet[3612]: W0424 23:37:56.955064 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.956312 kubelet[3612]: E0424 23:37:56.955096 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.956312 kubelet[3612]: E0424 23:37:56.955664 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.956312 kubelet[3612]: W0424 23:37:56.955692 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.956312 kubelet[3612]: E0424 23:37:56.955723 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:56.957578 kubelet[3612]: E0424 23:37:56.956410 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.957578 kubelet[3612]: W0424 23:37:56.956438 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.957578 kubelet[3612]: E0424 23:37:56.956488 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:56.978236 kubelet[3612]: E0424 23:37:56.978177 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:56.978236 kubelet[3612]: W0424 23:37:56.978217 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:56.978458 kubelet[3612]: E0424 23:37:56.978254 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:58.284419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567194611.mount: Deactivated successfully. 
Apr 24 23:37:58.574363 kubelet[3612]: E0424 23:37:58.573372 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zvsb9" podUID="70f656aa-464e-42e4-84a2-cf156c522759" Apr 24 23:37:59.312687 containerd[2134]: time="2026-04-24T23:37:59.312625808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:59.315291 containerd[2134]: time="2026-04-24T23:37:59.315238676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 24 23:37:59.320195 containerd[2134]: time="2026-04-24T23:37:59.320140400Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:59.330080 containerd[2134]: time="2026-04-24T23:37:59.329946908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:59.333745 containerd[2134]: time="2026-04-24T23:37:59.333521864Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.40609984s" Apr 24 23:37:59.333745 containerd[2134]: time="2026-04-24T23:37:59.333591860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 24 23:37:59.348758 containerd[2134]: time="2026-04-24T23:37:59.347189492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 24 23:37:59.391243 containerd[2134]: time="2026-04-24T23:37:59.391011620Z" level=info msg="CreateContainer within sandbox \"96f8d26a9e14f9c7d685845d6cbb4318a45a9f6cc146e0147e464bbb1c5a9e80\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 24 23:37:59.422578 containerd[2134]: time="2026-04-24T23:37:59.422481068Z" level=info msg="CreateContainer within sandbox \"96f8d26a9e14f9c7d685845d6cbb4318a45a9f6cc146e0147e464bbb1c5a9e80\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"58f2432487a83b56e9b0a52932a1506c9e95b493931cd8c1fe6c8f81198503c9\"" Apr 24 23:37:59.423761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608607752.mount: Deactivated successfully. Apr 24 23:37:59.428739 containerd[2134]: time="2026-04-24T23:37:59.426730916Z" level=info msg="StartContainer for \"58f2432487a83b56e9b0a52932a1506c9e95b493931cd8c1fe6c8f81198503c9\"" Apr 24 23:37:59.561791 containerd[2134]: time="2026-04-24T23:37:59.561706845Z" level=info msg="StartContainer for \"58f2432487a83b56e9b0a52932a1506c9e95b493931cd8c1fe6c8f81198503c9\" returns successfully" Apr 24 23:37:59.882622 kubelet[3612]: E0424 23:37:59.882557 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.882622 kubelet[3612]: W0424 23:37:59.882605 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.883424 kubelet[3612]: E0424 23:37:59.882645 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:59.886470 kubelet[3612]: E0424 23:37:59.885961 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.886470 kubelet[3612]: W0424 23:37:59.886056 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.886470 kubelet[3612]: E0424 23:37:59.886164 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:59.891850 kubelet[3612]: E0424 23:37:59.891786 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.891850 kubelet[3612]: W0424 23:37:59.891834 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.892158 kubelet[3612]: E0424 23:37:59.891873 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:59.894482 kubelet[3612]: E0424 23:37:59.894415 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.894482 kubelet[3612]: W0424 23:37:59.894468 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.894974 kubelet[3612]: E0424 23:37:59.894510 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:59.897468 kubelet[3612]: E0424 23:37:59.897411 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.897468 kubelet[3612]: W0424 23:37:59.897455 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.897468 kubelet[3612]: E0424 23:37:59.897491 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:59.900265 kubelet[3612]: E0424 23:37:59.898125 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.900265 kubelet[3612]: W0424 23:37:59.898168 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.900265 kubelet[3612]: E0424 23:37:59.898204 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:59.900738 kubelet[3612]: E0424 23:37:59.900691 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.900738 kubelet[3612]: W0424 23:37:59.900732 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.900952 kubelet[3612]: E0424 23:37:59.900767 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:59.903485 kubelet[3612]: E0424 23:37:59.903430 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.903485 kubelet[3612]: W0424 23:37:59.903473 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.904621 kubelet[3612]: E0424 23:37:59.903510 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:59.906263 kubelet[3612]: E0424 23:37:59.906202 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.906263 kubelet[3612]: W0424 23:37:59.906250 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.906514 kubelet[3612]: E0424 23:37:59.906288 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:59.911514 kubelet[3612]: E0424 23:37:59.911252 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.911514 kubelet[3612]: W0424 23:37:59.911293 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.911514 kubelet[3612]: E0424 23:37:59.911331 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:59.915516 kubelet[3612]: E0424 23:37:59.915139 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.916819 kubelet[3612]: W0424 23:37:59.915836 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.916819 kubelet[3612]: E0424 23:37:59.915899 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:59.920499 kubelet[3612]: E0424 23:37:59.919964 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.920499 kubelet[3612]: W0424 23:37:59.920040 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.920499 kubelet[3612]: E0424 23:37:59.920076 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:59.927187 kubelet[3612]: E0424 23:37:59.927143 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.927780 kubelet[3612]: W0424 23:37:59.927343 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.927780 kubelet[3612]: E0424 23:37:59.927383 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:59.930069 kubelet[3612]: E0424 23:37:59.928552 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.930543 kubelet[3612]: W0424 23:37:59.930266 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.930543 kubelet[3612]: E0424 23:37:59.930322 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:59.935768 kubelet[3612]: E0424 23:37:59.935714 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.936648 kubelet[3612]: W0424 23:37:59.935977 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.936648 kubelet[3612]: E0424 23:37:59.936056 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:59.964775 kubelet[3612]: E0424 23:37:59.964606 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.964775 kubelet[3612]: W0424 23:37:59.964769 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.965245 kubelet[3612]: E0424 23:37:59.964973 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:59.969209 kubelet[3612]: E0424 23:37:59.968052 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.969209 kubelet[3612]: W0424 23:37:59.968086 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.969209 kubelet[3612]: E0424 23:37:59.968119 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:59.972241 kubelet[3612]: E0424 23:37:59.971132 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.972241 kubelet[3612]: W0424 23:37:59.971169 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.972241 kubelet[3612]: E0424 23:37:59.971204 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:59.977048 kubelet[3612]: E0424 23:37:59.976974 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.979905 kubelet[3612]: W0424 23:37:59.977593 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.979905 kubelet[3612]: E0424 23:37:59.977646 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:59.979905 kubelet[3612]: E0424 23:37:59.979656 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.979905 kubelet[3612]: W0424 23:37:59.979688 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.979905 kubelet[3612]: E0424 23:37:59.979723 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:59.992947 kubelet[3612]: E0424 23:37:59.984364 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.992947 kubelet[3612]: W0424 23:37:59.990295 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.992947 kubelet[3612]: E0424 23:37:59.990338 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:59.995451 kubelet[3612]: E0424 23:37:59.995414 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:59.996182 kubelet[3612]: W0424 23:37:59.995632 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:59.997143 kubelet[3612]: E0424 23:37:59.996867 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:00.000627 kubelet[3612]: E0424 23:38:00.000359 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:00.002076 kubelet[3612]: W0424 23:38:00.001246 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:00.002814 kubelet[3612]: E0424 23:38:00.002311 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:00.005956 kubelet[3612]: E0424 23:38:00.004681 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:00.005956 kubelet[3612]: W0424 23:38:00.004731 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:00.005956 kubelet[3612]: E0424 23:38:00.004769 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:00.014562 kubelet[3612]: E0424 23:38:00.012346 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:00.014562 kubelet[3612]: W0424 23:38:00.012383 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:00.014562 kubelet[3612]: E0424 23:38:00.012931 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:00.014562 kubelet[3612]: E0424 23:38:00.013729 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:00.014562 kubelet[3612]: W0424 23:38:00.013759 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:00.014562 kubelet[3612]: E0424 23:38:00.013789 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:00.018036 kubelet[3612]: E0424 23:38:00.016653 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:00.018614 kubelet[3612]: W0424 23:38:00.018277 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:00.018614 kubelet[3612]: E0424 23:38:00.018368 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:00.023358 kubelet[3612]: E0424 23:38:00.023203 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:00.025515 kubelet[3612]: W0424 23:38:00.025120 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:00.025515 kubelet[3612]: E0424 23:38:00.025291 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:00.029060 kubelet[3612]: E0424 23:38:00.028797 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:00.029060 kubelet[3612]: W0424 23:38:00.028834 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:00.029060 kubelet[3612]: E0424 23:38:00.028869 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:00.034022 kubelet[3612]: E0424 23:38:00.032048 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:00.034022 kubelet[3612]: W0424 23:38:00.032097 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:00.034022 kubelet[3612]: E0424 23:38:00.032142 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:00.042151 kubelet[3612]: E0424 23:38:00.041385 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:00.046031 kubelet[3612]: W0424 23:38:00.044317 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:00.046031 kubelet[3612]: E0424 23:38:00.044372 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:00.051016 kubelet[3612]: E0424 23:38:00.048161 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:00.051016 kubelet[3612]: W0424 23:38:00.048203 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:00.051016 kubelet[3612]: E0424 23:38:00.048239 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:00.054052 kubelet[3612]: E0424 23:38:00.052630 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:00.054308 kubelet[3612]: W0424 23:38:00.054272 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:00.055778 kubelet[3612]: E0424 23:38:00.054407 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:00.573310 kubelet[3612]: E0424 23:38:00.573254 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zvsb9" podUID="70f656aa-464e-42e4-84a2-cf156c522759" Apr 24 23:38:00.654282 containerd[2134]: time="2026-04-24T23:38:00.653970886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:00.658880 containerd[2134]: time="2026-04-24T23:38:00.658795006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 24 23:38:00.661221 containerd[2134]: time="2026-04-24T23:38:00.660431182Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:00.670216 containerd[2134]: time="2026-04-24T23:38:00.670114619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:00.671607 containerd[2134]: time="2026-04-24T23:38:00.671515583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.324224211s" Apr 24 23:38:00.671875 containerd[2134]: time="2026-04-24T23:38:00.671825339Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 24 23:38:00.684353 containerd[2134]: time="2026-04-24T23:38:00.684062543Z" level=info msg="CreateContainer within sandbox \"5f7bfcdafae24b64b8e218368edd59246f115e1c469d341e5174e4344883cd1d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 23:38:00.733934 containerd[2134]: time="2026-04-24T23:38:00.733149347Z" level=info msg="CreateContainer within sandbox \"5f7bfcdafae24b64b8e218368edd59246f115e1c469d341e5174e4344883cd1d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4b65bbbc4ed2f082fa5a21ddf44b58b22a7e8ce7a22f031e0a7e6cf599c6beaa\"" Apr 24 23:38:00.736687 containerd[2134]: time="2026-04-24T23:38:00.736192487Z" level=info msg="StartContainer for \"4b65bbbc4ed2f082fa5a21ddf44b58b22a7e8ce7a22f031e0a7e6cf599c6beaa\"" Apr 24 23:38:00.798265 kubelet[3612]: I0424 23:38:00.798206 3612 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:38:00.846658 kubelet[3612]: E0424 23:38:00.846178 3612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:00.846658 kubelet[3612]: W0424 23:38:00.846228 3612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:00.846658 kubelet[3612]: E0424 23:38:00.846276 3612 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:00.906604 containerd[2134]: time="2026-04-24T23:38:00.906408264Z" level=info msg="StartContainer for \"4b65bbbc4ed2f082fa5a21ddf44b58b22a7e8ce7a22f031e0a7e6cf599c6beaa\" returns successfully" Apr 24 23:38:01.363523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b65bbbc4ed2f082fa5a21ddf44b58b22a7e8ce7a22f031e0a7e6cf599c6beaa-rootfs.mount: Deactivated successfully. 
Apr 24 23:38:01.480569 containerd[2134]: time="2026-04-24T23:38:01.480476339Z" level=info msg="shim disconnected" id=4b65bbbc4ed2f082fa5a21ddf44b58b22a7e8ce7a22f031e0a7e6cf599c6beaa namespace=k8s.io Apr 24 23:38:01.480569 containerd[2134]: time="2026-04-24T23:38:01.480554891Z" level=warning msg="cleaning up after shim disconnected" id=4b65bbbc4ed2f082fa5a21ddf44b58b22a7e8ce7a22f031e0a7e6cf599c6beaa namespace=k8s.io Apr 24 23:38:01.480897 containerd[2134]: time="2026-04-24T23:38:01.480576899Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:38:01.808333 containerd[2134]: time="2026-04-24T23:38:01.807920820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 24 23:38:01.908379 kubelet[3612]: I0424 23:38:01.908269 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57c95c7f7f-v64w8" podStartSLOduration=4.491426121 podStartE2EDuration="6.908238277s" podCreationTimestamp="2026-04-24 23:37:55 +0000 UTC" firstStartedPulling="2026-04-24 23:37:56.92490152 +0000 UTC m=+27.682632727" lastFinishedPulling="2026-04-24 23:37:59.341713664 +0000 UTC m=+30.099444883" observedRunningTime="2026-04-24 23:37:59.891613115 +0000 UTC m=+30.649344358" watchObservedRunningTime="2026-04-24 23:38:01.908238277 +0000 UTC m=+32.665969496" Apr 24 23:38:02.574119 kubelet[3612]: E0424 23:38:02.573467 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zvsb9" podUID="70f656aa-464e-42e4-84a2-cf156c522759" Apr 24 23:38:04.574094 kubelet[3612]: E0424 23:38:04.574017 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-zvsb9" podUID="70f656aa-464e-42e4-84a2-cf156c522759" Apr 24 23:38:06.574108 kubelet[3612]: E0424 23:38:06.573878 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zvsb9" podUID="70f656aa-464e-42e4-84a2-cf156c522759" Apr 24 23:38:08.573121 kubelet[3612]: E0424 23:38:08.572941 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zvsb9" podUID="70f656aa-464e-42e4-84a2-cf156c522759" Apr 24 23:38:08.579956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4155452621.mount: Deactivated successfully. Apr 24 23:38:08.653046 containerd[2134]: time="2026-04-24T23:38:08.651398874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:08.654109 containerd[2134]: time="2026-04-24T23:38:08.654041742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 24 23:38:08.658526 containerd[2134]: time="2026-04-24T23:38:08.657531570Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:08.670433 containerd[2134]: time="2026-04-24T23:38:08.670365450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:08.672247 containerd[2134]: 
time="2026-04-24T23:38:08.672154950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.864163618s" Apr 24 23:38:08.672247 containerd[2134]: time="2026-04-24T23:38:08.672236106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 24 23:38:08.683361 containerd[2134]: time="2026-04-24T23:38:08.683299338Z" level=info msg="CreateContainer within sandbox \"5f7bfcdafae24b64b8e218368edd59246f115e1c469d341e5174e4344883cd1d\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 23:38:08.718573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2613949013.mount: Deactivated successfully. 
Apr 24 23:38:08.722694 containerd[2134]: time="2026-04-24T23:38:08.722578867Z" level=info msg="CreateContainer within sandbox \"5f7bfcdafae24b64b8e218368edd59246f115e1c469d341e5174e4344883cd1d\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"5e43f0a7b75d8bd4ba13233c03b8e44fbf28b84a1e7e14b397564801225e6602\"" Apr 24 23:38:08.725081 containerd[2134]: time="2026-04-24T23:38:08.723744415Z" level=info msg="StartContainer for \"5e43f0a7b75d8bd4ba13233c03b8e44fbf28b84a1e7e14b397564801225e6602\"" Apr 24 23:38:08.861751 containerd[2134]: time="2026-04-24T23:38:08.861571495Z" level=info msg="StartContainer for \"5e43f0a7b75d8bd4ba13233c03b8e44fbf28b84a1e7e14b397564801225e6602\" returns successfully" Apr 24 23:38:09.523400 containerd[2134]: time="2026-04-24T23:38:09.523043742Z" level=info msg="shim disconnected" id=5e43f0a7b75d8bd4ba13233c03b8e44fbf28b84a1e7e14b397564801225e6602 namespace=k8s.io Apr 24 23:38:09.523400 containerd[2134]: time="2026-04-24T23:38:09.523125450Z" level=warning msg="cleaning up after shim disconnected" id=5e43f0a7b75d8bd4ba13233c03b8e44fbf28b84a1e7e14b397564801225e6602 namespace=k8s.io Apr 24 23:38:09.523400 containerd[2134]: time="2026-04-24T23:38:09.523146714Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:38:09.578717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e43f0a7b75d8bd4ba13233c03b8e44fbf28b84a1e7e14b397564801225e6602-rootfs.mount: Deactivated successfully. 
Apr 24 23:38:09.843907 containerd[2134]: time="2026-04-24T23:38:09.842770820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 24 23:38:10.572862 kubelet[3612]: E0424 23:38:10.572750 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zvsb9" podUID="70f656aa-464e-42e4-84a2-cf156c522759" Apr 24 23:38:12.573331 kubelet[3612]: E0424 23:38:12.572757 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zvsb9" podUID="70f656aa-464e-42e4-84a2-cf156c522759" Apr 24 23:38:13.004227 containerd[2134]: time="2026-04-24T23:38:13.003472952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:13.005856 containerd[2134]: time="2026-04-24T23:38:13.005767316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 24 23:38:13.008704 containerd[2134]: time="2026-04-24T23:38:13.007832192Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:13.013859 containerd[2134]: time="2026-04-24T23:38:13.013794584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:13.015727 containerd[2134]: time="2026-04-24T23:38:13.015639956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" 
with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.172798036s" Apr 24 23:38:13.015727 containerd[2134]: time="2026-04-24T23:38:13.015716816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 24 23:38:13.027043 containerd[2134]: time="2026-04-24T23:38:13.026957156Z" level=info msg="CreateContainer within sandbox \"5f7bfcdafae24b64b8e218368edd59246f115e1c469d341e5174e4344883cd1d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 23:38:13.060724 containerd[2134]: time="2026-04-24T23:38:13.060655460Z" level=info msg="CreateContainer within sandbox \"5f7bfcdafae24b64b8e218368edd59246f115e1c469d341e5174e4344883cd1d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7a7acfec747b16eb7db40e66cb8fabe0d38e4e3391185e826305bd1f986feb8b\"" Apr 24 23:38:13.062250 containerd[2134]: time="2026-04-24T23:38:13.062151044Z" level=info msg="StartContainer for \"7a7acfec747b16eb7db40e66cb8fabe0d38e4e3391185e826305bd1f986feb8b\"" Apr 24 23:38:13.193026 containerd[2134]: time="2026-04-24T23:38:13.192901353Z" level=info msg="StartContainer for \"7a7acfec747b16eb7db40e66cb8fabe0d38e4e3391185e826305bd1f986feb8b\" returns successfully" Apr 24 23:38:14.573750 kubelet[3612]: E0424 23:38:14.573665 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zvsb9" podUID="70f656aa-464e-42e4-84a2-cf156c522759" Apr 24 23:38:15.093276 containerd[2134]: time="2026-04-24T23:38:15.092876302Z" level=error 
msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:38:15.117632 kubelet[3612]: I0424 23:38:15.115648 3612 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 24 23:38:15.180105 containerd[2134]: time="2026-04-24T23:38:15.176260979Z" level=info msg="shim disconnected" id=7a7acfec747b16eb7db40e66cb8fabe0d38e4e3391185e826305bd1f986feb8b namespace=k8s.io Apr 24 23:38:15.180105 containerd[2134]: time="2026-04-24T23:38:15.177327239Z" level=warning msg="cleaning up after shim disconnected" id=7a7acfec747b16eb7db40e66cb8fabe0d38e4e3391185e826305bd1f986feb8b namespace=k8s.io Apr 24 23:38:15.180105 containerd[2134]: time="2026-04-24T23:38:15.177358571Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:38:15.187354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a7acfec747b16eb7db40e66cb8fabe0d38e4e3391185e826305bd1f986feb8b-rootfs.mount: Deactivated successfully. 
Apr 24 23:38:15.350162 kubelet[3612]: I0424 23:38:15.348470 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e7ff394-4d7c-4f1e-b093-f94b4553f9bc-config\") pod \"goldmane-5b85766d88-vgkrr\" (UID: \"5e7ff394-4d7c-4f1e-b093-f94b4553f9bc\") " pod="calico-system/goldmane-5b85766d88-vgkrr" Apr 24 23:38:15.350162 kubelet[3612]: I0424 23:38:15.348551 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5340818-2c6e-4850-b9c6-6c45955ef8bc-config-volume\") pod \"coredns-674b8bbfcf-rxbn7\" (UID: \"c5340818-2c6e-4850-b9c6-6c45955ef8bc\") " pod="kube-system/coredns-674b8bbfcf-rxbn7" Apr 24 23:38:15.350162 kubelet[3612]: I0424 23:38:15.348599 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c768fe2c-5b19-4c7a-88e7-ffec16fc16fe-calico-apiserver-certs\") pod \"calico-apiserver-6845df7cfd-kfbzs\" (UID: \"c768fe2c-5b19-4c7a-88e7-ffec16fc16fe\") " pod="calico-system/calico-apiserver-6845df7cfd-kfbzs" Apr 24 23:38:15.350162 kubelet[3612]: I0424 23:38:15.348649 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5wv4\" (UniqueName: \"kubernetes.io/projected/5e7ff394-4d7c-4f1e-b093-f94b4553f9bc-kube-api-access-l5wv4\") pod \"goldmane-5b85766d88-vgkrr\" (UID: \"5e7ff394-4d7c-4f1e-b093-f94b4553f9bc\") " pod="calico-system/goldmane-5b85766d88-vgkrr" Apr 24 23:38:15.350162 kubelet[3612]: I0424 23:38:15.348688 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bqgg\" (UniqueName: \"kubernetes.io/projected/99243418-611d-4ac7-9803-be2a0a09e3b8-kube-api-access-5bqgg\") pod \"calico-kube-controllers-5b4f49844-22zct\" (UID: 
\"99243418-611d-4ac7-9803-be2a0a09e3b8\") " pod="calico-system/calico-kube-controllers-5b4f49844-22zct" Apr 24 23:38:15.350618 kubelet[3612]: I0424 23:38:15.348739 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/56de8078-593a-4547-8372-d1bfdb078e2b-nginx-config\") pod \"whisker-c67d58b7d-v97dz\" (UID: \"56de8078-593a-4547-8372-d1bfdb078e2b\") " pod="calico-system/whisker-c67d58b7d-v97dz" Apr 24 23:38:15.350618 kubelet[3612]: I0424 23:38:15.348778 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n55kn\" (UniqueName: \"kubernetes.io/projected/9ef2e298-8143-4f28-a14a-2f167f054ba4-kube-api-access-n55kn\") pod \"coredns-674b8bbfcf-cprf8\" (UID: \"9ef2e298-8143-4f28-a14a-2f167f054ba4\") " pod="kube-system/coredns-674b8bbfcf-cprf8" Apr 24 23:38:15.350618 kubelet[3612]: I0424 23:38:15.349581 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e7ff394-4d7c-4f1e-b093-f94b4553f9bc-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-vgkrr\" (UID: \"5e7ff394-4d7c-4f1e-b093-f94b4553f9bc\") " pod="calico-system/goldmane-5b85766d88-vgkrr" Apr 24 23:38:15.350618 kubelet[3612]: I0424 23:38:15.349641 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99243418-611d-4ac7-9803-be2a0a09e3b8-tigera-ca-bundle\") pod \"calico-kube-controllers-5b4f49844-22zct\" (UID: \"99243418-611d-4ac7-9803-be2a0a09e3b8\") " pod="calico-system/calico-kube-controllers-5b4f49844-22zct" Apr 24 23:38:15.350618 kubelet[3612]: I0424 23:38:15.349681 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-472c9\" (UniqueName: 
\"kubernetes.io/projected/c5340818-2c6e-4850-b9c6-6c45955ef8bc-kube-api-access-472c9\") pod \"coredns-674b8bbfcf-rxbn7\" (UID: \"c5340818-2c6e-4850-b9c6-6c45955ef8bc\") " pod="kube-system/coredns-674b8bbfcf-rxbn7" Apr 24 23:38:15.350915 kubelet[3612]: I0424 23:38:15.349729 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9bhw\" (UniqueName: \"kubernetes.io/projected/c768fe2c-5b19-4c7a-88e7-ffec16fc16fe-kube-api-access-x9bhw\") pod \"calico-apiserver-6845df7cfd-kfbzs\" (UID: \"c768fe2c-5b19-4c7a-88e7-ffec16fc16fe\") " pod="calico-system/calico-apiserver-6845df7cfd-kfbzs" Apr 24 23:38:15.350915 kubelet[3612]: I0424 23:38:15.349771 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56de8078-593a-4547-8372-d1bfdb078e2b-whisker-ca-bundle\") pod \"whisker-c67d58b7d-v97dz\" (UID: \"56de8078-593a-4547-8372-d1bfdb078e2b\") " pod="calico-system/whisker-c67d58b7d-v97dz" Apr 24 23:38:15.350915 kubelet[3612]: I0424 23:38:15.349821 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5e7ff394-4d7c-4f1e-b093-f94b4553f9bc-goldmane-key-pair\") pod \"goldmane-5b85766d88-vgkrr\" (UID: \"5e7ff394-4d7c-4f1e-b093-f94b4553f9bc\") " pod="calico-system/goldmane-5b85766d88-vgkrr" Apr 24 23:38:15.350915 kubelet[3612]: I0424 23:38:15.349862 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7hl2\" (UniqueName: \"kubernetes.io/projected/56de8078-593a-4547-8372-d1bfdb078e2b-kube-api-access-g7hl2\") pod \"whisker-c67d58b7d-v97dz\" (UID: \"56de8078-593a-4547-8372-d1bfdb078e2b\") " pod="calico-system/whisker-c67d58b7d-v97dz" Apr 24 23:38:15.350915 kubelet[3612]: I0424 23:38:15.349907 3612 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/56de8078-593a-4547-8372-d1bfdb078e2b-whisker-backend-key-pair\") pod \"whisker-c67d58b7d-v97dz\" (UID: \"56de8078-593a-4547-8372-d1bfdb078e2b\") " pod="calico-system/whisker-c67d58b7d-v97dz" Apr 24 23:38:15.351279 kubelet[3612]: I0424 23:38:15.349943 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ef2e298-8143-4f28-a14a-2f167f054ba4-config-volume\") pod \"coredns-674b8bbfcf-cprf8\" (UID: \"9ef2e298-8143-4f28-a14a-2f167f054ba4\") " pod="kube-system/coredns-674b8bbfcf-cprf8" Apr 24 23:38:15.453268 kubelet[3612]: I0424 23:38:15.450537 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/afad92e3-d16f-4f0f-986f-20a0bd89790e-calico-apiserver-certs\") pod \"calico-apiserver-6845df7cfd-nmdrp\" (UID: \"afad92e3-d16f-4f0f-986f-20a0bd89790e\") " pod="calico-system/calico-apiserver-6845df7cfd-nmdrp" Apr 24 23:38:15.453268 kubelet[3612]: I0424 23:38:15.450711 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8th6\" (UniqueName: \"kubernetes.io/projected/afad92e3-d16f-4f0f-986f-20a0bd89790e-kube-api-access-k8th6\") pod \"calico-apiserver-6845df7cfd-nmdrp\" (UID: \"afad92e3-d16f-4f0f-986f-20a0bd89790e\") " pod="calico-system/calico-apiserver-6845df7cfd-nmdrp" Apr 24 23:38:15.573359 containerd[2134]: time="2026-04-24T23:38:15.573301501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b4f49844-22zct,Uid:99243418-611d-4ac7-9803-be2a0a09e3b8,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:15.632870 containerd[2134]: time="2026-04-24T23:38:15.630649753Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6845df7cfd-nmdrp,Uid:afad92e3-d16f-4f0f-986f-20a0bd89790e,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:15.636438 containerd[2134]: time="2026-04-24T23:38:15.635956957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cprf8,Uid:9ef2e298-8143-4f28-a14a-2f167f054ba4,Namespace:kube-system,Attempt:0,}" Apr 24 23:38:15.653706 containerd[2134]: time="2026-04-24T23:38:15.653328985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c67d58b7d-v97dz,Uid:56de8078-593a-4547-8372-d1bfdb078e2b,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:15.656366 containerd[2134]: time="2026-04-24T23:38:15.656148721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6845df7cfd-kfbzs,Uid:c768fe2c-5b19-4c7a-88e7-ffec16fc16fe,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:15.853144 containerd[2134]: time="2026-04-24T23:38:15.853086134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbn7,Uid:c5340818-2c6e-4850-b9c6-6c45955ef8bc,Namespace:kube-system,Attempt:0,}" Apr 24 23:38:15.878137 containerd[2134]: time="2026-04-24T23:38:15.878048234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-vgkrr,Uid:5e7ff394-4d7c-4f1e-b093-f94b4553f9bc,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:15.953149 containerd[2134]: time="2026-04-24T23:38:15.952333802Z" level=info msg="CreateContainer within sandbox \"5f7bfcdafae24b64b8e218368edd59246f115e1c469d341e5174e4344883cd1d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 24 23:38:16.063909 containerd[2134]: time="2026-04-24T23:38:16.063658379Z" level=info msg="CreateContainer within sandbox \"5f7bfcdafae24b64b8e218368edd59246f115e1c469d341e5174e4344883cd1d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cfd044777a0ffe1dce14c9c237d19b074640ceda4e2c8e83d6c82658e8da84b9\"" Apr 24 23:38:16.065469 containerd[2134]: 
time="2026-04-24T23:38:16.065399063Z" level=info msg="StartContainer for \"cfd044777a0ffe1dce14c9c237d19b074640ceda4e2c8e83d6c82658e8da84b9\"" Apr 24 23:38:16.092650 containerd[2134]: time="2026-04-24T23:38:16.092579327Z" level=error msg="Failed to destroy network for sandbox \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.103781 containerd[2134]: time="2026-04-24T23:38:16.101832035Z" level=error msg="encountered an error cleaning up failed sandbox \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.107622 containerd[2134]: time="2026-04-24T23:38:16.107481599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6845df7cfd-kfbzs,Uid:c768fe2c-5b19-4c7a-88e7-ffec16fc16fe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.108576 kubelet[3612]: E0424 23:38:16.108299 3612 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.108576 
kubelet[3612]: E0424 23:38:16.108422 3612 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6845df7cfd-kfbzs" Apr 24 23:38:16.108576 kubelet[3612]: E0424 23:38:16.108460 3612 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6845df7cfd-kfbzs" Apr 24 23:38:16.110655 kubelet[3612]: E0424 23:38:16.108579 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6845df7cfd-kfbzs_calico-system(c768fe2c-5b19-4c7a-88e7-ffec16fc16fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6845df7cfd-kfbzs_calico-system(c768fe2c-5b19-4c7a-88e7-ffec16fc16fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6845df7cfd-kfbzs" podUID="c768fe2c-5b19-4c7a-88e7-ffec16fc16fe" Apr 24 23:38:16.152407 containerd[2134]: time="2026-04-24T23:38:16.151337783Z" level=error msg="Failed to destroy network for sandbox 
\"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.155342 containerd[2134]: time="2026-04-24T23:38:16.155243963Z" level=error msg="encountered an error cleaning up failed sandbox \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.155647 containerd[2134]: time="2026-04-24T23:38:16.155364887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c67d58b7d-v97dz,Uid:56de8078-593a-4547-8372-d1bfdb078e2b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.156143 kubelet[3612]: E0424 23:38:16.155768 3612 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.156143 kubelet[3612]: E0424 23:38:16.156073 3612 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c67d58b7d-v97dz" Apr 24 23:38:16.156143 kubelet[3612]: E0424 23:38:16.156127 3612 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c67d58b7d-v97dz" Apr 24 23:38:16.159834 kubelet[3612]: E0424 23:38:16.156234 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c67d58b7d-v97dz_calico-system(56de8078-593a-4547-8372-d1bfdb078e2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c67d58b7d-v97dz_calico-system(56de8078-593a-4547-8372-d1bfdb078e2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c67d58b7d-v97dz" podUID="56de8078-593a-4547-8372-d1bfdb078e2b" Apr 24 23:38:16.221160 containerd[2134]: time="2026-04-24T23:38:16.218550120Z" level=error msg="Failed to destroy network for sandbox \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.244042 containerd[2134]: time="2026-04-24T23:38:16.237437568Z" level=error msg="encountered an error cleaning up failed 
sandbox \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.244042 containerd[2134]: time="2026-04-24T23:38:16.237545880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cprf8,Uid:9ef2e298-8143-4f28-a14a-2f167f054ba4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.244256 kubelet[3612]: E0424 23:38:16.238338 3612 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.244256 kubelet[3612]: E0424 23:38:16.238425 3612 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cprf8" Apr 24 23:38:16.244256 kubelet[3612]: E0424 23:38:16.238461 3612 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cprf8" Apr 24 23:38:16.244484 kubelet[3612]: E0424 23:38:16.238544 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cprf8_kube-system(9ef2e298-8143-4f28-a14a-2f167f054ba4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cprf8_kube-system(9ef2e298-8143-4f28-a14a-2f167f054ba4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cprf8" podUID="9ef2e298-8143-4f28-a14a-2f167f054ba4" Apr 24 23:38:16.248485 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002-shm.mount: Deactivated successfully. 
Apr 24 23:38:16.280474 containerd[2134]: time="2026-04-24T23:38:16.280409436Z" level=error msg="Failed to destroy network for sandbox \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.291006 containerd[2134]: time="2026-04-24T23:38:16.290718636Z" level=error msg="encountered an error cleaning up failed sandbox \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.291006 containerd[2134]: time="2026-04-24T23:38:16.290817132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b4f49844-22zct,Uid:99243418-611d-4ac7-9803-be2a0a09e3b8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.293032 kubelet[3612]: E0424 23:38:16.292175 3612 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.293032 kubelet[3612]: E0424 23:38:16.292353 3612 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b4f49844-22zct" Apr 24 23:38:16.293032 kubelet[3612]: E0424 23:38:16.292410 3612 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b4f49844-22zct" Apr 24 23:38:16.293354 kubelet[3612]: E0424 23:38:16.292495 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b4f49844-22zct_calico-system(99243418-611d-4ac7-9803-be2a0a09e3b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b4f49844-22zct_calico-system(99243418-611d-4ac7-9803-be2a0a09e3b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b4f49844-22zct" podUID="99243418-611d-4ac7-9803-be2a0a09e3b8" Apr 24 23:38:16.293721 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509-shm.mount: Deactivated successfully. 
Apr 24 23:38:16.314521 containerd[2134]: time="2026-04-24T23:38:16.314447700Z" level=error msg="Failed to destroy network for sandbox \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.320630 containerd[2134]: time="2026-04-24T23:38:16.315408684Z" level=error msg="encountered an error cleaning up failed sandbox \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.320630 containerd[2134]: time="2026-04-24T23:38:16.315505524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6845df7cfd-nmdrp,Uid:afad92e3-d16f-4f0f-986f-20a0bd89790e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.320917 kubelet[3612]: E0424 23:38:16.315816 3612 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.320917 kubelet[3612]: E0424 23:38:16.315905 3612 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6845df7cfd-nmdrp" Apr 24 23:38:16.320917 kubelet[3612]: E0424 23:38:16.316049 3612 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6845df7cfd-nmdrp" Apr 24 23:38:16.322438 kubelet[3612]: E0424 23:38:16.316403 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6845df7cfd-nmdrp_calico-system(afad92e3-d16f-4f0f-986f-20a0bd89790e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6845df7cfd-nmdrp_calico-system(afad92e3-d16f-4f0f-986f-20a0bd89790e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6845df7cfd-nmdrp" podUID="afad92e3-d16f-4f0f-986f-20a0bd89790e" Apr 24 23:38:16.331216 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c-shm.mount: Deactivated successfully. 
Apr 24 23:38:16.421927 containerd[2134]: time="2026-04-24T23:38:16.421864981Z" level=error msg="Failed to destroy network for sandbox \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.423418 containerd[2134]: time="2026-04-24T23:38:16.423347845Z" level=error msg="encountered an error cleaning up failed sandbox \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.423768 containerd[2134]: time="2026-04-24T23:38:16.423636373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-vgkrr,Uid:5e7ff394-4d7c-4f1e-b093-f94b4553f9bc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.424248 containerd[2134]: time="2026-04-24T23:38:16.424094077Z" level=error msg="Failed to destroy network for sandbox \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.424599 kubelet[3612]: E0424 23:38:16.424522 3612 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.424698 kubelet[3612]: E0424 23:38:16.424611 3612 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-vgkrr" Apr 24 23:38:16.424698 kubelet[3612]: E0424 23:38:16.424647 3612 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-vgkrr" Apr 24 23:38:16.424849 kubelet[3612]: E0424 23:38:16.424736 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-vgkrr_calico-system(5e7ff394-4d7c-4f1e-b093-f94b4553f9bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-vgkrr_calico-system(5e7ff394-4d7c-4f1e-b093-f94b4553f9bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-vgkrr" 
podUID="5e7ff394-4d7c-4f1e-b093-f94b4553f9bc" Apr 24 23:38:16.428330 containerd[2134]: time="2026-04-24T23:38:16.427453633Z" level=error msg="encountered an error cleaning up failed sandbox \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.428330 containerd[2134]: time="2026-04-24T23:38:16.428248153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbn7,Uid:c5340818-2c6e-4850-b9c6-6c45955ef8bc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.429830 kubelet[3612]: E0424 23:38:16.429323 3612 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:16.429830 kubelet[3612]: E0424 23:38:16.429649 3612 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rxbn7" Apr 24 23:38:16.430467 
kubelet[3612]: E0424 23:38:16.429927 3612 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rxbn7" Apr 24 23:38:16.431688 kubelet[3612]: E0424 23:38:16.431180 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rxbn7_kube-system(c5340818-2c6e-4850-b9c6-6c45955ef8bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rxbn7_kube-system(c5340818-2c6e-4850-b9c6-6c45955ef8bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rxbn7" podUID="c5340818-2c6e-4850-b9c6-6c45955ef8bc" Apr 24 23:38:16.461161 containerd[2134]: time="2026-04-24T23:38:16.460967329Z" level=info msg="StartContainer for \"cfd044777a0ffe1dce14c9c237d19b074640ceda4e2c8e83d6c82658e8da84b9\" returns successfully" Apr 24 23:38:16.589122 containerd[2134]: time="2026-04-24T23:38:16.587615354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zvsb9,Uid:70f656aa-464e-42e4-84a2-cf156c522759,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:16.902704 kubelet[3612]: I0424 23:38:16.902461 3612 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Apr 24 23:38:16.920051 containerd[2134]: 
time="2026-04-24T23:38:16.918502131Z" level=info msg="StopPodSandbox for \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\"" Apr 24 23:38:16.920196 kubelet[3612]: I0424 23:38:16.918798 3612 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Apr 24 23:38:16.925697 containerd[2134]: time="2026-04-24T23:38:16.923589471Z" level=info msg="Ensure that sandbox 268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c in task-service has been cleanup successfully" Apr 24 23:38:16.930455 containerd[2134]: time="2026-04-24T23:38:16.929148459Z" level=info msg="StopPodSandbox for \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\"" Apr 24 23:38:16.936322 containerd[2134]: time="2026-04-24T23:38:16.934663215Z" level=info msg="Ensure that sandbox ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509 in task-service has been cleanup successfully" Apr 24 23:38:16.965040 kubelet[3612]: I0424 23:38:16.962845 3612 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Apr 24 23:38:16.973620 containerd[2134]: time="2026-04-24T23:38:16.971251455Z" level=info msg="StopPodSandbox for \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\"" Apr 24 23:38:16.973620 containerd[2134]: time="2026-04-24T23:38:16.973406763Z" level=info msg="Ensure that sandbox 7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748 in task-service has been cleanup successfully" Apr 24 23:38:17.009857 kubelet[3612]: I0424 23:38:17.008510 3612 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Apr 24 23:38:17.045891 containerd[2134]: time="2026-04-24T23:38:17.042902424Z" level=info msg="StopPodSandbox for 
\"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\"" Apr 24 23:38:17.046132 containerd[2134]: time="2026-04-24T23:38:17.045798204Z" level=info msg="Ensure that sandbox a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002 in task-service has been cleanup successfully" Apr 24 23:38:17.164628 kubelet[3612]: I0424 23:38:17.163931 3612 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Apr 24 23:38:17.199081 containerd[2134]: time="2026-04-24T23:38:17.195911569Z" level=info msg="StopPodSandbox for \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\"" Apr 24 23:38:17.198883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc-shm.mount: Deactivated successfully. Apr 24 23:38:17.199317 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16-shm.mount: Deactivated successfully. 
Apr 24 23:38:17.209626 containerd[2134]: time="2026-04-24T23:38:17.201593761Z" level=info msg="Ensure that sandbox de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc in task-service has been cleanup successfully" Apr 24 23:38:17.231249 kubelet[3612]: I0424 23:38:17.230886 3612 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Apr 24 23:38:17.243714 containerd[2134]: time="2026-04-24T23:38:17.242618449Z" level=info msg="StopPodSandbox for \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\"" Apr 24 23:38:17.243714 containerd[2134]: time="2026-04-24T23:38:17.243020449Z" level=info msg="Ensure that sandbox ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16 in task-service has been cleanup successfully" Apr 24 23:38:17.281483 kubelet[3612]: I0424 23:38:17.280937 3612 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Apr 24 23:38:17.292603 containerd[2134]: time="2026-04-24T23:38:17.288906037Z" level=info msg="StopPodSandbox for \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\"" Apr 24 23:38:17.292603 containerd[2134]: time="2026-04-24T23:38:17.289253965Z" level=info msg="Ensure that sandbox 72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62 in task-service has been cleanup successfully" Apr 24 23:38:17.555361 kubelet[3612]: I0424 23:38:17.553368 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-skkl6" podStartSLOduration=5.481349182 podStartE2EDuration="21.553343834s" podCreationTimestamp="2026-04-24 23:37:56 +0000 UTC" firstStartedPulling="2026-04-24 23:37:56.945857972 +0000 UTC m=+27.703589179" lastFinishedPulling="2026-04-24 23:38:13.017852624 +0000 UTC m=+43.775583831" observedRunningTime="2026-04-24 23:38:17.229463005 +0000 UTC 
m=+47.987194356" watchObservedRunningTime="2026-04-24 23:38:17.553343834 +0000 UTC m=+48.311075065" Apr 24 23:38:17.696616 containerd[2134]: 2026-04-24 23:38:16.909 [INFO][4753] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564" Apr 24 23:38:17.696616 containerd[2134]: 2026-04-24 23:38:16.909 [INFO][4753] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564" iface="eth0" netns="/var/run/netns/cni-eaaa66d0-6bfb-ff7d-9134-fd720076ca2f" Apr 24 23:38:17.696616 containerd[2134]: 2026-04-24 23:38:16.910 [INFO][4753] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564" iface="eth0" netns="/var/run/netns/cni-eaaa66d0-6bfb-ff7d-9134-fd720076ca2f" Apr 24 23:38:17.696616 containerd[2134]: 2026-04-24 23:38:16.914 [INFO][4753] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564" iface="eth0" netns="/var/run/netns/cni-eaaa66d0-6bfb-ff7d-9134-fd720076ca2f" Apr 24 23:38:17.696616 containerd[2134]: 2026-04-24 23:38:16.914 [INFO][4753] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564" Apr 24 23:38:17.696616 containerd[2134]: 2026-04-24 23:38:16.914 [INFO][4753] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564" Apr 24 23:38:17.696616 containerd[2134]: 2026-04-24 23:38:17.500 [INFO][4766] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564" HandleID="k8s-pod-network.8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564" Workload="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0" Apr 24 23:38:17.696616 containerd[2134]: 2026-04-24 23:38:17.500 [INFO][4766] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:17.696616 containerd[2134]: 2026-04-24 23:38:17.500 [INFO][4766] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:17.696616 containerd[2134]: 2026-04-24 23:38:17.562 [WARNING][4766] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564" HandleID="k8s-pod-network.8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564" Workload="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0" Apr 24 23:38:17.696616 containerd[2134]: 2026-04-24 23:38:17.562 [INFO][4766] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564" HandleID="k8s-pod-network.8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564" Workload="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0" Apr 24 23:38:17.696616 containerd[2134]: 2026-04-24 23:38:17.575 [INFO][4766] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:17.696616 containerd[2134]: 2026-04-24 23:38:17.682 [INFO][4753] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564" Apr 24 23:38:17.705626 systemd[1]: run-netns-cni\x2deaaa66d0\x2d6bfb\x2dff7d\x2d9134\x2dfd720076ca2f.mount: Deactivated successfully. Apr 24 23:38:17.720418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564-shm.mount: Deactivated successfully. 
Apr 24 23:38:17.736459 containerd[2134]: time="2026-04-24T23:38:17.736054851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zvsb9,Uid:70f656aa-464e-42e4-84a2-cf156c522759,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:17.737023 kubelet[3612]: E0424 23:38:17.736932 3612 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:17.737217 kubelet[3612]: E0424 23:38:17.737060 3612 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zvsb9" Apr 24 23:38:17.737217 kubelet[3612]: E0424 23:38:17.737103 3612 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zvsb9" Apr 24 23:38:17.739838 
kubelet[3612]: E0424 23:38:17.737200 3612 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zvsb9_calico-system(70f656aa-464e-42e4-84a2-cf156c522759)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zvsb9_calico-system(70f656aa-464e-42e4-84a2-cf156c522759)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b770d6c657e64d611262f8b55b032e8fe048cf858507502c2bfffea851c7564\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zvsb9" podUID="70f656aa-464e-42e4-84a2-cf156c522759" Apr 24 23:38:17.998094 containerd[2134]: 2026-04-24 23:38:17.824 [INFO][4861] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Apr 24 23:38:17.998094 containerd[2134]: 2026-04-24 23:38:17.825 [INFO][4861] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" iface="eth0" netns="/var/run/netns/cni-45c0f3a9-88f0-5bd1-39e6-2067d8839508" Apr 24 23:38:17.998094 containerd[2134]: 2026-04-24 23:38:17.825 [INFO][4861] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" iface="eth0" netns="/var/run/netns/cni-45c0f3a9-88f0-5bd1-39e6-2067d8839508" Apr 24 23:38:17.998094 containerd[2134]: 2026-04-24 23:38:17.826 [INFO][4861] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" iface="eth0" netns="/var/run/netns/cni-45c0f3a9-88f0-5bd1-39e6-2067d8839508" Apr 24 23:38:17.998094 containerd[2134]: 2026-04-24 23:38:17.826 [INFO][4861] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Apr 24 23:38:17.998094 containerd[2134]: 2026-04-24 23:38:17.826 [INFO][4861] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Apr 24 23:38:17.998094 containerd[2134]: 2026-04-24 23:38:17.924 [INFO][4912] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" HandleID="k8s-pod-network.72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:17.998094 containerd[2134]: 2026-04-24 23:38:17.925 [INFO][4912] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:17.998094 containerd[2134]: 2026-04-24 23:38:17.925 [INFO][4912] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:17.998094 containerd[2134]: 2026-04-24 23:38:17.947 [WARNING][4912] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" HandleID="k8s-pod-network.72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:17.998094 containerd[2134]: 2026-04-24 23:38:17.948 [INFO][4912] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" HandleID="k8s-pod-network.72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:17.998094 containerd[2134]: 2026-04-24 23:38:17.953 [INFO][4912] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:17.998094 containerd[2134]: 2026-04-24 23:38:17.964 [INFO][4861] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Apr 24 23:38:18.010672 containerd[2134]: time="2026-04-24T23:38:18.007240201Z" level=info msg="TearDown network for sandbox \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\" successfully" Apr 24 23:38:18.010672 containerd[2134]: time="2026-04-24T23:38:18.007299193Z" level=info msg="StopPodSandbox for \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\" returns successfully" Apr 24 23:38:18.010672 containerd[2134]: time="2026-04-24T23:38:18.010260037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6845df7cfd-kfbzs,Uid:c768fe2c-5b19-4c7a-88e7-ffec16fc16fe,Namespace:calico-system,Attempt:1,}" Apr 24 23:38:18.014242 systemd[1]: run-netns-cni\x2d45c0f3a9\x2d88f0\x2d5bd1\x2d39e6\x2d2067d8839508.mount: Deactivated successfully. 
Apr 24 23:38:18.056587 containerd[2134]: 2026-04-24 23:38:17.583 [INFO][4799] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Apr 24 23:38:18.056587 containerd[2134]: 2026-04-24 23:38:17.589 [INFO][4799] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" iface="eth0" netns="/var/run/netns/cni-3591dbf8-5e29-672a-a041-288d1df4c1fe" Apr 24 23:38:18.056587 containerd[2134]: 2026-04-24 23:38:17.589 [INFO][4799] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" iface="eth0" netns="/var/run/netns/cni-3591dbf8-5e29-672a-a041-288d1df4c1fe" Apr 24 23:38:18.056587 containerd[2134]: 2026-04-24 23:38:17.598 [INFO][4799] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" iface="eth0" netns="/var/run/netns/cni-3591dbf8-5e29-672a-a041-288d1df4c1fe" Apr 24 23:38:18.056587 containerd[2134]: 2026-04-24 23:38:17.602 [INFO][4799] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Apr 24 23:38:18.056587 containerd[2134]: 2026-04-24 23:38:17.602 [INFO][4799] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Apr 24 23:38:18.056587 containerd[2134]: 2026-04-24 23:38:17.975 [INFO][4885] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" HandleID="k8s-pod-network.7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Workload="ip--172--31--28--13-k8s-whisker--c67d58b7d--v97dz-eth0" Apr 24 23:38:18.056587 containerd[2134]: 2026-04-24 23:38:17.976 [INFO][4885] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:18.056587 containerd[2134]: 2026-04-24 23:38:17.976 [INFO][4885] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:18.056587 containerd[2134]: 2026-04-24 23:38:18.012 [WARNING][4885] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" HandleID="k8s-pod-network.7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Workload="ip--172--31--28--13-k8s-whisker--c67d58b7d--v97dz-eth0" Apr 24 23:38:18.056587 containerd[2134]: 2026-04-24 23:38:18.013 [INFO][4885] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" HandleID="k8s-pod-network.7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Workload="ip--172--31--28--13-k8s-whisker--c67d58b7d--v97dz-eth0" Apr 24 23:38:18.056587 containerd[2134]: 2026-04-24 23:38:18.018 [INFO][4885] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:18.056587 containerd[2134]: 2026-04-24 23:38:18.033 [INFO][4799] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Apr 24 23:38:18.058121 containerd[2134]: time="2026-04-24T23:38:18.057565561Z" level=info msg="TearDown network for sandbox \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\" successfully" Apr 24 23:38:18.058121 containerd[2134]: time="2026-04-24T23:38:18.057644965Z" level=info msg="StopPodSandbox for \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\" returns successfully" Apr 24 23:38:18.073820 systemd[1]: run-netns-cni\x2d3591dbf8\x2d5e29\x2d672a\x2da041\x2d288d1df4c1fe.mount: Deactivated successfully. 
Apr 24 23:38:18.122512 containerd[2134]: 2026-04-24 23:38:17.628 [INFO][4806] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Apr 24 23:38:18.122512 containerd[2134]: 2026-04-24 23:38:17.632 [INFO][4806] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" iface="eth0" netns="/var/run/netns/cni-be14e9fd-d285-cae1-dca1-22354420a0ed" Apr 24 23:38:18.122512 containerd[2134]: 2026-04-24 23:38:17.635 [INFO][4806] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" iface="eth0" netns="/var/run/netns/cni-be14e9fd-d285-cae1-dca1-22354420a0ed" Apr 24 23:38:18.122512 containerd[2134]: 2026-04-24 23:38:17.640 [INFO][4806] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" iface="eth0" netns="/var/run/netns/cni-be14e9fd-d285-cae1-dca1-22354420a0ed" Apr 24 23:38:18.122512 containerd[2134]: 2026-04-24 23:38:17.640 [INFO][4806] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Apr 24 23:38:18.122512 containerd[2134]: 2026-04-24 23:38:17.640 [INFO][4806] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Apr 24 23:38:18.122512 containerd[2134]: 2026-04-24 23:38:18.037 [INFO][4889] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" HandleID="k8s-pod-network.ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Workload="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:18.122512 containerd[2134]: 2026-04-24 23:38:18.038 
[INFO][4889] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:18.122512 containerd[2134]: 2026-04-24 23:38:18.039 [INFO][4889] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:18.122512 containerd[2134]: 2026-04-24 23:38:18.086 [WARNING][4889] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" HandleID="k8s-pod-network.ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Workload="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:18.122512 containerd[2134]: 2026-04-24 23:38:18.086 [INFO][4889] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" HandleID="k8s-pod-network.ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Workload="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:18.122512 containerd[2134]: 2026-04-24 23:38:18.092 [INFO][4889] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:18.122512 containerd[2134]: 2026-04-24 23:38:18.111 [INFO][4806] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Apr 24 23:38:18.125333 containerd[2134]: time="2026-04-24T23:38:18.124173553Z" level=info msg="TearDown network for sandbox \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\" successfully" Apr 24 23:38:18.125333 containerd[2134]: time="2026-04-24T23:38:18.124229413Z" level=info msg="StopPodSandbox for \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\" returns successfully" Apr 24 23:38:18.131353 containerd[2134]: time="2026-04-24T23:38:18.129921241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b4f49844-22zct,Uid:99243418-611d-4ac7-9803-be2a0a09e3b8,Namespace:calico-system,Attempt:1,}" Apr 24 23:38:18.162055 containerd[2134]: 2026-04-24 23:38:17.672 [INFO][4819] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Apr 24 23:38:18.162055 containerd[2134]: 2026-04-24 23:38:17.674 [INFO][4819] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" iface="eth0" netns="/var/run/netns/cni-d119fe20-35eb-73ec-de6d-21df78ddc752" Apr 24 23:38:18.162055 containerd[2134]: 2026-04-24 23:38:17.678 [INFO][4819] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" iface="eth0" netns="/var/run/netns/cni-d119fe20-35eb-73ec-de6d-21df78ddc752" Apr 24 23:38:18.162055 containerd[2134]: 2026-04-24 23:38:17.679 [INFO][4819] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" iface="eth0" netns="/var/run/netns/cni-d119fe20-35eb-73ec-de6d-21df78ddc752" Apr 24 23:38:18.162055 containerd[2134]: 2026-04-24 23:38:17.680 [INFO][4819] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Apr 24 23:38:18.162055 containerd[2134]: 2026-04-24 23:38:17.686 [INFO][4819] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Apr 24 23:38:18.162055 containerd[2134]: 2026-04-24 23:38:18.070 [INFO][4900] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" HandleID="k8s-pod-network.a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:18.162055 containerd[2134]: 2026-04-24 23:38:18.072 [INFO][4900] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:18.162055 containerd[2134]: 2026-04-24 23:38:18.094 [INFO][4900] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:18.162055 containerd[2134]: 2026-04-24 23:38:18.125 [WARNING][4900] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" HandleID="k8s-pod-network.a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:18.162055 containerd[2134]: 2026-04-24 23:38:18.126 [INFO][4900] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" HandleID="k8s-pod-network.a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:18.162055 containerd[2134]: 2026-04-24 23:38:18.132 [INFO][4900] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:18.162055 containerd[2134]: 2026-04-24 23:38:18.148 [INFO][4819] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Apr 24 23:38:18.164574 containerd[2134]: time="2026-04-24T23:38:18.163772941Z" level=info msg="TearDown network for sandbox \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\" successfully" Apr 24 23:38:18.164574 containerd[2134]: time="2026-04-24T23:38:18.163830289Z" level=info msg="StopPodSandbox for \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\" returns successfully" Apr 24 23:38:18.166213 containerd[2134]: time="2026-04-24T23:38:18.165057097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cprf8,Uid:9ef2e298-8143-4f28-a14a-2f167f054ba4,Namespace:kube-system,Attempt:1,}" Apr 24 23:38:18.181928 systemd[1]: run-netns-cni\x2dd119fe20\x2d35eb\x2d73ec\x2dde6d\x2d21df78ddc752.mount: Deactivated successfully. Apr 24 23:38:18.182608 systemd[1]: run-netns-cni\x2dbe14e9fd\x2dd285\x2dcae1\x2ddca1\x2d22354420a0ed.mount: Deactivated successfully. 
Apr 24 23:38:18.192309 containerd[2134]: 2026-04-24 23:38:17.633 [INFO][4798] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Apr 24 23:38:18.192309 containerd[2134]: 2026-04-24 23:38:17.634 [INFO][4798] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" iface="eth0" netns="/var/run/netns/cni-30608111-337b-8749-0f1c-f83cb68e8cda" Apr 24 23:38:18.192309 containerd[2134]: 2026-04-24 23:38:17.639 [INFO][4798] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" iface="eth0" netns="/var/run/netns/cni-30608111-337b-8749-0f1c-f83cb68e8cda" Apr 24 23:38:18.192309 containerd[2134]: 2026-04-24 23:38:17.640 [INFO][4798] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" iface="eth0" netns="/var/run/netns/cni-30608111-337b-8749-0f1c-f83cb68e8cda" Apr 24 23:38:18.192309 containerd[2134]: 2026-04-24 23:38:17.640 [INFO][4798] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Apr 24 23:38:18.192309 containerd[2134]: 2026-04-24 23:38:17.640 [INFO][4798] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Apr 24 23:38:18.192309 containerd[2134]: 2026-04-24 23:38:18.085 [INFO][4888] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" HandleID="k8s-pod-network.268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:18.192309 containerd[2134]: 2026-04-24 23:38:18.087 
[INFO][4888] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:18.192309 containerd[2134]: 2026-04-24 23:38:18.133 [INFO][4888] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:18.192309 containerd[2134]: 2026-04-24 23:38:18.163 [WARNING][4888] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" HandleID="k8s-pod-network.268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:18.192309 containerd[2134]: 2026-04-24 23:38:18.164 [INFO][4888] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" HandleID="k8s-pod-network.268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:18.192309 containerd[2134]: 2026-04-24 23:38:18.169 [INFO][4888] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:18.192309 containerd[2134]: 2026-04-24 23:38:18.176 [INFO][4798] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Apr 24 23:38:18.199468 containerd[2134]: time="2026-04-24T23:38:18.195703154Z" level=info msg="TearDown network for sandbox \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\" successfully" Apr 24 23:38:18.199468 containerd[2134]: time="2026-04-24T23:38:18.199123010Z" level=info msg="StopPodSandbox for \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\" returns successfully" Apr 24 23:38:18.201703 containerd[2134]: time="2026-04-24T23:38:18.201479990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6845df7cfd-nmdrp,Uid:afad92e3-d16f-4f0f-986f-20a0bd89790e,Namespace:calico-system,Attempt:1,}" Apr 24 23:38:18.206565 kubelet[3612]: I0424 23:38:18.205203 3612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56de8078-593a-4547-8372-d1bfdb078e2b-whisker-ca-bundle\") pod \"56de8078-593a-4547-8372-d1bfdb078e2b\" (UID: \"56de8078-593a-4547-8372-d1bfdb078e2b\") " Apr 24 23:38:18.206565 kubelet[3612]: I0424 23:38:18.205300 3612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7hl2\" (UniqueName: \"kubernetes.io/projected/56de8078-593a-4547-8372-d1bfdb078e2b-kube-api-access-g7hl2\") pod \"56de8078-593a-4547-8372-d1bfdb078e2b\" (UID: \"56de8078-593a-4547-8372-d1bfdb078e2b\") " Apr 24 23:38:18.206565 kubelet[3612]: I0424 23:38:18.205345 3612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/56de8078-593a-4547-8372-d1bfdb078e2b-nginx-config\") pod \"56de8078-593a-4547-8372-d1bfdb078e2b\" (UID: \"56de8078-593a-4547-8372-d1bfdb078e2b\") " Apr 24 23:38:18.206565 kubelet[3612]: I0424 23:38:18.205404 3612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/56de8078-593a-4547-8372-d1bfdb078e2b-whisker-backend-key-pair\") pod \"56de8078-593a-4547-8372-d1bfdb078e2b\" (UID: \"56de8078-593a-4547-8372-d1bfdb078e2b\") " Apr 24 23:38:18.206565 kubelet[3612]: I0424 23:38:18.205956 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56de8078-593a-4547-8372-d1bfdb078e2b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "56de8078-593a-4547-8372-d1bfdb078e2b" (UID: "56de8078-593a-4547-8372-d1bfdb078e2b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:38:18.212468 systemd[1]: run-netns-cni\x2d30608111\x2d337b\x2d8749\x2d0f1c\x2df83cb68e8cda.mount: Deactivated successfully. Apr 24 23:38:18.218277 kubelet[3612]: I0424 23:38:18.218202 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56de8078-593a-4547-8372-d1bfdb078e2b-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "56de8078-593a-4547-8372-d1bfdb078e2b" (UID: "56de8078-593a-4547-8372-d1bfdb078e2b"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:38:18.260087 kubelet[3612]: I0424 23:38:18.258629 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56de8078-593a-4547-8372-d1bfdb078e2b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "56de8078-593a-4547-8372-d1bfdb078e2b" (UID: "56de8078-593a-4547-8372-d1bfdb078e2b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:38:18.261654 systemd[1]: var-lib-kubelet-pods-56de8078\x2d593a\x2d4547\x2d8372\x2dd1bfdb078e2b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 24 23:38:18.262922 kubelet[3612]: I0424 23:38:18.262838 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56de8078-593a-4547-8372-d1bfdb078e2b-kube-api-access-g7hl2" (OuterVolumeSpecName: "kube-api-access-g7hl2") pod "56de8078-593a-4547-8372-d1bfdb078e2b" (UID: "56de8078-593a-4547-8372-d1bfdb078e2b"). InnerVolumeSpecName "kube-api-access-g7hl2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:38:18.292033 systemd[1]: var-lib-kubelet-pods-56de8078\x2d593a\x2d4547\x2d8372\x2dd1bfdb078e2b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg7hl2.mount: Deactivated successfully. Apr 24 23:38:18.297545 containerd[2134]: 2026-04-24 23:38:17.852 [INFO][4853] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Apr 24 23:38:18.297545 containerd[2134]: 2026-04-24 23:38:17.852 [INFO][4853] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" iface="eth0" netns="/var/run/netns/cni-5a9faca2-409c-f02c-2fe9-7ead939f64b5" Apr 24 23:38:18.297545 containerd[2134]: 2026-04-24 23:38:17.852 [INFO][4853] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" iface="eth0" netns="/var/run/netns/cni-5a9faca2-409c-f02c-2fe9-7ead939f64b5" Apr 24 23:38:18.297545 containerd[2134]: 2026-04-24 23:38:17.854 [INFO][4853] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" iface="eth0" netns="/var/run/netns/cni-5a9faca2-409c-f02c-2fe9-7ead939f64b5" Apr 24 23:38:18.297545 containerd[2134]: 2026-04-24 23:38:17.855 [INFO][4853] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Apr 24 23:38:18.297545 containerd[2134]: 2026-04-24 23:38:17.855 [INFO][4853] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Apr 24 23:38:18.297545 containerd[2134]: 2026-04-24 23:38:18.140 [INFO][4919] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" HandleID="k8s-pod-network.ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:18.297545 containerd[2134]: 2026-04-24 23:38:18.141 [INFO][4919] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:18.297545 containerd[2134]: 2026-04-24 23:38:18.171 [INFO][4919] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:18.297545 containerd[2134]: 2026-04-24 23:38:18.211 [WARNING][4919] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" HandleID="k8s-pod-network.ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:18.297545 containerd[2134]: 2026-04-24 23:38:18.211 [INFO][4919] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" HandleID="k8s-pod-network.ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:18.297545 containerd[2134]: 2026-04-24 23:38:18.240 [INFO][4919] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:18.297545 containerd[2134]: 2026-04-24 23:38:18.270 [INFO][4853] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Apr 24 23:38:18.304217 containerd[2134]: time="2026-04-24T23:38:18.300955754Z" level=info msg="TearDown network for sandbox \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\" successfully" Apr 24 23:38:18.304217 containerd[2134]: time="2026-04-24T23:38:18.301067594Z" level=info msg="StopPodSandbox for \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\" returns successfully" Apr 24 23:38:18.308948 kubelet[3612]: I0424 23:38:18.308548 3612 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56de8078-593a-4547-8372-d1bfdb078e2b-whisker-ca-bundle\") on node \"ip-172-31-28-13\" DevicePath \"\"" Apr 24 23:38:18.310454 kubelet[3612]: I0424 23:38:18.309331 3612 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g7hl2\" (UniqueName: \"kubernetes.io/projected/56de8078-593a-4547-8372-d1bfdb078e2b-kube-api-access-g7hl2\") on node \"ip-172-31-28-13\" DevicePath \"\"" Apr 24 23:38:18.311155 kubelet[3612]: I0424 
23:38:18.311095 3612 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/56de8078-593a-4547-8372-d1bfdb078e2b-nginx-config\") on node \"ip-172-31-28-13\" DevicePath \"\"" Apr 24 23:38:18.311155 kubelet[3612]: I0424 23:38:18.311155 3612 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/56de8078-593a-4547-8372-d1bfdb078e2b-whisker-backend-key-pair\") on node \"ip-172-31-28-13\" DevicePath \"\"" Apr 24 23:38:18.311922 containerd[2134]: time="2026-04-24T23:38:18.311834258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbn7,Uid:c5340818-2c6e-4850-b9c6-6c45955ef8bc,Namespace:kube-system,Attempt:1,}" Apr 24 23:38:18.312587 containerd[2134]: time="2026-04-24T23:38:18.312494714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zvsb9,Uid:70f656aa-464e-42e4-84a2-cf156c522759,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:18.336912 containerd[2134]: 2026-04-24 23:38:17.824 [INFO][4842] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Apr 24 23:38:18.336912 containerd[2134]: 2026-04-24 23:38:17.827 [INFO][4842] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" iface="eth0" netns="/var/run/netns/cni-20318366-9753-4028-5160-104855c33503" Apr 24 23:38:18.336912 containerd[2134]: 2026-04-24 23:38:17.832 [INFO][4842] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" iface="eth0" netns="/var/run/netns/cni-20318366-9753-4028-5160-104855c33503" Apr 24 23:38:18.336912 containerd[2134]: 2026-04-24 23:38:17.835 [INFO][4842] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" iface="eth0" netns="/var/run/netns/cni-20318366-9753-4028-5160-104855c33503" Apr 24 23:38:18.336912 containerd[2134]: 2026-04-24 23:38:17.835 [INFO][4842] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Apr 24 23:38:18.336912 containerd[2134]: 2026-04-24 23:38:17.835 [INFO][4842] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Apr 24 23:38:18.336912 containerd[2134]: 2026-04-24 23:38:18.162 [INFO][4917] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" HandleID="k8s-pod-network.de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Workload="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:18.336912 containerd[2134]: 2026-04-24 23:38:18.164 [INFO][4917] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:18.336912 containerd[2134]: 2026-04-24 23:38:18.221 [INFO][4917] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:18.336912 containerd[2134]: 2026-04-24 23:38:18.279 [WARNING][4917] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" HandleID="k8s-pod-network.de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Workload="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:18.336912 containerd[2134]: 2026-04-24 23:38:18.279 [INFO][4917] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" HandleID="k8s-pod-network.de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Workload="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:18.336912 containerd[2134]: 2026-04-24 23:38:18.285 [INFO][4917] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:18.336912 containerd[2134]: 2026-04-24 23:38:18.310 [INFO][4842] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Apr 24 23:38:18.339057 containerd[2134]: time="2026-04-24T23:38:18.338233538Z" level=info msg="TearDown network for sandbox \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\" successfully" Apr 24 23:38:18.339057 containerd[2134]: time="2026-04-24T23:38:18.338287718Z" level=info msg="StopPodSandbox for \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\" returns successfully" Apr 24 23:38:18.355475 containerd[2134]: time="2026-04-24T23:38:18.355391018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-vgkrr,Uid:5e7ff394-4d7c-4f1e-b093-f94b4553f9bc,Namespace:calico-system,Attempt:1,}" Apr 24 23:38:18.514551 kubelet[3612]: I0424 23:38:18.513746 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9519b12d-dc11-46ae-98f0-b100cebca911-nginx-config\") pod \"whisker-77f46cb4c9-f97vl\" (UID: \"9519b12d-dc11-46ae-98f0-b100cebca911\") " 
pod="calico-system/whisker-77f46cb4c9-f97vl" Apr 24 23:38:18.521302 kubelet[3612]: I0424 23:38:18.518600 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9519b12d-dc11-46ae-98f0-b100cebca911-whisker-backend-key-pair\") pod \"whisker-77f46cb4c9-f97vl\" (UID: \"9519b12d-dc11-46ae-98f0-b100cebca911\") " pod="calico-system/whisker-77f46cb4c9-f97vl" Apr 24 23:38:18.521302 kubelet[3612]: I0424 23:38:18.519039 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m54vg\" (UniqueName: \"kubernetes.io/projected/9519b12d-dc11-46ae-98f0-b100cebca911-kube-api-access-m54vg\") pod \"whisker-77f46cb4c9-f97vl\" (UID: \"9519b12d-dc11-46ae-98f0-b100cebca911\") " pod="calico-system/whisker-77f46cb4c9-f97vl" Apr 24 23:38:18.524339 kubelet[3612]: I0424 23:38:18.524013 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9519b12d-dc11-46ae-98f0-b100cebca911-whisker-ca-bundle\") pod \"whisker-77f46cb4c9-f97vl\" (UID: \"9519b12d-dc11-46ae-98f0-b100cebca911\") " pod="calico-system/whisker-77f46cb4c9-f97vl" Apr 24 23:38:18.775198 containerd[2134]: time="2026-04-24T23:38:18.774750292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77f46cb4c9-f97vl,Uid:9519b12d-dc11-46ae-98f0-b100cebca911,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:19.236686 systemd[1]: run-netns-cni\x2d20318366\x2d9753\x2d4028\x2d5160\x2d104855c33503.mount: Deactivated successfully. Apr 24 23:38:19.237038 systemd[1]: run-netns-cni\x2d5a9faca2\x2d409c\x2df02c\x2d2fe9\x2d7ead939f64b5.mount: Deactivated successfully. 
Apr 24 23:38:19.602094 kubelet[3612]: I0424 23:38:19.602020 3612 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56de8078-593a-4547-8372-d1bfdb078e2b" path="/var/lib/kubelet/pods/56de8078-593a-4547-8372-d1bfdb078e2b/volumes" Apr 24 23:38:19.996411 systemd-networkd[1685]: cali628cfbea431: Link UP Apr 24 23:38:20.006886 systemd-networkd[1685]: cali628cfbea431: Gained carrier Apr 24 23:38:20.018599 (udev-worker)[5211]: Network interface NamePolicy= disabled on kernel command line. Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:18.813 [ERROR][4949] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.022 [INFO][4949] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0 calico-kube-controllers-5b4f49844- calico-system 99243418-611d-4ac7-9803-be2a0a09e3b8 904 0 2026-04-24 23:37:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b4f49844 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-13 calico-kube-controllers-5b4f49844-22zct eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali628cfbea431 [] [] }} ContainerID="0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" Namespace="calico-system" Pod="calico-kube-controllers-5b4f49844-22zct" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-" Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.025 [INFO][4949] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" Namespace="calico-system" Pod="calico-kube-controllers-5b4f49844-22zct" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.546 [INFO][5131] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" HandleID="k8s-pod-network.0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" Workload="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.677 [INFO][5131] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" HandleID="k8s-pod-network.0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" Workload="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f7380), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-13", "pod":"calico-kube-controllers-5b4f49844-22zct", "timestamp":"2026-04-24 23:38:19.5469193 +0000 UTC"}, Hostname:"ip-172-31-28-13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400016bce0)} Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.690 [INFO][5131] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.691 [INFO][5131] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.691 [INFO][5131] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-13' Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.709 [INFO][5131] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" host="ip-172-31-28-13" Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.749 [INFO][5131] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-13" Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.801 [INFO][5131] ipam/ipam.go 526: Trying affinity for 192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.810 [INFO][5131] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.821 [INFO][5131] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.821 [INFO][5131] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" host="ip-172-31-28-13" Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.829 [INFO][5131] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31 Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.846 [INFO][5131] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" host="ip-172-31-28-13" Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.865 [INFO][5131] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.193/26] block=192.168.51.192/26 
handle="k8s-pod-network.0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" host="ip-172-31-28-13" Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.868 [INFO][5131] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.193/26] handle="k8s-pod-network.0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" host="ip-172-31-28-13" Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.868 [INFO][5131] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:20.201811 containerd[2134]: 2026-04-24 23:38:19.871 [INFO][5131] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.193/26] IPv6=[] ContainerID="0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" HandleID="k8s-pod-network.0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" Workload="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:20.203727 containerd[2134]: 2026-04-24 23:38:19.932 [INFO][4949] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" Namespace="calico-system" Pod="calico-kube-controllers-5b4f49844-22zct" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0", GenerateName:"calico-kube-controllers-5b4f49844-", Namespace:"calico-system", SelfLink:"", UID:"99243418-611d-4ac7-9803-be2a0a09e3b8", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b4f49844", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"", Pod:"calico-kube-controllers-5b4f49844-22zct", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali628cfbea431", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:20.203727 containerd[2134]: 2026-04-24 23:38:19.933 [INFO][4949] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.193/32] ContainerID="0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" Namespace="calico-system" Pod="calico-kube-controllers-5b4f49844-22zct" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:20.203727 containerd[2134]: 2026-04-24 23:38:19.934 [INFO][4949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali628cfbea431 ContainerID="0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" Namespace="calico-system" Pod="calico-kube-controllers-5b4f49844-22zct" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:20.203727 containerd[2134]: 2026-04-24 23:38:20.075 [INFO][4949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" Namespace="calico-system" Pod="calico-kube-controllers-5b4f49844-22zct" 
WorkloadEndpoint="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:20.203727 containerd[2134]: 2026-04-24 23:38:20.083 [INFO][4949] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" Namespace="calico-system" Pod="calico-kube-controllers-5b4f49844-22zct" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0", GenerateName:"calico-kube-controllers-5b4f49844-", Namespace:"calico-system", SelfLink:"", UID:"99243418-611d-4ac7-9803-be2a0a09e3b8", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b4f49844", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31", Pod:"calico-kube-controllers-5b4f49844-22zct", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali628cfbea431", MAC:"de:33:86:e2:0b:07", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:20.203727 containerd[2134]: 2026-04-24 23:38:20.154 [INFO][4949] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31" Namespace="calico-system" Pod="calico-kube-controllers-5b4f49844-22zct" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:20.227742 (udev-worker)[5210]: Network interface NamePolicy= disabled on kernel command line. Apr 24 23:38:20.278789 systemd-networkd[1685]: cali1e25e6309bc: Link UP Apr 24 23:38:20.279954 systemd-networkd[1685]: cali1e25e6309bc: Gained carrier Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:18.719 [ERROR][4948] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:18.891 [INFO][4948] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0 calico-apiserver-6845df7cfd- calico-system c768fe2c-5b19-4c7a-88e7-ffec16fc16fe 907 0 2026-04-24 23:37:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6845df7cfd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-13 calico-apiserver-6845df7cfd-kfbzs eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1e25e6309bc [] [] }} ContainerID="9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-kfbzs" 
WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-" Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:18.891 [INFO][4948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-kfbzs" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:19.537 [INFO][5123] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" HandleID="k8s-pod-network.9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:19.704 [INFO][5123] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" HandleID="k8s-pod-network.9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-13", "pod":"calico-apiserver-6845df7cfd-kfbzs", "timestamp":"2026-04-24 23:38:19.537191308 +0000 UTC"}, Hostname:"ip-172-31-28-13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000185760)} Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:19.706 [INFO][5123] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:19.873 [INFO][5123] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:19.873 [INFO][5123] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-13' Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:19.893 [INFO][5123] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" host="ip-172-31-28-13" Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:19.912 [INFO][5123] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-13" Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:19.938 [INFO][5123] ipam/ipam.go 526: Trying affinity for 192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:19.955 [INFO][5123] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:19.993 [INFO][5123] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:19.993 [INFO][5123] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" host="ip-172-31-28-13" Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:20.016 [INFO][5123] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:20.077 [INFO][5123] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" host="ip-172-31-28-13" Apr 24 23:38:20.348210 containerd[2134]: 
2026-04-24 23:38:20.126 [INFO][5123] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.194/26] block=192.168.51.192/26 handle="k8s-pod-network.9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" host="ip-172-31-28-13" Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:20.126 [INFO][5123] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.194/26] handle="k8s-pod-network.9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" host="ip-172-31-28-13" Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:20.130 [INFO][5123] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:20.348210 containerd[2134]: 2026-04-24 23:38:20.133 [INFO][5123] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.194/26] IPv6=[] ContainerID="9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" HandleID="k8s-pod-network.9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:20.350631 containerd[2134]: 2026-04-24 23:38:20.209 [INFO][4948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-kfbzs" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0", GenerateName:"calico-apiserver-6845df7cfd-", Namespace:"calico-system", SelfLink:"", UID:"c768fe2c-5b19-4c7a-88e7-ffec16fc16fe", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6845df7cfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"", Pod:"calico-apiserver-6845df7cfd-kfbzs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1e25e6309bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:20.350631 containerd[2134]: 2026-04-24 23:38:20.219 [INFO][4948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.194/32] ContainerID="9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-kfbzs" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:20.350631 containerd[2134]: 2026-04-24 23:38:20.219 [INFO][4948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e25e6309bc ContainerID="9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-kfbzs" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:20.350631 containerd[2134]: 2026-04-24 23:38:20.285 [INFO][4948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-kfbzs" 
WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:20.350631 containerd[2134]: 2026-04-24 23:38:20.295 [INFO][4948] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-kfbzs" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0", GenerateName:"calico-apiserver-6845df7cfd-", Namespace:"calico-system", SelfLink:"", UID:"c768fe2c-5b19-4c7a-88e7-ffec16fc16fe", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6845df7cfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc", Pod:"calico-apiserver-6845df7cfd-kfbzs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1e25e6309bc", MAC:"aa:43:d5:88:b3:69", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:20.350631 containerd[2134]: 2026-04-24 23:38:20.328 [INFO][4948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-kfbzs" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:20.400585 kubelet[3612]: I0424 23:38:20.400290 3612 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:38:20.441555 containerd[2134]: time="2026-04-24T23:38:20.438838565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:20.441555 containerd[2134]: time="2026-04-24T23:38:20.438950213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:20.441555 containerd[2134]: time="2026-04-24T23:38:20.439041269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:20.441555 containerd[2134]: time="2026-04-24T23:38:20.439218137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:20.470509 systemd-networkd[1685]: cali1871404da4e: Link UP Apr 24 23:38:20.471475 systemd-networkd[1685]: cali1871404da4e: Gained carrier Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:19.126 [ERROR][5012] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:19.424 [INFO][5012] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0 coredns-674b8bbfcf- kube-system c5340818-2c6e-4850-b9c6-6c45955ef8bc 909 0 2026-04-24 23:37:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-13 coredns-674b8bbfcf-rxbn7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1871404da4e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbn7" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-" Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:19.424 [INFO][5012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbn7" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:19.693 [INFO][5157] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" 
HandleID="k8s-pod-network.3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:19.778 [INFO][5157] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" HandleID="k8s-pod-network.3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001217d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-13", "pod":"coredns-674b8bbfcf-rxbn7", "timestamp":"2026-04-24 23:38:19.693126005 +0000 UTC"}, Hostname:"ip-172-31-28-13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000184b00)} Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:19.778 [INFO][5157] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.129 [INFO][5157] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.130 [INFO][5157] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-13' Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.152 [INFO][5157] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" host="ip-172-31-28-13" Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.180 [INFO][5157] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-13" Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.239 [INFO][5157] ipam/ipam.go 526: Trying affinity for 192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.253 [INFO][5157] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.286 [INFO][5157] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.286 [INFO][5157] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" host="ip-172-31-28-13" Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.293 [INFO][5157] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7 Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.338 [INFO][5157] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" host="ip-172-31-28-13" Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.364 [INFO][5157] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.195/26] block=192.168.51.192/26 
handle="k8s-pod-network.3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" host="ip-172-31-28-13" Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.364 [INFO][5157] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.195/26] handle="k8s-pod-network.3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" host="ip-172-31-28-13" Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.364 [INFO][5157] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:20.624106 containerd[2134]: 2026-04-24 23:38:20.364 [INFO][5157] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.195/26] IPv6=[] ContainerID="3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" HandleID="k8s-pod-network.3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:20.627528 containerd[2134]: 2026-04-24 23:38:20.378 [INFO][5012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbn7" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5340818-2c6e-4850-b9c6-6c45955ef8bc", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"", Pod:"coredns-674b8bbfcf-rxbn7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1871404da4e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:20.627528 containerd[2134]: 2026-04-24 23:38:20.381 [INFO][5012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.195/32] ContainerID="3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbn7" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:20.627528 containerd[2134]: 2026-04-24 23:38:20.381 [INFO][5012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1871404da4e ContainerID="3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbn7" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:20.627528 containerd[2134]: 2026-04-24 23:38:20.472 [INFO][5012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-rxbn7" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:20.627528 containerd[2134]: 2026-04-24 23:38:20.473 [INFO][5012] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbn7" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5340818-2c6e-4850-b9c6-6c45955ef8bc", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7", Pod:"coredns-674b8bbfcf-rxbn7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1871404da4e", MAC:"f2:2f:c5:a4:18:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:20.627528 containerd[2134]: 2026-04-24 23:38:20.550 [INFO][5012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbn7" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:20.738140 systemd-networkd[1685]: calib0510af2f81: Link UP Apr 24 23:38:20.759456 systemd-networkd[1685]: calib0510af2f81: Gained carrier Apr 24 23:38:20.782412 containerd[2134]: time="2026-04-24T23:38:20.777309498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:20.782412 containerd[2134]: time="2026-04-24T23:38:20.779501022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:20.782412 containerd[2134]: time="2026-04-24T23:38:20.780043098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:20.785116 containerd[2134]: time="2026-04-24T23:38:20.782634966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:19.236 [ERROR][5025] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:19.529 [INFO][5025] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0 goldmane-5b85766d88- calico-system 5e7ff394-4d7c-4f1e-b093-f94b4553f9bc 908 0 2026-04-24 23:37:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-13 goldmane-5b85766d88-vgkrr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib0510af2f81 [] [] }} ContainerID="a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" Namespace="calico-system" Pod="goldmane-5b85766d88-vgkrr" WorkloadEndpoint="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-" Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:19.529 [INFO][5025] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" Namespace="calico-system" Pod="goldmane-5b85766d88-vgkrr" WorkloadEndpoint="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.142 [INFO][5178] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" HandleID="k8s-pod-network.a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" Workload="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:20.873134 
containerd[2134]: 2026-04-24 23:38:20.239 [INFO][5178] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" HandleID="k8s-pod-network.a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" Workload="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000122160), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-13", "pod":"goldmane-5b85766d88-vgkrr", "timestamp":"2026-04-24 23:38:20.142677807 +0000 UTC"}, Hostname:"ip-172-31-28-13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004c3080)} Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.244 [INFO][5178] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.365 [INFO][5178] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.366 [INFO][5178] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-13' Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.373 [INFO][5178] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" host="ip-172-31-28-13" Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.389 [INFO][5178] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-13" Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.416 [INFO][5178] ipam/ipam.go 526: Trying affinity for 192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.422 [INFO][5178] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.432 [INFO][5178] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.432 [INFO][5178] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" host="ip-172-31-28-13" Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.437 [INFO][5178] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.470 [INFO][5178] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" host="ip-172-31-28-13" Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.548 [INFO][5178] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.196/26] block=192.168.51.192/26 
handle="k8s-pod-network.a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" host="ip-172-31-28-13" Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.550 [INFO][5178] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.196/26] handle="k8s-pod-network.a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" host="ip-172-31-28-13" Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.550 [INFO][5178] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:20.873134 containerd[2134]: 2026-04-24 23:38:20.550 [INFO][5178] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.196/26] IPv6=[] ContainerID="a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" HandleID="k8s-pod-network.a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" Workload="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:20.883779 containerd[2134]: 2026-04-24 23:38:20.624 [INFO][5025] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" Namespace="calico-system" Pod="goldmane-5b85766d88-vgkrr" WorkloadEndpoint="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"5e7ff394-4d7c-4f1e-b093-f94b4553f9bc", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"", Pod:"goldmane-5b85766d88-vgkrr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib0510af2f81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:20.883779 containerd[2134]: 2026-04-24 23:38:20.624 [INFO][5025] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.196/32] ContainerID="a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" Namespace="calico-system" Pod="goldmane-5b85766d88-vgkrr" WorkloadEndpoint="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:20.883779 containerd[2134]: 2026-04-24 23:38:20.624 [INFO][5025] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0510af2f81 ContainerID="a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" Namespace="calico-system" Pod="goldmane-5b85766d88-vgkrr" WorkloadEndpoint="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:20.883779 containerd[2134]: 2026-04-24 23:38:20.779 [INFO][5025] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" Namespace="calico-system" Pod="goldmane-5b85766d88-vgkrr" WorkloadEndpoint="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:20.883779 containerd[2134]: 2026-04-24 23:38:20.788 [INFO][5025] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" Namespace="calico-system" Pod="goldmane-5b85766d88-vgkrr" WorkloadEndpoint="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"5e7ff394-4d7c-4f1e-b093-f94b4553f9bc", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d", Pod:"goldmane-5b85766d88-vgkrr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib0510af2f81", MAC:"1e:c8:62:ae:e0:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:20.883779 containerd[2134]: 2026-04-24 23:38:20.854 [INFO][5025] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d" Namespace="calico-system" Pod="goldmane-5b85766d88-vgkrr" 
WorkloadEndpoint="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:20.902440 systemd-networkd[1685]: calib93debd6a71: Link UP Apr 24 23:38:20.913153 systemd-networkd[1685]: calib93debd6a71: Gained carrier Apr 24 23:38:20.982764 systemd[1]: run-containerd-runc-k8s.io-9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc-runc.wtMM8C.mount: Deactivated successfully. Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:19.113 [ERROR][4979] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:19.316 [INFO][4979] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0 calico-apiserver-6845df7cfd- calico-system afad92e3-d16f-4f0f-986f-20a0bd89790e 903 0 2026-04-24 23:37:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6845df7cfd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-13 calico-apiserver-6845df7cfd-nmdrp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib93debd6a71 [] [] }} ContainerID="45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-nmdrp" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-" Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:19.316 [INFO][4979] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-nmdrp" 
WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.105 [INFO][5153] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" HandleID="k8s-pod-network.45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.246 [INFO][5153] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" HandleID="k8s-pod-network.45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d310), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-13", "pod":"calico-apiserver-6845df7cfd-nmdrp", "timestamp":"2026-04-24 23:38:20.105609171 +0000 UTC"}, Hostname:"ip-172-31-28-13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000184840)} Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.246 [INFO][5153] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.574 [INFO][5153] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.575 [INFO][5153] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-13' Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.587 [INFO][5153] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" host="ip-172-31-28-13" Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.632 [INFO][5153] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-13" Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.702 [INFO][5153] ipam/ipam.go 526: Trying affinity for 192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.737 [INFO][5153] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.765 [INFO][5153] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.769 [INFO][5153] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" host="ip-172-31-28-13" Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.785 [INFO][5153] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.809 [INFO][5153] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" host="ip-172-31-28-13" Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.849 [INFO][5153] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.197/26] block=192.168.51.192/26 
handle="k8s-pod-network.45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" host="ip-172-31-28-13" Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.850 [INFO][5153] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.197/26] handle="k8s-pod-network.45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" host="ip-172-31-28-13" Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.850 [INFO][5153] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:21.056169 containerd[2134]: 2026-04-24 23:38:20.850 [INFO][5153] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.197/26] IPv6=[] ContainerID="45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" HandleID="k8s-pod-network.45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:21.062937 containerd[2134]: 2026-04-24 23:38:20.875 [INFO][4979] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-nmdrp" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0", GenerateName:"calico-apiserver-6845df7cfd-", Namespace:"calico-system", SelfLink:"", UID:"afad92e3-d16f-4f0f-986f-20a0bd89790e", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6845df7cfd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"", Pod:"calico-apiserver-6845df7cfd-nmdrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib93debd6a71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:21.062937 containerd[2134]: 2026-04-24 23:38:20.876 [INFO][4979] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.197/32] ContainerID="45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-nmdrp" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:21.062937 containerd[2134]: 2026-04-24 23:38:20.876 [INFO][4979] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib93debd6a71 ContainerID="45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-nmdrp" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:21.062937 containerd[2134]: 2026-04-24 23:38:20.945 [INFO][4979] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-nmdrp" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:21.062937 containerd[2134]: 2026-04-24 23:38:20.960 [INFO][4979] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-nmdrp" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0", GenerateName:"calico-apiserver-6845df7cfd-", Namespace:"calico-system", SelfLink:"", UID:"afad92e3-d16f-4f0f-986f-20a0bd89790e", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6845df7cfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c", Pod:"calico-apiserver-6845df7cfd-nmdrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib93debd6a71", MAC:"9a:bf:a6:bb:21:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:21.062937 containerd[2134]: 2026-04-24 23:38:21.004 [INFO][4979] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c" Namespace="calico-system" Pod="calico-apiserver-6845df7cfd-nmdrp" WorkloadEndpoint="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:21.180354 containerd[2134]: time="2026-04-24T23:38:21.173631760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:21.180354 containerd[2134]: time="2026-04-24T23:38:21.173764108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:21.180354 containerd[2134]: time="2026-04-24T23:38:21.173822020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:21.180354 containerd[2134]: time="2026-04-24T23:38:21.174065680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:21.329095 containerd[2134]: time="2026-04-24T23:38:21.317257973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:21.383794 systemd-networkd[1685]: cali443fe73dd5f: Link UP Apr 24 23:38:21.385796 systemd-networkd[1685]: cali443fe73dd5f: Gained carrier Apr 24 23:38:21.390589 containerd[2134]: time="2026-04-24T23:38:21.377122853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b4f49844-22zct,Uid:99243418-611d-4ac7-9803-be2a0a09e3b8,Namespace:calico-system,Attempt:1,} returns sandbox id \"0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31\"" Apr 24 23:38:21.424562 containerd[2134]: time="2026-04-24T23:38:21.420584298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:21.424562 containerd[2134]: time="2026-04-24T23:38:21.420655038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:21.424562 containerd[2134]: time="2026-04-24T23:38:21.420884670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:21.427235 containerd[2134]: time="2026-04-24T23:38:21.410422050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6845df7cfd-kfbzs,Uid:c768fe2c-5b19-4c7a-88e7-ffec16fc16fe,Namespace:calico-system,Attempt:1,} returns sandbox id \"9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc\"" Apr 24 23:38:21.439714 containerd[2134]: time="2026-04-24T23:38:21.438479202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 24 23:38:21.473199 systemd-networkd[1685]: cali628cfbea431: Gained IPv6LL Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:19.156 [ERROR][4965] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:19.429 [INFO][4965] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0 coredns-674b8bbfcf- kube-system 9ef2e298-8143-4f28-a14a-2f167f054ba4 905 0 2026-04-24 23:37:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-13 coredns-674b8bbfcf-cprf8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali443fe73dd5f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } 
{metrics TCP 9153 0 }] [] }} ContainerID="7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" Namespace="kube-system" Pod="coredns-674b8bbfcf-cprf8" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-" Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:19.432 [INFO][4965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" Namespace="kube-system" Pod="coredns-674b8bbfcf-cprf8" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:20.189 [INFO][5158] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" HandleID="k8s-pod-network.7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:20.293 [INFO][5158] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" HandleID="k8s-pod-network.7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d160), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-13", "pod":"coredns-674b8bbfcf-cprf8", "timestamp":"2026-04-24 23:38:20.189851607 +0000 UTC"}, Hostname:"ip-172-31-28-13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000248000)} Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:20.294 [INFO][5158] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:20.850 [INFO][5158] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:20.856 [INFO][5158] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-13' Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:20.869 [INFO][5158] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" host="ip-172-31-28-13" Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:20.982 [INFO][5158] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-13" Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:21.094 [INFO][5158] ipam/ipam.go 526: Trying affinity for 192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:21.108 [INFO][5158] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:21.138 [INFO][5158] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:21.141 [INFO][5158] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" host="ip-172-31-28-13" Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:21.156 [INFO][5158] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487 Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:21.195 [INFO][5158] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" host="ip-172-31-28-13" Apr 24 23:38:21.561270 containerd[2134]: 
2026-04-24 23:38:21.234 [INFO][5158] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.198/26] block=192.168.51.192/26 handle="k8s-pod-network.7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" host="ip-172-31-28-13" Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:21.235 [INFO][5158] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.198/26] handle="k8s-pod-network.7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" host="ip-172-31-28-13" Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:21.235 [INFO][5158] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:21.561270 containerd[2134]: 2026-04-24 23:38:21.235 [INFO][5158] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.198/26] IPv6=[] ContainerID="7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" HandleID="k8s-pod-network.7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:21.565361 containerd[2134]: 2026-04-24 23:38:21.269 [INFO][4965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" Namespace="kube-system" Pod="coredns-674b8bbfcf-cprf8" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9ef2e298-8143-4f28-a14a-2f167f054ba4", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"", Pod:"coredns-674b8bbfcf-cprf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali443fe73dd5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:21.565361 containerd[2134]: 2026-04-24 23:38:21.273 [INFO][4965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.198/32] ContainerID="7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" Namespace="kube-system" Pod="coredns-674b8bbfcf-cprf8" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:21.565361 containerd[2134]: 2026-04-24 23:38:21.273 [INFO][4965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali443fe73dd5f ContainerID="7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" Namespace="kube-system" Pod="coredns-674b8bbfcf-cprf8" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:21.565361 containerd[2134]: 2026-04-24 23:38:21.445 [INFO][4965] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" Namespace="kube-system" Pod="coredns-674b8bbfcf-cprf8" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:21.565361 containerd[2134]: 2026-04-24 23:38:21.447 [INFO][4965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" Namespace="kube-system" Pod="coredns-674b8bbfcf-cprf8" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9ef2e298-8143-4f28-a14a-2f167f054ba4", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487", Pod:"coredns-674b8bbfcf-cprf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali443fe73dd5f", MAC:"16:62:2b:3d:89:89", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:21.565361 containerd[2134]: 2026-04-24 23:38:21.501 [INFO][4965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487" Namespace="kube-system" Pod="coredns-674b8bbfcf-cprf8" WorkloadEndpoint="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:21.665407 systemd-networkd[1685]: cali1871404da4e: Gained IPv6LL Apr 24 23:38:21.751728 containerd[2134]: time="2026-04-24T23:38:21.751385599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:21.756480 containerd[2134]: time="2026-04-24T23:38:21.753158347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:21.756480 containerd[2134]: time="2026-04-24T23:38:21.756075439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:21.760032 containerd[2134]: time="2026-04-24T23:38:21.756399475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:21.794656 systemd-networkd[1685]: cali1e25e6309bc: Gained IPv6LL Apr 24 23:38:21.806047 systemd-networkd[1685]: calib7874e4f294: Link UP Apr 24 23:38:21.811420 systemd-networkd[1685]: calib7874e4f294: Gained carrier Apr 24 23:38:21.895774 containerd[2134]: time="2026-04-24T23:38:21.895404824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:21.895774 containerd[2134]: time="2026-04-24T23:38:21.895553984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:21.895774 containerd[2134]: time="2026-04-24T23:38:21.895606280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:21.899188 containerd[2134]: time="2026-04-24T23:38:21.897803888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:19.475 [ERROR][5001] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:19.702 [INFO][5001] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0 csi-node-driver- calico-system 70f656aa-464e-42e4-84a2-cf156c522759 885 0 2026-04-24 23:37:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-13 csi-node-driver-zvsb9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib7874e4f294 [] [] }} ContainerID="5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" Namespace="calico-system" Pod="csi-node-driver-zvsb9" WorkloadEndpoint="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-" Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:19.702 [INFO][5001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" Namespace="calico-system" Pod="csi-node-driver-zvsb9" WorkloadEndpoint="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0" Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:20.309 [INFO][5188] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" HandleID="k8s-pod-network.5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" 
Workload="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0" Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:20.362 [INFO][5188] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" HandleID="k8s-pod-network.5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" Workload="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000379d60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-13", "pod":"csi-node-driver-zvsb9", "timestamp":"2026-04-24 23:38:20.309419092 +0000 UTC"}, Hostname:"ip-172-31-28-13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000f86e0)} Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:20.362 [INFO][5188] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.243 [INFO][5188] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.243 [INFO][5188] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-13' Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.256 [INFO][5188] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" host="ip-172-31-28-13" Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.300 [INFO][5188] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-13" Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.451 [INFO][5188] ipam/ipam.go 526: Trying affinity for 192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.519 [INFO][5188] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.557 [INFO][5188] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.566 [INFO][5188] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" host="ip-172-31-28-13" Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.592 [INFO][5188] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0 Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.635 [INFO][5188] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" host="ip-172-31-28-13" Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.662 [INFO][5188] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.199/26] block=192.168.51.192/26 
handle="k8s-pod-network.5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" host="ip-172-31-28-13" Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.665 [INFO][5188] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.199/26] handle="k8s-pod-network.5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" host="ip-172-31-28-13" Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.675 [INFO][5188] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:21.944522 containerd[2134]: 2026-04-24 23:38:21.681 [INFO][5188] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.199/26] IPv6=[] ContainerID="5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" HandleID="k8s-pod-network.5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" Workload="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0" Apr 24 23:38:21.951256 containerd[2134]: 2026-04-24 23:38:21.721 [INFO][5001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" Namespace="calico-system" Pod="csi-node-driver-zvsb9" WorkloadEndpoint="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70f656aa-464e-42e4-84a2-cf156c522759", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"", Pod:"csi-node-driver-zvsb9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7874e4f294", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:21.951256 containerd[2134]: 2026-04-24 23:38:21.721 [INFO][5001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.199/32] ContainerID="5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" Namespace="calico-system" Pod="csi-node-driver-zvsb9" WorkloadEndpoint="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0" Apr 24 23:38:21.951256 containerd[2134]: 2026-04-24 23:38:21.722 [INFO][5001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7874e4f294 ContainerID="5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" Namespace="calico-system" Pod="csi-node-driver-zvsb9" WorkloadEndpoint="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0" Apr 24 23:38:21.951256 containerd[2134]: 2026-04-24 23:38:21.852 [INFO][5001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" Namespace="calico-system" Pod="csi-node-driver-zvsb9" WorkloadEndpoint="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0" Apr 24 23:38:21.951256 containerd[2134]: 2026-04-24 23:38:21.856 [INFO][5001] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" Namespace="calico-system" Pod="csi-node-driver-zvsb9" WorkloadEndpoint="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70f656aa-464e-42e4-84a2-cf156c522759", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0", Pod:"csi-node-driver-zvsb9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7874e4f294", MAC:"ee:7b:f3:9f:27:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:21.951256 containerd[2134]: 2026-04-24 23:38:21.914 [INFO][5001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0" Namespace="calico-system" Pod="csi-node-driver-zvsb9" WorkloadEndpoint="ip--172--31--28--13-k8s-csi--node--driver--zvsb9-eth0" Apr 24 23:38:21.973034 containerd[2134]: time="2026-04-24T23:38:21.971908748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbn7,Uid:c5340818-2c6e-4850-b9c6-6c45955ef8bc,Namespace:kube-system,Attempt:1,} returns sandbox id \"3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7\"" Apr 24 23:38:21.998424 containerd[2134]: time="2026-04-24T23:38:21.998223824Z" level=info msg="CreateContainer within sandbox \"3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:38:22.106739 systemd-networkd[1685]: cali321fe3533a7: Link UP Apr 24 23:38:22.112725 systemd-networkd[1685]: cali321fe3533a7: Gained carrier Apr 24 23:38:22.254296 kernel: calico-node[5310]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 24 23:38:22.290720 containerd[2134]: time="2026-04-24T23:38:22.290657334Z" level=info msg="CreateContainer within sandbox \"3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"47159c29404ac40804f418a66ca370deda79732133f8b3d9898836a5fdae57b2\"" Apr 24 23:38:22.300810 containerd[2134]: time="2026-04-24T23:38:22.300235014Z" level=info msg="StartContainer for \"47159c29404ac40804f418a66ca370deda79732133f8b3d9898836a5fdae57b2\"" Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:19.662 [ERROR][5115] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:19.771 [INFO][5115] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-eth0 whisker-77f46cb4c9- calico-system 9519b12d-dc11-46ae-98f0-b100cebca911 926 0 2026-04-24 23:38:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77f46cb4c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-13 whisker-77f46cb4c9-f97vl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali321fe3533a7 [] [] }} ContainerID="02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" Namespace="calico-system" Pod="whisker-77f46cb4c9-f97vl" WorkloadEndpoint="ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-" Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:19.776 [INFO][5115] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" Namespace="calico-system" Pod="whisker-77f46cb4c9-f97vl" WorkloadEndpoint="ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-eth0" Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:20.588 [INFO][5202] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" HandleID="k8s-pod-network.02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" Workload="ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-eth0" Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:20.681 [INFO][5202] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" HandleID="k8s-pod-network.02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" Workload="ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039d600), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-13", 
"pod":"whisker-77f46cb4c9-f97vl", "timestamp":"2026-04-24 23:38:20.588305273 +0000 UTC"}, Hostname:"ip-172-31-28-13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000e22c0)} Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:20.681 [INFO][5202] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.680 [INFO][5202] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.680 [INFO][5202] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-13' Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.690 [INFO][5202] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" host="ip-172-31-28-13" Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.729 [INFO][5202] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-13" Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.810 [INFO][5202] ipam/ipam.go 526: Trying affinity for 192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.843 [INFO][5202] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.882 [INFO][5202] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-28-13" Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.882 [INFO][5202] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" host="ip-172-31-28-13" Apr 24 
23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.895 [INFO][5202] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309 Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.938 [INFO][5202] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" host="ip-172-31-28-13" Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.977 [INFO][5202] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.200/26] block=192.168.51.192/26 handle="k8s-pod-network.02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" host="ip-172-31-28-13" Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.979 [INFO][5202] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.200/26] handle="k8s-pod-network.02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" host="ip-172-31-28-13" Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.979 [INFO][5202] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 23:38:22.317681 containerd[2134]: 2026-04-24 23:38:21.979 [INFO][5202] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.200/26] IPv6=[] ContainerID="02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" HandleID="k8s-pod-network.02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" Workload="ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-eth0" Apr 24 23:38:22.319092 containerd[2134]: 2026-04-24 23:38:22.006 [INFO][5115] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" Namespace="calico-system" Pod="whisker-77f46cb4c9-f97vl" WorkloadEndpoint="ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-eth0", GenerateName:"whisker-77f46cb4c9-", Namespace:"calico-system", SelfLink:"", UID:"9519b12d-dc11-46ae-98f0-b100cebca911", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77f46cb4c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"", Pod:"whisker-77f46cb4c9-f97vl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali321fe3533a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:22.319092 containerd[2134]: 2026-04-24 23:38:22.008 [INFO][5115] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.200/32] ContainerID="02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" Namespace="calico-system" Pod="whisker-77f46cb4c9-f97vl" WorkloadEndpoint="ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-eth0" Apr 24 23:38:22.319092 containerd[2134]: 2026-04-24 23:38:22.009 [INFO][5115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali321fe3533a7 ContainerID="02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" Namespace="calico-system" Pod="whisker-77f46cb4c9-f97vl" WorkloadEndpoint="ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-eth0" Apr 24 23:38:22.319092 containerd[2134]: 2026-04-24 23:38:22.128 [INFO][5115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" Namespace="calico-system" Pod="whisker-77f46cb4c9-f97vl" WorkloadEndpoint="ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-eth0" Apr 24 23:38:22.319092 containerd[2134]: 2026-04-24 23:38:22.162 [INFO][5115] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" Namespace="calico-system" Pod="whisker-77f46cb4c9-f97vl" WorkloadEndpoint="ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-eth0", GenerateName:"whisker-77f46cb4c9-", Namespace:"calico-system", SelfLink:"", UID:"9519b12d-dc11-46ae-98f0-b100cebca911", ResourceVersion:"926", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77f46cb4c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309", Pod:"whisker-77f46cb4c9-f97vl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali321fe3533a7", MAC:"4a:79:4a:86:c1:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:22.319092 containerd[2134]: 2026-04-24 23:38:22.225 [INFO][5115] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309" Namespace="calico-system" Pod="whisker-77f46cb4c9-f97vl" WorkloadEndpoint="ip--172--31--28--13-k8s-whisker--77f46cb4c9--f97vl-eth0" Apr 24 23:38:22.347936 containerd[2134]: time="2026-04-24T23:38:22.338020302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:22.347936 containerd[2134]: time="2026-04-24T23:38:22.339917934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:22.347936 containerd[2134]: time="2026-04-24T23:38:22.339949530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:22.347936 containerd[2134]: time="2026-04-24T23:38:22.345393978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:22.433964 systemd-networkd[1685]: calib0510af2f81: Gained IPv6LL Apr 24 23:38:22.520550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3721905366.mount: Deactivated successfully. Apr 24 23:38:22.552116 containerd[2134]: time="2026-04-24T23:38:22.551208235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-vgkrr,Uid:5e7ff394-4d7c-4f1e-b093-f94b4553f9bc,Namespace:calico-system,Attempt:1,} returns sandbox id \"a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d\"" Apr 24 23:38:22.689248 systemd-networkd[1685]: calib93debd6a71: Gained IPv6LL Apr 24 23:38:22.827010 systemd-networkd[1685]: cali443fe73dd5f: Gained IPv6LL Apr 24 23:38:22.844860 containerd[2134]: time="2026-04-24T23:38:22.844796181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6845df7cfd-nmdrp,Uid:afad92e3-d16f-4f0f-986f-20a0bd89790e,Namespace:calico-system,Attempt:1,} returns sandbox id \"45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c\"" Apr 24 23:38:22.886351 systemd-resolved[2021]: Under memory pressure, flushing caches. Apr 24 23:38:22.890731 systemd-journald[1612]: Under memory pressure, flushing caches. 
Apr 24 23:38:22.890848 containerd[2134]: time="2026-04-24T23:38:22.888256305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cprf8,Uid:9ef2e298-8143-4f28-a14a-2f167f054ba4,Namespace:kube-system,Attempt:1,} returns sandbox id \"7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487\"" Apr 24 23:38:22.886429 systemd-resolved[2021]: Flushed all caches. Apr 24 23:38:22.919029 containerd[2134]: time="2026-04-24T23:38:22.915589221Z" level=info msg="CreateContainer within sandbox \"7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:38:22.957239 containerd[2134]: time="2026-04-24T23:38:22.945669693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:22.957239 containerd[2134]: time="2026-04-24T23:38:22.945783681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:22.957239 containerd[2134]: time="2026-04-24T23:38:22.945813765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:22.957239 containerd[2134]: time="2026-04-24T23:38:22.946065345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:23.073514 containerd[2134]: time="2026-04-24T23:38:23.073420110Z" level=info msg="StartContainer for \"47159c29404ac40804f418a66ca370deda79732133f8b3d9898836a5fdae57b2\" returns successfully" Apr 24 23:38:23.154549 containerd[2134]: time="2026-04-24T23:38:23.154270686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zvsb9,Uid:70f656aa-464e-42e4-84a2-cf156c522759,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0\"" Apr 24 23:38:23.197389 containerd[2134]: time="2026-04-24T23:38:23.197328090Z" level=info msg="CreateContainer within sandbox \"7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4cb835fbac4834ba80e8e4cbe7b8f43eafc2bad30cb01bccfe499ea3e764bde\"" Apr 24 23:38:23.203353 containerd[2134]: time="2026-04-24T23:38:23.201635346Z" level=info msg="StartContainer for \"e4cb835fbac4834ba80e8e4cbe7b8f43eafc2bad30cb01bccfe499ea3e764bde\"" Apr 24 23:38:23.201831 systemd-networkd[1685]: calib7874e4f294: Gained IPv6LL Apr 24 23:38:23.212685 containerd[2134]: time="2026-04-24T23:38:23.211333902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77f46cb4c9-f97vl,Uid:9519b12d-dc11-46ae-98f0-b100cebca911,Namespace:calico-system,Attempt:0,} returns sandbox id \"02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309\"" Apr 24 23:38:23.414585 containerd[2134]: time="2026-04-24T23:38:23.414260575Z" level=info msg="StartContainer for \"e4cb835fbac4834ba80e8e4cbe7b8f43eafc2bad30cb01bccfe499ea3e764bde\" returns successfully" Apr 24 23:38:23.743296 kubelet[3612]: I0424 23:38:23.742874 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rxbn7" podStartSLOduration=50.742842945 podStartE2EDuration="50.742842945s" podCreationTimestamp="2026-04-24 
23:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:38:23.736151361 +0000 UTC m=+54.493882616" watchObservedRunningTime="2026-04-24 23:38:23.742842945 +0000 UTC m=+54.500574164" Apr 24 23:38:23.778885 kubelet[3612]: I0424 23:38:23.777463 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cprf8" podStartSLOduration=50.777441885 podStartE2EDuration="50.777441885s" podCreationTimestamp="2026-04-24 23:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:38:23.777349257 +0000 UTC m=+54.535080488" watchObservedRunningTime="2026-04-24 23:38:23.777441885 +0000 UTC m=+54.535173104" Apr 24 23:38:23.969315 systemd-networkd[1685]: cali321fe3533a7: Gained IPv6LL Apr 24 23:38:24.153452 systemd-networkd[1685]: vxlan.calico: Link UP Apr 24 23:38:24.153475 systemd-networkd[1685]: vxlan.calico: Gained carrier Apr 24 23:38:25.826788 containerd[2134]: time="2026-04-24T23:38:25.826024715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:25.828707 containerd[2134]: time="2026-04-24T23:38:25.828648899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 24 23:38:25.830450 containerd[2134]: time="2026-04-24T23:38:25.830371571Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:25.836974 containerd[2134]: time="2026-04-24T23:38:25.836889516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:25.839022 containerd[2134]: time="2026-04-24T23:38:25.838935912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 4.39878691s" Apr 24 23:38:25.839198 containerd[2134]: time="2026-04-24T23:38:25.839038824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 24 23:38:25.841857 containerd[2134]: time="2026-04-24T23:38:25.841792596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:38:25.899371 containerd[2134]: time="2026-04-24T23:38:25.899193036Z" level=info msg="CreateContainer within sandbox \"0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 23:38:25.929323 containerd[2134]: time="2026-04-24T23:38:25.929161032Z" level=info msg="CreateContainer within sandbox \"0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3e84aad29ac3d0269a3de514a04b5d9cab8ad844b2d85b607006a6caf4a99a9b\"" Apr 24 23:38:25.933079 containerd[2134]: time="2026-04-24T23:38:25.932658192Z" level=info msg="StartContainer for \"3e84aad29ac3d0269a3de514a04b5d9cab8ad844b2d85b607006a6caf4a99a9b\"" Apr 24 23:38:26.017803 systemd-networkd[1685]: vxlan.calico: Gained IPv6LL Apr 24 23:38:26.120022 containerd[2134]: time="2026-04-24T23:38:26.118923909Z" level=info msg="StartContainer for 
\"3e84aad29ac3d0269a3de514a04b5d9cab8ad844b2d85b607006a6caf4a99a9b\" returns successfully" Apr 24 23:38:26.904471 kubelet[3612]: I0424 23:38:26.901866 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b4f49844-22zct" podStartSLOduration=26.497161663 podStartE2EDuration="30.901841197s" podCreationTimestamp="2026-04-24 23:37:56 +0000 UTC" firstStartedPulling="2026-04-24 23:38:21.436686066 +0000 UTC m=+52.194417285" lastFinishedPulling="2026-04-24 23:38:25.841365612 +0000 UTC m=+56.599096819" observedRunningTime="2026-04-24 23:38:26.797554776 +0000 UTC m=+57.555286067" watchObservedRunningTime="2026-04-24 23:38:26.901841197 +0000 UTC m=+57.659572416" Apr 24 23:38:28.475805 containerd[2134]: time="2026-04-24T23:38:28.474250117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:28.478390 containerd[2134]: time="2026-04-24T23:38:28.478325341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 24 23:38:28.480794 containerd[2134]: time="2026-04-24T23:38:28.480708301Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:28.486151 containerd[2134]: time="2026-04-24T23:38:28.486061477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:28.488466 containerd[2134]: time="2026-04-24T23:38:28.487883017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.646024253s" Apr 24 23:38:28.488466 containerd[2134]: time="2026-04-24T23:38:28.487939345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 24 23:38:28.491295 containerd[2134]: time="2026-04-24T23:38:28.491010121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 24 23:38:28.498751 containerd[2134]: time="2026-04-24T23:38:28.498373693Z" level=info msg="CreateContainer within sandbox \"9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:38:28.529288 containerd[2134]: time="2026-04-24T23:38:28.529110829Z" level=info msg="CreateContainer within sandbox \"9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f6f19e366625d3ac98e00b28e7604bf02c4d7d7e0d9be1e06c6282bb1d55b4d4\"" Apr 24 23:38:28.532253 containerd[2134]: time="2026-04-24T23:38:28.532170265Z" level=info msg="StartContainer for \"f6f19e366625d3ac98e00b28e7604bf02c4d7d7e0d9be1e06c6282bb1d55b4d4\"" Apr 24 23:38:28.588957 ntpd[2087]: Listen normally on 6 vxlan.calico 192.168.51.192:123 Apr 24 23:38:28.589245 ntpd[2087]: Listen normally on 7 cali628cfbea431 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 24 23:38:28.592052 ntpd[2087]: 24 Apr 23:38:28 ntpd[2087]: Listen normally on 6 vxlan.calico 192.168.51.192:123 Apr 24 23:38:28.592052 ntpd[2087]: 24 Apr 23:38:28 ntpd[2087]: Listen normally on 7 cali628cfbea431 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 24 23:38:28.592052 ntpd[2087]: 24 Apr 23:38:28 ntpd[2087]: Listen normally on 8 cali1e25e6309bc [fe80::ecee:eeff:feee:eeee%5]:123 Apr 24 23:38:28.592052 ntpd[2087]: 24 Apr 23:38:28 ntpd[2087]: Listen normally on 9 
cali1871404da4e [fe80::ecee:eeff:feee:eeee%6]:123 Apr 24 23:38:28.592052 ntpd[2087]: 24 Apr 23:38:28 ntpd[2087]: Listen normally on 10 calib0510af2f81 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 24 23:38:28.592052 ntpd[2087]: 24 Apr 23:38:28 ntpd[2087]: Listen normally on 11 calib93debd6a71 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 24 23:38:28.592052 ntpd[2087]: 24 Apr 23:38:28 ntpd[2087]: Listen normally on 12 cali443fe73dd5f [fe80::ecee:eeff:feee:eeee%9]:123 Apr 24 23:38:28.592052 ntpd[2087]: 24 Apr 23:38:28 ntpd[2087]: Listen normally on 13 calib7874e4f294 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 24 23:38:28.592052 ntpd[2087]: 24 Apr 23:38:28 ntpd[2087]: Listen normally on 14 cali321fe3533a7 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 24 23:38:28.592052 ntpd[2087]: 24 Apr 23:38:28 ntpd[2087]: Listen normally on 15 vxlan.calico [fe80::6404:baff:feb8:80b1%12]:123 Apr 24 23:38:28.589337 ntpd[2087]: Listen normally on 8 cali1e25e6309bc [fe80::ecee:eeff:feee:eeee%5]:123 Apr 24 23:38:28.589406 ntpd[2087]: Listen normally on 9 cali1871404da4e [fe80::ecee:eeff:feee:eeee%6]:123 Apr 24 23:38:28.589475 ntpd[2087]: Listen normally on 10 calib0510af2f81 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 24 23:38:28.589544 ntpd[2087]: Listen normally on 11 calib93debd6a71 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 24 23:38:28.589611 ntpd[2087]: Listen normally on 12 cali443fe73dd5f [fe80::ecee:eeff:feee:eeee%9]:123 Apr 24 23:38:28.589680 ntpd[2087]: Listen normally on 13 calib7874e4f294 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 24 23:38:28.589749 ntpd[2087]: Listen normally on 14 cali321fe3533a7 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 24 23:38:28.589836 ntpd[2087]: Listen normally on 15 vxlan.calico [fe80::6404:baff:feb8:80b1%12]:123 Apr 24 23:38:28.678409 containerd[2134]: time="2026-04-24T23:38:28.678285194Z" level=info msg="StartContainer for \"f6f19e366625d3ac98e00b28e7604bf02c4d7d7e0d9be1e06c6282bb1d55b4d4\" returns successfully" Apr 24 23:38:29.548452 containerd[2134]: time="2026-04-24T23:38:29.548300606Z" 
level=info msg="StopPodSandbox for \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\"" Apr 24 23:38:29.762342 containerd[2134]: 2026-04-24 23:38:29.640 [WARNING][5967] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5340818-2c6e-4850-b9c6-6c45955ef8bc", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7", Pod:"coredns-674b8bbfcf-rxbn7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1871404da4e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:29.762342 containerd[2134]: 2026-04-24 23:38:29.640 [INFO][5967] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Apr 24 23:38:29.762342 containerd[2134]: 2026-04-24 23:38:29.641 [INFO][5967] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" iface="eth0" netns="" Apr 24 23:38:29.762342 containerd[2134]: 2026-04-24 23:38:29.641 [INFO][5967] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Apr 24 23:38:29.762342 containerd[2134]: 2026-04-24 23:38:29.641 [INFO][5967] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Apr 24 23:38:29.762342 containerd[2134]: 2026-04-24 23:38:29.713 [INFO][5976] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" HandleID="k8s-pod-network.ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:29.762342 containerd[2134]: 2026-04-24 23:38:29.713 [INFO][5976] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:29.762342 containerd[2134]: 2026-04-24 23:38:29.713 [INFO][5976] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:29.762342 containerd[2134]: 2026-04-24 23:38:29.733 [WARNING][5976] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" HandleID="k8s-pod-network.ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:29.762342 containerd[2134]: 2026-04-24 23:38:29.733 [INFO][5976] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" HandleID="k8s-pod-network.ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:29.762342 containerd[2134]: 2026-04-24 23:38:29.739 [INFO][5976] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:29.762342 containerd[2134]: 2026-04-24 23:38:29.747 [INFO][5967] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Apr 24 23:38:29.766045 containerd[2134]: time="2026-04-24T23:38:29.764701083Z" level=info msg="TearDown network for sandbox \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\" successfully" Apr 24 23:38:29.766045 containerd[2134]: time="2026-04-24T23:38:29.764744547Z" level=info msg="StopPodSandbox for \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\" returns successfully" Apr 24 23:38:29.767187 containerd[2134]: time="2026-04-24T23:38:29.767116263Z" level=info msg="RemovePodSandbox for \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\"" Apr 24 23:38:29.767637 containerd[2134]: time="2026-04-24T23:38:29.767364711Z" level=info msg="Forcibly stopping sandbox \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\"" Apr 24 23:38:30.076048 containerd[2134]: 2026-04-24 23:38:29.897 [WARNING][5990] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5340818-2c6e-4850-b9c6-6c45955ef8bc", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"3c6d01b8ba475890732de623c13da8898a4dd271408b6a4b4a1285d88f53c2a7", Pod:"coredns-674b8bbfcf-rxbn7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1871404da4e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:30.076048 containerd[2134]: 2026-04-24 23:38:29.898 
[INFO][5990] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Apr 24 23:38:30.076048 containerd[2134]: 2026-04-24 23:38:29.898 [INFO][5990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" iface="eth0" netns="" Apr 24 23:38:30.076048 containerd[2134]: 2026-04-24 23:38:29.898 [INFO][5990] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Apr 24 23:38:30.076048 containerd[2134]: 2026-04-24 23:38:29.898 [INFO][5990] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Apr 24 23:38:30.076048 containerd[2134]: 2026-04-24 23:38:30.010 [INFO][6005] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" HandleID="k8s-pod-network.ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:30.076048 containerd[2134]: 2026-04-24 23:38:30.013 [INFO][6005] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:30.076048 containerd[2134]: 2026-04-24 23:38:30.013 [INFO][6005] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:30.076048 containerd[2134]: 2026-04-24 23:38:30.045 [WARNING][6005] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" HandleID="k8s-pod-network.ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:30.076048 containerd[2134]: 2026-04-24 23:38:30.046 [INFO][6005] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" HandleID="k8s-pod-network.ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--rxbn7-eth0" Apr 24 23:38:30.076048 containerd[2134]: 2026-04-24 23:38:30.053 [INFO][6005] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:30.076048 containerd[2134]: 2026-04-24 23:38:30.058 [INFO][5990] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16" Apr 24 23:38:30.076048 containerd[2134]: time="2026-04-24T23:38:30.075608689Z" level=info msg="TearDown network for sandbox \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\" successfully" Apr 24 23:38:30.102041 containerd[2134]: time="2026-04-24T23:38:30.097209361Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:30.102041 containerd[2134]: time="2026-04-24T23:38:30.097327165Z" level=info msg="RemovePodSandbox \"ec9d34e7a6f6d8ae036223ace18ce95a97710debaf886d305134aff2d6adeb16\" returns successfully" Apr 24 23:38:30.102041 containerd[2134]: time="2026-04-24T23:38:30.101901997Z" level=info msg="StopPodSandbox for \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\"" Apr 24 23:38:30.503150 containerd[2134]: 2026-04-24 23:38:30.359 [WARNING][6026] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0", GenerateName:"calico-kube-controllers-5b4f49844-", Namespace:"calico-system", SelfLink:"", UID:"99243418-611d-4ac7-9803-be2a0a09e3b8", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b4f49844", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31", Pod:"calico-kube-controllers-5b4f49844-22zct", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali628cfbea431", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:30.503150 containerd[2134]: 2026-04-24 23:38:30.360 [INFO][6026] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Apr 24 23:38:30.503150 containerd[2134]: 2026-04-24 23:38:30.360 [INFO][6026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" iface="eth0" netns="" Apr 24 23:38:30.503150 containerd[2134]: 2026-04-24 23:38:30.360 [INFO][6026] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Apr 24 23:38:30.503150 containerd[2134]: 2026-04-24 23:38:30.360 [INFO][6026] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Apr 24 23:38:30.503150 containerd[2134]: 2026-04-24 23:38:30.453 [INFO][6036] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" HandleID="k8s-pod-network.ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Workload="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:30.503150 containerd[2134]: 2026-04-24 23:38:30.453 [INFO][6036] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:30.503150 containerd[2134]: 2026-04-24 23:38:30.453 [INFO][6036] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:30.503150 containerd[2134]: 2026-04-24 23:38:30.471 [WARNING][6036] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" HandleID="k8s-pod-network.ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Workload="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:30.503150 containerd[2134]: 2026-04-24 23:38:30.471 [INFO][6036] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" HandleID="k8s-pod-network.ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Workload="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:30.503150 containerd[2134]: 2026-04-24 23:38:30.474 [INFO][6036] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:30.503150 containerd[2134]: 2026-04-24 23:38:30.486 [INFO][6026] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Apr 24 23:38:30.503150 containerd[2134]: time="2026-04-24T23:38:30.502900491Z" level=info msg="TearDown network for sandbox \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\" successfully" Apr 24 23:38:30.503150 containerd[2134]: time="2026-04-24T23:38:30.502962423Z" level=info msg="StopPodSandbox for \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\" returns successfully" Apr 24 23:38:30.508773 containerd[2134]: time="2026-04-24T23:38:30.508161735Z" level=info msg="RemovePodSandbox for \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\"" Apr 24 23:38:30.508773 containerd[2134]: time="2026-04-24T23:38:30.508217583Z" level=info msg="Forcibly stopping sandbox \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\"" Apr 24 23:38:30.832508 containerd[2134]: 2026-04-24 23:38:30.674 [WARNING][6050] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0", GenerateName:"calico-kube-controllers-5b4f49844-", Namespace:"calico-system", SelfLink:"", UID:"99243418-611d-4ac7-9803-be2a0a09e3b8", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b4f49844", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"0f8f0c6836742cafaff226afa46c8e0574e6dde55ba21691981b7e45f3a7fb31", Pod:"calico-kube-controllers-5b4f49844-22zct", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali628cfbea431", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:30.832508 containerd[2134]: 2026-04-24 23:38:30.679 [INFO][6050] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Apr 24 23:38:30.832508 containerd[2134]: 2026-04-24 23:38:30.679 [INFO][6050] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" iface="eth0" netns="" Apr 24 23:38:30.832508 containerd[2134]: 2026-04-24 23:38:30.679 [INFO][6050] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Apr 24 23:38:30.832508 containerd[2134]: 2026-04-24 23:38:30.679 [INFO][6050] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Apr 24 23:38:30.832508 containerd[2134]: 2026-04-24 23:38:30.775 [INFO][6061] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" HandleID="k8s-pod-network.ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Workload="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:30.832508 containerd[2134]: 2026-04-24 23:38:30.778 [INFO][6061] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:30.832508 containerd[2134]: 2026-04-24 23:38:30.779 [INFO][6061] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:30.832508 containerd[2134]: 2026-04-24 23:38:30.801 [WARNING][6061] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" HandleID="k8s-pod-network.ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Workload="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:30.832508 containerd[2134]: 2026-04-24 23:38:30.801 [INFO][6061] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" HandleID="k8s-pod-network.ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Workload="ip--172--31--28--13-k8s-calico--kube--controllers--5b4f49844--22zct-eth0" Apr 24 23:38:30.832508 containerd[2134]: 2026-04-24 23:38:30.816 [INFO][6061] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:30.832508 containerd[2134]: 2026-04-24 23:38:30.824 [INFO][6050] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509" Apr 24 23:38:30.833760 containerd[2134]: time="2026-04-24T23:38:30.832569304Z" level=info msg="TearDown network for sandbox \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\" successfully" Apr 24 23:38:30.847155 containerd[2134]: time="2026-04-24T23:38:30.846745288Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:30.847155 containerd[2134]: time="2026-04-24T23:38:30.846859948Z" level=info msg="RemovePodSandbox \"ca939bb17a8abcda9b95f0976c113956bd06855877e9b2c627d00cf1051b2509\" returns successfully" Apr 24 23:38:30.848056 containerd[2134]: time="2026-04-24T23:38:30.847959232Z" level=info msg="StopPodSandbox for \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\"" Apr 24 23:38:31.221311 containerd[2134]: 2026-04-24 23:38:31.021 [WARNING][6075] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"5e7ff394-4d7c-4f1e-b093-f94b4553f9bc", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d", Pod:"goldmane-5b85766d88-vgkrr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calib0510af2f81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:31.221311 containerd[2134]: 2026-04-24 23:38:31.021 [INFO][6075] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Apr 24 23:38:31.221311 containerd[2134]: 2026-04-24 23:38:31.021 [INFO][6075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" iface="eth0" netns="" Apr 24 23:38:31.221311 containerd[2134]: 2026-04-24 23:38:31.021 [INFO][6075] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Apr 24 23:38:31.221311 containerd[2134]: 2026-04-24 23:38:31.021 [INFO][6075] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Apr 24 23:38:31.221311 containerd[2134]: 2026-04-24 23:38:31.169 [INFO][6082] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" HandleID="k8s-pod-network.de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Workload="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:31.221311 containerd[2134]: 2026-04-24 23:38:31.170 [INFO][6082] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:31.221311 containerd[2134]: 2026-04-24 23:38:31.170 [INFO][6082] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:31.221311 containerd[2134]: 2026-04-24 23:38:31.196 [WARNING][6082] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" HandleID="k8s-pod-network.de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Workload="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:31.221311 containerd[2134]: 2026-04-24 23:38:31.197 [INFO][6082] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" HandleID="k8s-pod-network.de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Workload="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:31.221311 containerd[2134]: 2026-04-24 23:38:31.201 [INFO][6082] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:31.221311 containerd[2134]: 2026-04-24 23:38:31.205 [INFO][6075] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Apr 24 23:38:31.227441 containerd[2134]: time="2026-04-24T23:38:31.225906374Z" level=info msg="TearDown network for sandbox \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\" successfully" Apr 24 23:38:31.227441 containerd[2134]: time="2026-04-24T23:38:31.225954422Z" level=info msg="StopPodSandbox for \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\" returns successfully" Apr 24 23:38:31.239367 containerd[2134]: time="2026-04-24T23:38:31.239313098Z" level=info msg="RemovePodSandbox for \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\"" Apr 24 23:38:31.241295 containerd[2134]: time="2026-04-24T23:38:31.240775658Z" level=info msg="Forcibly stopping sandbox \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\"" Apr 24 23:38:31.700147 containerd[2134]: 2026-04-24 23:38:31.453 [WARNING][6096] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"5e7ff394-4d7c-4f1e-b093-f94b4553f9bc", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d", Pod:"goldmane-5b85766d88-vgkrr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib0510af2f81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:31.700147 containerd[2134]: 2026-04-24 23:38:31.453 [INFO][6096] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Apr 24 23:38:31.700147 containerd[2134]: 2026-04-24 23:38:31.453 [INFO][6096] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" iface="eth0" netns="" Apr 24 23:38:31.700147 containerd[2134]: 2026-04-24 23:38:31.453 [INFO][6096] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Apr 24 23:38:31.700147 containerd[2134]: 2026-04-24 23:38:31.453 [INFO][6096] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Apr 24 23:38:31.700147 containerd[2134]: 2026-04-24 23:38:31.653 [INFO][6104] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" HandleID="k8s-pod-network.de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Workload="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:31.700147 containerd[2134]: 2026-04-24 23:38:31.653 [INFO][6104] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:31.700147 containerd[2134]: 2026-04-24 23:38:31.653 [INFO][6104] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:31.700147 containerd[2134]: 2026-04-24 23:38:31.685 [WARNING][6104] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" HandleID="k8s-pod-network.de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Workload="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:31.700147 containerd[2134]: 2026-04-24 23:38:31.685 [INFO][6104] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" HandleID="k8s-pod-network.de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Workload="ip--172--31--28--13-k8s-goldmane--5b85766d88--vgkrr-eth0" Apr 24 23:38:31.700147 containerd[2134]: 2026-04-24 23:38:31.688 [INFO][6104] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:31.700147 containerd[2134]: 2026-04-24 23:38:31.693 [INFO][6096] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc" Apr 24 23:38:31.700147 containerd[2134]: time="2026-04-24T23:38:31.698441417Z" level=info msg="TearDown network for sandbox \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\" successfully" Apr 24 23:38:31.721332 containerd[2134]: time="2026-04-24T23:38:31.720560477Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:31.721624 containerd[2134]: time="2026-04-24T23:38:31.721577237Z" level=info msg="RemovePodSandbox \"de764de09548f9688310dfad16ea8deecccad4cfce3ad67f454c2cb85c7d06dc\" returns successfully" Apr 24 23:38:31.723253 containerd[2134]: time="2026-04-24T23:38:31.723206297Z" level=info msg="StopPodSandbox for \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\"" Apr 24 23:38:31.788277 kubelet[3612]: I0424 23:38:31.788186 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6845df7cfd-kfbzs" podStartSLOduration=30.757638346 podStartE2EDuration="37.787004897s" podCreationTimestamp="2026-04-24 23:37:54 +0000 UTC" firstStartedPulling="2026-04-24 23:38:21.460259778 +0000 UTC m=+52.217990997" lastFinishedPulling="2026-04-24 23:38:28.489626341 +0000 UTC m=+59.247357548" observedRunningTime="2026-04-24 23:38:28.806688482 +0000 UTC m=+59.564419725" watchObservedRunningTime="2026-04-24 23:38:31.787004897 +0000 UTC m=+62.544736116" Apr 24 23:38:32.186771 containerd[2134]: 2026-04-24 23:38:31.977 [WARNING][6119] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0", GenerateName:"calico-apiserver-6845df7cfd-", Namespace:"calico-system", SelfLink:"", UID:"c768fe2c-5b19-4c7a-88e7-ffec16fc16fe", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6845df7cfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc", Pod:"calico-apiserver-6845df7cfd-kfbzs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1e25e6309bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:32.186771 containerd[2134]: 2026-04-24 23:38:31.981 [INFO][6119] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Apr 24 23:38:32.186771 containerd[2134]: 2026-04-24 23:38:31.981 [INFO][6119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" iface="eth0" netns="" Apr 24 23:38:32.186771 containerd[2134]: 2026-04-24 23:38:31.981 [INFO][6119] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Apr 24 23:38:32.186771 containerd[2134]: 2026-04-24 23:38:31.981 [INFO][6119] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Apr 24 23:38:32.186771 containerd[2134]: 2026-04-24 23:38:32.076 [INFO][6128] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" HandleID="k8s-pod-network.72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:32.186771 containerd[2134]: 2026-04-24 23:38:32.076 [INFO][6128] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:32.186771 containerd[2134]: 2026-04-24 23:38:32.076 [INFO][6128] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:32.186771 containerd[2134]: 2026-04-24 23:38:32.143 [WARNING][6128] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" HandleID="k8s-pod-network.72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:32.186771 containerd[2134]: 2026-04-24 23:38:32.143 [INFO][6128] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" HandleID="k8s-pod-network.72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:32.186771 containerd[2134]: 2026-04-24 23:38:32.163 [INFO][6128] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:32.186771 containerd[2134]: 2026-04-24 23:38:32.175 [INFO][6119] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Apr 24 23:38:32.192643 containerd[2134]: time="2026-04-24T23:38:32.188167323Z" level=info msg="TearDown network for sandbox \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\" successfully" Apr 24 23:38:32.192643 containerd[2134]: time="2026-04-24T23:38:32.188215095Z" level=info msg="StopPodSandbox for \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\" returns successfully" Apr 24 23:38:32.192643 containerd[2134]: time="2026-04-24T23:38:32.191674191Z" level=info msg="RemovePodSandbox for \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\"" Apr 24 23:38:32.192643 containerd[2134]: time="2026-04-24T23:38:32.191738931Z" level=info msg="Forcibly stopping sandbox \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\"" Apr 24 23:38:32.626110 systemd[1]: Started sshd@7-172.31.28.13:22-20.229.252.112:56732.service - OpenSSH per-connection server daemon (20.229.252.112:56732). 
Apr 24 23:38:32.655172 containerd[2134]: 2026-04-24 23:38:32.406 [WARNING][6144] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0", GenerateName:"calico-apiserver-6845df7cfd-", Namespace:"calico-system", SelfLink:"", UID:"c768fe2c-5b19-4c7a-88e7-ffec16fc16fe", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6845df7cfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"9d526d80a5206ad2acc678f3052f7ce1cc8c828a3fbbd4bba3f54cc3b99243fc", Pod:"calico-apiserver-6845df7cfd-kfbzs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1e25e6309bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:32.655172 containerd[2134]: 2026-04-24 23:38:32.411 [INFO][6144] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Apr 24 23:38:32.655172 containerd[2134]: 2026-04-24 23:38:32.411 [INFO][6144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" iface="eth0" netns="" Apr 24 23:38:32.655172 containerd[2134]: 2026-04-24 23:38:32.411 [INFO][6144] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Apr 24 23:38:32.655172 containerd[2134]: 2026-04-24 23:38:32.411 [INFO][6144] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Apr 24 23:38:32.655172 containerd[2134]: 2026-04-24 23:38:32.591 [INFO][6156] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" HandleID="k8s-pod-network.72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:32.655172 containerd[2134]: 2026-04-24 23:38:32.592 [INFO][6156] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:32.655172 containerd[2134]: 2026-04-24 23:38:32.592 [INFO][6156] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:32.655172 containerd[2134]: 2026-04-24 23:38:32.632 [WARNING][6156] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" HandleID="k8s-pod-network.72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:32.655172 containerd[2134]: 2026-04-24 23:38:32.632 [INFO][6156] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" HandleID="k8s-pod-network.72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--kfbzs-eth0" Apr 24 23:38:32.655172 containerd[2134]: 2026-04-24 23:38:32.637 [INFO][6156] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:32.655172 containerd[2134]: 2026-04-24 23:38:32.649 [INFO][6144] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62" Apr 24 23:38:32.661841 containerd[2134]: time="2026-04-24T23:38:32.661090493Z" level=info msg="TearDown network for sandbox \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\" successfully" Apr 24 23:38:32.674449 containerd[2134]: time="2026-04-24T23:38:32.674385389Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:32.675118 containerd[2134]: time="2026-04-24T23:38:32.675057761Z" level=info msg="RemovePodSandbox \"72f69749c3156b92b8f29874f0e2c8e37da5940b4f6679fe70a2253bc5d9db62\" returns successfully" Apr 24 23:38:32.677420 containerd[2134]: time="2026-04-24T23:38:32.677367461Z" level=info msg="StopPodSandbox for \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\"" Apr 24 23:38:33.027619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1357572369.mount: Deactivated successfully. Apr 24 23:38:33.050170 containerd[2134]: 2026-04-24 23:38:32.823 [WARNING][6177] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0", GenerateName:"calico-apiserver-6845df7cfd-", Namespace:"calico-system", SelfLink:"", UID:"afad92e3-d16f-4f0f-986f-20a0bd89790e", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6845df7cfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c", Pod:"calico-apiserver-6845df7cfd-nmdrp", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib93debd6a71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:33.050170 containerd[2134]: 2026-04-24 23:38:32.826 [INFO][6177] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Apr 24 23:38:33.050170 containerd[2134]: 2026-04-24 23:38:32.830 [INFO][6177] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" iface="eth0" netns="" Apr 24 23:38:33.050170 containerd[2134]: 2026-04-24 23:38:32.830 [INFO][6177] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Apr 24 23:38:33.050170 containerd[2134]: 2026-04-24 23:38:32.830 [INFO][6177] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Apr 24 23:38:33.050170 containerd[2134]: 2026-04-24 23:38:32.969 [INFO][6185] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" HandleID="k8s-pod-network.268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:33.050170 containerd[2134]: 2026-04-24 23:38:32.972 [INFO][6185] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:33.050170 containerd[2134]: 2026-04-24 23:38:32.972 [INFO][6185] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:38:33.050170 containerd[2134]: 2026-04-24 23:38:33.007 [WARNING][6185] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" HandleID="k8s-pod-network.268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:33.050170 containerd[2134]: 2026-04-24 23:38:33.008 [INFO][6185] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" HandleID="k8s-pod-network.268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:33.050170 containerd[2134]: 2026-04-24 23:38:33.013 [INFO][6185] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:33.050170 containerd[2134]: 2026-04-24 23:38:33.032 [INFO][6177] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Apr 24 23:38:33.050170 containerd[2134]: time="2026-04-24T23:38:33.050111463Z" level=info msg="TearDown network for sandbox \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\" successfully" Apr 24 23:38:33.050170 containerd[2134]: time="2026-04-24T23:38:33.050150571Z" level=info msg="StopPodSandbox for \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\" returns successfully" Apr 24 23:38:33.054898 containerd[2134]: time="2026-04-24T23:38:33.053384811Z" level=info msg="RemovePodSandbox for \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\"" Apr 24 23:38:33.054898 containerd[2134]: time="2026-04-24T23:38:33.053442831Z" level=info msg="Forcibly stopping sandbox \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\"" Apr 24 23:38:33.420514 containerd[2134]: 2026-04-24 23:38:33.299 [WARNING][6204] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0", GenerateName:"calico-apiserver-6845df7cfd-", Namespace:"calico-system", SelfLink:"", UID:"afad92e3-d16f-4f0f-986f-20a0bd89790e", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6845df7cfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c", Pod:"calico-apiserver-6845df7cfd-nmdrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib93debd6a71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:33.420514 containerd[2134]: 2026-04-24 23:38:33.300 [INFO][6204] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Apr 24 23:38:33.420514 containerd[2134]: 2026-04-24 23:38:33.300 [INFO][6204] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" iface="eth0" netns="" Apr 24 23:38:33.420514 containerd[2134]: 2026-04-24 23:38:33.300 [INFO][6204] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Apr 24 23:38:33.420514 containerd[2134]: 2026-04-24 23:38:33.300 [INFO][6204] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Apr 24 23:38:33.420514 containerd[2134]: 2026-04-24 23:38:33.380 [INFO][6212] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" HandleID="k8s-pod-network.268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:33.420514 containerd[2134]: 2026-04-24 23:38:33.380 [INFO][6212] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:33.420514 containerd[2134]: 2026-04-24 23:38:33.381 [INFO][6212] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:33.420514 containerd[2134]: 2026-04-24 23:38:33.402 [WARNING][6212] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" HandleID="k8s-pod-network.268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:33.420514 containerd[2134]: 2026-04-24 23:38:33.402 [INFO][6212] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" HandleID="k8s-pod-network.268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Workload="ip--172--31--28--13-k8s-calico--apiserver--6845df7cfd--nmdrp-eth0" Apr 24 23:38:33.420514 containerd[2134]: 2026-04-24 23:38:33.405 [INFO][6212] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:33.420514 containerd[2134]: 2026-04-24 23:38:33.413 [INFO][6204] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c" Apr 24 23:38:33.422263 containerd[2134]: time="2026-04-24T23:38:33.420556877Z" level=info msg="TearDown network for sandbox \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\" successfully" Apr 24 23:38:33.426742 containerd[2134]: time="2026-04-24T23:38:33.426490745Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:33.426742 containerd[2134]: time="2026-04-24T23:38:33.426600929Z" level=info msg="RemovePodSandbox \"268aa57a4ddc33367aead0b186c7d4dbd22c9f145b0195490ff8a58975f4113c\" returns successfully" Apr 24 23:38:33.427345 containerd[2134]: time="2026-04-24T23:38:33.427294361Z" level=info msg="StopPodSandbox for \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\"" Apr 24 23:38:33.733366 sshd[6166]: Accepted publickey for core from 20.229.252.112 port 56732 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:38:33.736233 sshd[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:33.763607 systemd-logind[2104]: New session 8 of user core. Apr 24 23:38:33.773586 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 23:38:33.941080 containerd[2134]: 2026-04-24 23:38:33.535 [WARNING][6226] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9ef2e298-8143-4f28-a14a-2f167f054ba4", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ip-172-31-28-13", ContainerID:"7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487", Pod:"coredns-674b8bbfcf-cprf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali443fe73dd5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:33.941080 containerd[2134]: 2026-04-24 23:38:33.543 [INFO][6226] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Apr 24 23:38:33.941080 containerd[2134]: 2026-04-24 23:38:33.545 [INFO][6226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" iface="eth0" netns="" Apr 24 23:38:33.941080 containerd[2134]: 2026-04-24 23:38:33.547 [INFO][6226] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Apr 24 23:38:33.941080 containerd[2134]: 2026-04-24 23:38:33.550 [INFO][6226] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Apr 24 23:38:33.941080 containerd[2134]: 2026-04-24 23:38:33.793 [INFO][6233] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" HandleID="k8s-pod-network.a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:33.941080 containerd[2134]: 2026-04-24 23:38:33.801 [INFO][6233] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:33.941080 containerd[2134]: 2026-04-24 23:38:33.803 [INFO][6233] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:33.941080 containerd[2134]: 2026-04-24 23:38:33.872 [WARNING][6233] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" HandleID="k8s-pod-network.a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:33.941080 containerd[2134]: 2026-04-24 23:38:33.872 [INFO][6233] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" HandleID="k8s-pod-network.a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:33.941080 containerd[2134]: 2026-04-24 23:38:33.904 [INFO][6233] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:33.941080 containerd[2134]: 2026-04-24 23:38:33.924 [INFO][6226] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Apr 24 23:38:33.941080 containerd[2134]: time="2026-04-24T23:38:33.940878824Z" level=info msg="TearDown network for sandbox \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\" successfully" Apr 24 23:38:33.941080 containerd[2134]: time="2026-04-24T23:38:33.940915652Z" level=info msg="StopPodSandbox for \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\" returns successfully" Apr 24 23:38:33.946474 containerd[2134]: time="2026-04-24T23:38:33.946081148Z" level=info msg="RemovePodSandbox for \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\"" Apr 24 23:38:33.946474 containerd[2134]: time="2026-04-24T23:38:33.946144916Z" level=info msg="Forcibly stopping sandbox \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\"" Apr 24 23:38:34.159377 containerd[2134]: 2026-04-24 23:38:34.065 [WARNING][6250] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9ef2e298-8143-4f28-a14a-2f167f054ba4", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-13", ContainerID:"7b17d8aca7bb410f623429dd5e6c94a66f874e358efd107052ca58bca6920487", Pod:"coredns-674b8bbfcf-cprf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali443fe73dd5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:34.159377 containerd[2134]: 2026-04-24 23:38:34.065 
[INFO][6250] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Apr 24 23:38:34.159377 containerd[2134]: 2026-04-24 23:38:34.065 [INFO][6250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" iface="eth0" netns="" Apr 24 23:38:34.159377 containerd[2134]: 2026-04-24 23:38:34.065 [INFO][6250] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Apr 24 23:38:34.159377 containerd[2134]: 2026-04-24 23:38:34.065 [INFO][6250] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Apr 24 23:38:34.159377 containerd[2134]: 2026-04-24 23:38:34.130 [INFO][6257] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" HandleID="k8s-pod-network.a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:34.159377 containerd[2134]: 2026-04-24 23:38:34.130 [INFO][6257] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:34.159377 containerd[2134]: 2026-04-24 23:38:34.130 [INFO][6257] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:34.159377 containerd[2134]: 2026-04-24 23:38:34.146 [WARNING][6257] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" HandleID="k8s-pod-network.a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:34.159377 containerd[2134]: 2026-04-24 23:38:34.146 [INFO][6257] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" HandleID="k8s-pod-network.a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Workload="ip--172--31--28--13-k8s-coredns--674b8bbfcf--cprf8-eth0" Apr 24 23:38:34.159377 containerd[2134]: 2026-04-24 23:38:34.150 [INFO][6257] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:34.159377 containerd[2134]: 2026-04-24 23:38:34.154 [INFO][6250] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002" Apr 24 23:38:34.161251 containerd[2134]: time="2026-04-24T23:38:34.160552613Z" level=info msg="TearDown network for sandbox \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\" successfully" Apr 24 23:38:34.167534 containerd[2134]: time="2026-04-24T23:38:34.167478437Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:34.167956 containerd[2134]: time="2026-04-24T23:38:34.167908805Z" level=info msg="RemovePodSandbox \"a1f78317c2ebd196dfb1639196c4b6ad3606d7c0c14d854343179267bc62a002\" returns successfully" Apr 24 23:38:34.169100 containerd[2134]: time="2026-04-24T23:38:34.169043357Z" level=info msg="StopPodSandbox for \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\"" Apr 24 23:38:34.485509 containerd[2134]: 2026-04-24 23:38:34.304 [WARNING][6271] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" WorkloadEndpoint="ip--172--31--28--13-k8s-whisker--c67d58b7d--v97dz-eth0" Apr 24 23:38:34.485509 containerd[2134]: 2026-04-24 23:38:34.306 [INFO][6271] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Apr 24 23:38:34.485509 containerd[2134]: 2026-04-24 23:38:34.306 [INFO][6271] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" iface="eth0" netns="" Apr 24 23:38:34.485509 containerd[2134]: 2026-04-24 23:38:34.306 [INFO][6271] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Apr 24 23:38:34.485509 containerd[2134]: 2026-04-24 23:38:34.306 [INFO][6271] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Apr 24 23:38:34.485509 containerd[2134]: 2026-04-24 23:38:34.432 [INFO][6285] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" HandleID="k8s-pod-network.7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Workload="ip--172--31--28--13-k8s-whisker--c67d58b7d--v97dz-eth0" Apr 24 23:38:34.485509 containerd[2134]: 2026-04-24 23:38:34.433 [INFO][6285] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:34.485509 containerd[2134]: 2026-04-24 23:38:34.433 [INFO][6285] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:34.485509 containerd[2134]: 2026-04-24 23:38:34.456 [WARNING][6285] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" HandleID="k8s-pod-network.7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Workload="ip--172--31--28--13-k8s-whisker--c67d58b7d--v97dz-eth0" Apr 24 23:38:34.485509 containerd[2134]: 2026-04-24 23:38:34.456 [INFO][6285] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" HandleID="k8s-pod-network.7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Workload="ip--172--31--28--13-k8s-whisker--c67d58b7d--v97dz-eth0" Apr 24 23:38:34.485509 containerd[2134]: 2026-04-24 23:38:34.459 [INFO][6285] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:34.485509 containerd[2134]: 2026-04-24 23:38:34.475 [INFO][6271] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Apr 24 23:38:34.487299 containerd[2134]: time="2026-04-24T23:38:34.486073482Z" level=info msg="TearDown network for sandbox \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\" successfully" Apr 24 23:38:34.487299 containerd[2134]: time="2026-04-24T23:38:34.486115842Z" level=info msg="StopPodSandbox for \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\" returns successfully" Apr 24 23:38:34.489500 containerd[2134]: time="2026-04-24T23:38:34.488457762Z" level=info msg="RemovePodSandbox for \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\"" Apr 24 23:38:34.489500 containerd[2134]: time="2026-04-24T23:38:34.488516958Z" level=info msg="Forcibly stopping sandbox \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\"" Apr 24 23:38:34.595296 containerd[2134]: time="2026-04-24T23:38:34.595218835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:34.599632 
containerd[2134]: time="2026-04-24T23:38:34.599546167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 24 23:38:34.602232 containerd[2134]: time="2026-04-24T23:38:34.602157823Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:34.610808 containerd[2134]: time="2026-04-24T23:38:34.610043143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:34.612729 containerd[2134]: time="2026-04-24T23:38:34.612656587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 6.121590078s" Apr 24 23:38:34.612729 containerd[2134]: time="2026-04-24T23:38:34.612723403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 24 23:38:34.618035 containerd[2134]: time="2026-04-24T23:38:34.617849539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:38:34.631212 containerd[2134]: time="2026-04-24T23:38:34.631154791Z" level=info msg="CreateContainer within sandbox \"a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 23:38:34.674510 containerd[2134]: time="2026-04-24T23:38:34.674382091Z" level=info msg="CreateContainer within sandbox 
\"a0c1be1309fe5382033d2fbbf0870c09a5834e7b31b8514a431439851294ce9d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d148891ff5e00871ada1c3f97716ce0cc0308d0678c80129f97a0021ec9609bd\"" Apr 24 23:38:34.678571 containerd[2134]: time="2026-04-24T23:38:34.678440227Z" level=info msg="StartContainer for \"d148891ff5e00871ada1c3f97716ce0cc0308d0678c80129f97a0021ec9609bd\"" Apr 24 23:38:34.696441 sshd[6166]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:34.710610 systemd[1]: sshd@7-172.31.28.13:22-20.229.252.112:56732.service: Deactivated successfully. Apr 24 23:38:34.723142 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 23:38:34.725392 systemd-logind[2104]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:38:34.727326 containerd[2134]: 2026-04-24 23:38:34.598 [WARNING][6300] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" WorkloadEndpoint="ip--172--31--28--13-k8s-whisker--c67d58b7d--v97dz-eth0" Apr 24 23:38:34.727326 containerd[2134]: 2026-04-24 23:38:34.598 [INFO][6300] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Apr 24 23:38:34.727326 containerd[2134]: 2026-04-24 23:38:34.598 [INFO][6300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" iface="eth0" netns="" Apr 24 23:38:34.727326 containerd[2134]: 2026-04-24 23:38:34.598 [INFO][6300] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Apr 24 23:38:34.727326 containerd[2134]: 2026-04-24 23:38:34.598 [INFO][6300] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Apr 24 23:38:34.727326 containerd[2134]: 2026-04-24 23:38:34.666 [INFO][6314] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" HandleID="k8s-pod-network.7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Workload="ip--172--31--28--13-k8s-whisker--c67d58b7d--v97dz-eth0" Apr 24 23:38:34.727326 containerd[2134]: 2026-04-24 23:38:34.668 [INFO][6314] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:34.727326 containerd[2134]: 2026-04-24 23:38:34.668 [INFO][6314] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:34.727326 containerd[2134]: 2026-04-24 23:38:34.701 [WARNING][6314] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" HandleID="k8s-pod-network.7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Workload="ip--172--31--28--13-k8s-whisker--c67d58b7d--v97dz-eth0" Apr 24 23:38:34.727326 containerd[2134]: 2026-04-24 23:38:34.701 [INFO][6314] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" HandleID="k8s-pod-network.7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Workload="ip--172--31--28--13-k8s-whisker--c67d58b7d--v97dz-eth0" Apr 24 23:38:34.727326 containerd[2134]: 2026-04-24 23:38:34.705 [INFO][6314] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:34.727326 containerd[2134]: 2026-04-24 23:38:34.716 [INFO][6300] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748" Apr 24 23:38:34.728589 containerd[2134]: time="2026-04-24T23:38:34.727656272Z" level=info msg="TearDown network for sandbox \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\" successfully" Apr 24 23:38:34.729892 systemd-logind[2104]: Removed session 8. Apr 24 23:38:34.745649 containerd[2134]: time="2026-04-24T23:38:34.745209440Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:34.745649 containerd[2134]: time="2026-04-24T23:38:34.745318724Z" level=info msg="RemovePodSandbox \"7352d486e24c070545e320572be3bbb0ceb162e39e265b6c2047649f5578c748\" returns successfully" Apr 24 23:38:34.855537 containerd[2134]: time="2026-04-24T23:38:34.855421688Z" level=info msg="StartContainer for \"d148891ff5e00871ada1c3f97716ce0cc0308d0678c80129f97a0021ec9609bd\" returns successfully" Apr 24 23:38:34.960222 kubelet[3612]: I0424 23:38:34.957889 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-vgkrr" podStartSLOduration=29.975939106 podStartE2EDuration="41.957865953s" podCreationTimestamp="2026-04-24 23:37:53 +0000 UTC" firstStartedPulling="2026-04-24 23:38:22.635434748 +0000 UTC m=+53.393165967" lastFinishedPulling="2026-04-24 23:38:34.617361607 +0000 UTC m=+65.375092814" observedRunningTime="2026-04-24 23:38:34.956071041 +0000 UTC m=+65.713802284" watchObservedRunningTime="2026-04-24 23:38:34.957865953 +0000 UTC m=+65.715597172" Apr 24 23:38:35.051154 containerd[2134]: time="2026-04-24T23:38:35.050253725Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:35.054925 containerd[2134]: time="2026-04-24T23:38:35.054503597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 24 23:38:35.075388 containerd[2134]: time="2026-04-24T23:38:35.075039329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 453.599714ms" Apr 24 23:38:35.075388 containerd[2134]: time="2026-04-24T23:38:35.075105353Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 24 23:38:35.085098 containerd[2134]: time="2026-04-24T23:38:35.085029449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 24 23:38:35.092614 containerd[2134]: time="2026-04-24T23:38:35.092512997Z" level=info msg="CreateContainer within sandbox \"45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:38:35.117277 containerd[2134]: time="2026-04-24T23:38:35.116718318Z" level=info msg="CreateContainer within sandbox \"45fc1f498534cae1af5c351c09927b240116143a5c063f8e11fefa1aeb1cce3c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8a7d74553dc307f319ff1e96dcbfb904ef105ab84974c3b6bfc35b6d51a88b8d\"" Apr 24 23:38:35.121018 containerd[2134]: time="2026-04-24T23:38:35.120514602Z" level=info msg="StartContainer for \"8a7d74553dc307f319ff1e96dcbfb904ef105ab84974c3b6bfc35b6d51a88b8d\"" Apr 24 23:38:35.293975 containerd[2134]: time="2026-04-24T23:38:35.293649666Z" level=info msg="StartContainer for \"8a7d74553dc307f319ff1e96dcbfb904ef105ab84974c3b6bfc35b6d51a88b8d\" returns successfully" Apr 24 23:38:35.982828 kubelet[3612]: I0424 23:38:35.982721 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6845df7cfd-nmdrp" podStartSLOduration=29.752585962 podStartE2EDuration="41.982698334s" podCreationTimestamp="2026-04-24 23:37:54 +0000 UTC" firstStartedPulling="2026-04-24 23:38:22.851533041 +0000 UTC m=+53.609264248" lastFinishedPulling="2026-04-24 23:38:35.081645413 +0000 UTC m=+65.839376620" observedRunningTime="2026-04-24 23:38:35.980715082 +0000 UTC m=+66.738446301" watchObservedRunningTime="2026-04-24 23:38:35.982698334 +0000 UTC m=+66.740429541" Apr 24 23:38:36.555151 containerd[2134]: time="2026-04-24T23:38:36.554065005Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:36.557776 containerd[2134]: time="2026-04-24T23:38:36.556387473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 24 23:38:36.559792 containerd[2134]: time="2026-04-24T23:38:36.559676577Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:36.570718 containerd[2134]: time="2026-04-24T23:38:36.570640653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:36.574708 containerd[2134]: time="2026-04-24T23:38:36.574577037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.489480976s" Apr 24 23:38:36.575536 containerd[2134]: time="2026-04-24T23:38:36.574704225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 24 23:38:36.580700 containerd[2134]: time="2026-04-24T23:38:36.577949001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 24 23:38:36.608072 containerd[2134]: time="2026-04-24T23:38:36.607748721Z" level=info msg="CreateContainer within sandbox \"5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 23:38:36.688006 containerd[2134]: 
time="2026-04-24T23:38:36.681598125Z" level=info msg="CreateContainer within sandbox \"5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0c9dea053b469574abd1abf0728ac177fd0b065182f9dd0dcc2d0a3afdcd639b\"" Apr 24 23:38:36.688006 containerd[2134]: time="2026-04-24T23:38:36.687141621Z" level=info msg="StartContainer for \"0c9dea053b469574abd1abf0728ac177fd0b065182f9dd0dcc2d0a3afdcd639b\"" Apr 24 23:38:36.697080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322821065.mount: Deactivated successfully. Apr 24 23:38:36.982634 kubelet[3612]: I0424 23:38:36.979691 3612 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:38:37.050917 containerd[2134]: time="2026-04-24T23:38:37.049754431Z" level=info msg="StartContainer for \"0c9dea053b469574abd1abf0728ac177fd0b065182f9dd0dcc2d0a3afdcd639b\" returns successfully" Apr 24 23:38:38.206075 containerd[2134]: time="2026-04-24T23:38:38.204740517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:38.207430 containerd[2134]: time="2026-04-24T23:38:38.206665233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 24 23:38:38.210058 containerd[2134]: time="2026-04-24T23:38:38.209754513Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:38.215749 containerd[2134]: time="2026-04-24T23:38:38.215591337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:38.219875 containerd[2134]: time="2026-04-24T23:38:38.219764517Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.63965524s" Apr 24 23:38:38.220975 containerd[2134]: time="2026-04-24T23:38:38.220873941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 24 23:38:38.225221 containerd[2134]: time="2026-04-24T23:38:38.224860941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 24 23:38:38.235376 containerd[2134]: time="2026-04-24T23:38:38.235314321Z" level=info msg="CreateContainer within sandbox \"02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 23:38:38.273114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162732549.mount: Deactivated successfully. 
Apr 24 23:38:38.279595 containerd[2134]: time="2026-04-24T23:38:38.279490137Z" level=info msg="CreateContainer within sandbox \"02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"faab1e66141f6f56eec8cb6ce1956eda193feb6c0289f3934c419c36259a21f3\"" Apr 24 23:38:38.283772 containerd[2134]: time="2026-04-24T23:38:38.283692153Z" level=info msg="StartContainer for \"faab1e66141f6f56eec8cb6ce1956eda193feb6c0289f3934c419c36259a21f3\"" Apr 24 23:38:38.447552 containerd[2134]: time="2026-04-24T23:38:38.447409846Z" level=info msg="StartContainer for \"faab1e66141f6f56eec8cb6ce1956eda193feb6c0289f3934c419c36259a21f3\" returns successfully" Apr 24 23:38:39.817582 containerd[2134]: time="2026-04-24T23:38:39.817118137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:39.820104 containerd[2134]: time="2026-04-24T23:38:39.819759613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 24 23:38:39.822473 containerd[2134]: time="2026-04-24T23:38:39.822379057Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:39.830912 containerd[2134]: time="2026-04-24T23:38:39.830807569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:39.835403 containerd[2134]: time="2026-04-24T23:38:39.835182097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.610249684s" Apr 24 23:38:39.835403 containerd[2134]: time="2026-04-24T23:38:39.835253713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 24 23:38:39.838663 containerd[2134]: time="2026-04-24T23:38:39.838059337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 24 23:38:39.847297 containerd[2134]: time="2026-04-24T23:38:39.847225609Z" level=info msg="CreateContainer within sandbox \"5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 24 23:38:39.878757 systemd[1]: Started sshd@8-172.31.28.13:22-20.229.252.112:53952.service - OpenSSH per-connection server daemon (20.229.252.112:53952). 
Apr 24 23:38:39.915798 containerd[2134]: time="2026-04-24T23:38:39.915466021Z" level=info msg="CreateContainer within sandbox \"5d8740612a41a4648e5718e70c0b2928c73839f6c27ccb19c77eb5cd034ed9e0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fe39da6a36840600b89f427b42e71119b31f88b1c77db451106fde8702397083\"" Apr 24 23:38:39.916945 containerd[2134]: time="2026-04-24T23:38:39.916863841Z" level=info msg="StartContainer for \"fe39da6a36840600b89f427b42e71119b31f88b1c77db451106fde8702397083\"" Apr 24 23:38:40.081874 containerd[2134]: time="2026-04-24T23:38:40.081357934Z" level=info msg="StartContainer for \"fe39da6a36840600b89f427b42e71119b31f88b1c77db451106fde8702397083\" returns successfully" Apr 24 23:38:40.773184 kubelet[3612]: I0424 23:38:40.772932 3612 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 24 23:38:40.773184 kubelet[3612]: I0424 23:38:40.773064 3612 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 24 23:38:40.924083 sshd[6558]: Accepted publickey for core from 20.229.252.112 port 53952 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:38:40.927958 sshd[6558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:40.940088 systemd-logind[2104]: New session 9 of user core. Apr 24 23:38:40.948931 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 24 23:38:41.044463 kubelet[3612]: I0424 23:38:41.038767 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zvsb9" podStartSLOduration=28.360620096 podStartE2EDuration="45.038740055s" podCreationTimestamp="2026-04-24 23:37:56 +0000 UTC" firstStartedPulling="2026-04-24 23:38:23.159637182 +0000 UTC m=+53.917368401" lastFinishedPulling="2026-04-24 23:38:39.837757021 +0000 UTC m=+70.595488360" observedRunningTime="2026-04-24 23:38:41.033869999 +0000 UTC m=+71.791601242" watchObservedRunningTime="2026-04-24 23:38:41.038740055 +0000 UTC m=+71.796471286" Apr 24 23:38:41.762487 kubelet[3612]: I0424 23:38:41.762366 3612 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:38:41.862318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685907535.mount: Deactivated successfully. Apr 24 23:38:41.882065 sshd[6558]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:41.894411 systemd[1]: sshd@8-172.31.28.13:22-20.229.252.112:53952.service: Deactivated successfully. Apr 24 23:38:41.907976 systemd-logind[2104]: Session 9 logged out. Waiting for processes to exit. Apr 24 23:38:41.913281 systemd[1]: session-9.scope: Deactivated successfully. Apr 24 23:38:41.920321 systemd-logind[2104]: Removed session 9. 
Apr 24 23:38:41.930500 containerd[2134]: time="2026-04-24T23:38:41.930394455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:41.933738 containerd[2134]: time="2026-04-24T23:38:41.933645303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 24 23:38:41.936568 containerd[2134]: time="2026-04-24T23:38:41.936429399Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:41.945841 containerd[2134]: time="2026-04-24T23:38:41.945457768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:41.948250 containerd[2134]: time="2026-04-24T23:38:41.947936356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.109805919s" Apr 24 23:38:41.948250 containerd[2134]: time="2026-04-24T23:38:41.948049492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 24 23:38:41.960669 containerd[2134]: time="2026-04-24T23:38:41.960249796Z" level=info msg="CreateContainer within sandbox \"02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 24 23:38:41.989544 
containerd[2134]: time="2026-04-24T23:38:41.989453152Z" level=info msg="CreateContainer within sandbox \"02b4f1013a084bdb1db01358ee745692fcec04a43dc1fcb59b10b04940f63309\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"dfc3cd48ac2be835e8c6c5666a27ee08bae16c0ce735d4a06c1b28f1538b633f\"" Apr 24 23:38:41.991505 containerd[2134]: time="2026-04-24T23:38:41.991410340Z" level=info msg="StartContainer for \"dfc3cd48ac2be835e8c6c5666a27ee08bae16c0ce735d4a06c1b28f1538b633f\"" Apr 24 23:38:42.148582 containerd[2134]: time="2026-04-24T23:38:42.148174237Z" level=info msg="StartContainer for \"dfc3cd48ac2be835e8c6c5666a27ee08bae16c0ce735d4a06c1b28f1538b633f\" returns successfully" Apr 24 23:38:47.060689 systemd[1]: Started sshd@9-172.31.28.13:22-20.229.252.112:57904.service - OpenSSH per-connection server daemon (20.229.252.112:57904). Apr 24 23:38:47.335794 kubelet[3612]: I0424 23:38:47.331468 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-77f46cb4c9-f97vl" podStartSLOduration=10.597504565 podStartE2EDuration="29.33144237s" podCreationTimestamp="2026-04-24 23:38:18 +0000 UTC" firstStartedPulling="2026-04-24 23:38:23.216480043 +0000 UTC m=+53.974211262" lastFinishedPulling="2026-04-24 23:38:41.95041786 +0000 UTC m=+72.708149067" observedRunningTime="2026-04-24 23:38:43.044526049 +0000 UTC m=+73.802257280" watchObservedRunningTime="2026-04-24 23:38:47.33144237 +0000 UTC m=+78.089173589" Apr 24 23:38:48.096342 sshd[6668]: Accepted publickey for core from 20.229.252.112 port 57904 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:38:48.099519 sshd[6668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:48.109658 systemd-logind[2104]: New session 10 of user core. Apr 24 23:38:48.117471 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 24 23:38:48.962463 sshd[6668]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:48.969842 systemd[1]: sshd@9-172.31.28.13:22-20.229.252.112:57904.service: Deactivated successfully. Apr 24 23:38:48.980098 systemd-logind[2104]: Session 10 logged out. Waiting for processes to exit. Apr 24 23:38:48.981417 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 23:38:48.987773 systemd-logind[2104]: Removed session 10. Apr 24 23:38:54.139549 systemd[1]: Started sshd@10-172.31.28.13:22-20.229.252.112:57914.service - OpenSSH per-connection server daemon (20.229.252.112:57914). Apr 24 23:38:55.186886 sshd[6767]: Accepted publickey for core from 20.229.252.112 port 57914 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:38:55.189776 sshd[6767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:55.199090 systemd-logind[2104]: New session 11 of user core. Apr 24 23:38:55.204631 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 24 23:38:56.024178 sshd[6767]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:56.033470 systemd[1]: sshd@10-172.31.28.13:22-20.229.252.112:57914.service: Deactivated successfully. Apr 24 23:38:56.045604 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 23:38:56.048914 systemd-logind[2104]: Session 11 logged out. Waiting for processes to exit. Apr 24 23:38:56.054284 systemd-logind[2104]: Removed session 11. Apr 24 23:38:56.205388 systemd[1]: Started sshd@11-172.31.28.13:22-20.229.252.112:57920.service - OpenSSH per-connection server daemon (20.229.252.112:57920). Apr 24 23:38:57.257463 sshd[6781]: Accepted publickey for core from 20.229.252.112 port 57920 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:38:57.260882 sshd[6781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:57.272920 systemd-logind[2104]: New session 12 of user core. 
Apr 24 23:38:57.277059 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 24 23:38:58.185132 sshd[6781]: pam_unix(sshd:session): session closed for user core
Apr 24 23:38:58.194452 systemd[1]: sshd@11-172.31.28.13:22-20.229.252.112:57920.service: Deactivated successfully.
Apr 24 23:38:58.202826 systemd[1]: session-12.scope: Deactivated successfully.
Apr 24 23:38:58.205194 systemd-logind[2104]: Session 12 logged out. Waiting for processes to exit.
Apr 24 23:38:58.207662 systemd-logind[2104]: Removed session 12.
Apr 24 23:38:58.362834 systemd[1]: Started sshd@12-172.31.28.13:22-20.229.252.112:36568.service - OpenSSH per-connection server daemon (20.229.252.112:36568).
Apr 24 23:38:59.337468 systemd[1]: run-containerd-runc-k8s.io-d148891ff5e00871ada1c3f97716ce0cc0308d0678c80129f97a0021ec9609bd-runc.wPmd1V.mount: Deactivated successfully.
Apr 24 23:38:59.407909 sshd[6817]: Accepted publickey for core from 20.229.252.112 port 36568 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:38:59.412875 sshd[6817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:38:59.432147 systemd-logind[2104]: New session 13 of user core.
Apr 24 23:38:59.438816 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 24 23:39:00.240318 sshd[6817]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:00.248330 systemd[1]: sshd@12-172.31.28.13:22-20.229.252.112:36568.service: Deactivated successfully.
Apr 24 23:39:00.255915 systemd[1]: session-13.scope: Deactivated successfully.
Apr 24 23:39:00.256854 systemd-logind[2104]: Session 13 logged out. Waiting for processes to exit.
Apr 24 23:39:00.261641 systemd-logind[2104]: Removed session 13.
Apr 24 23:39:05.417111 systemd[1]: Started sshd@13-172.31.28.13:22-20.229.252.112:36578.service - OpenSSH per-connection server daemon (20.229.252.112:36578).
Apr 24 23:39:06.458971 sshd[6868]: Accepted publickey for core from 20.229.252.112 port 36578 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:06.462743 sshd[6868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:06.472682 systemd-logind[2104]: New session 14 of user core.
Apr 24 23:39:06.477783 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 24 23:39:07.389668 sshd[6868]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:07.399089 systemd[1]: sshd@13-172.31.28.13:22-20.229.252.112:36578.service: Deactivated successfully.
Apr 24 23:39:07.406695 systemd[1]: session-14.scope: Deactivated successfully.
Apr 24 23:39:07.410211 systemd-logind[2104]: Session 14 logged out. Waiting for processes to exit.
Apr 24 23:39:07.413067 systemd-logind[2104]: Removed session 14.
Apr 24 23:39:07.561565 systemd[1]: Started sshd@14-172.31.28.13:22-20.229.252.112:51288.service - OpenSSH per-connection server daemon (20.229.252.112:51288).
Apr 24 23:39:08.605541 sshd[6905]: Accepted publickey for core from 20.229.252.112 port 51288 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:08.610037 sshd[6905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:08.627125 systemd-logind[2104]: New session 15 of user core.
Apr 24 23:39:08.632614 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 24 23:39:09.793127 sshd[6905]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:09.802616 systemd[1]: sshd@14-172.31.28.13:22-20.229.252.112:51288.service: Deactivated successfully.
Apr 24 23:39:09.808947 systemd[1]: session-15.scope: Deactivated successfully.
Apr 24 23:39:09.809304 systemd-logind[2104]: Session 15 logged out. Waiting for processes to exit.
Apr 24 23:39:09.814280 systemd-logind[2104]: Removed session 15.
Apr 24 23:39:09.966508 systemd[1]: Started sshd@15-172.31.28.13:22-20.229.252.112:51296.service - OpenSSH per-connection server daemon (20.229.252.112:51296).
Apr 24 23:39:10.967292 sshd[6917]: Accepted publickey for core from 20.229.252.112 port 51296 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:10.970187 sshd[6917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:10.979741 systemd-logind[2104]: New session 16 of user core.
Apr 24 23:39:10.987888 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 24 23:39:12.868617 sshd[6917]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:12.875652 systemd[1]: sshd@15-172.31.28.13:22-20.229.252.112:51296.service: Deactivated successfully.
Apr 24 23:39:12.884794 systemd[1]: session-16.scope: Deactivated successfully.
Apr 24 23:39:12.887542 systemd-logind[2104]: Session 16 logged out. Waiting for processes to exit.
Apr 24 23:39:12.891888 systemd-logind[2104]: Removed session 16.
Apr 24 23:39:13.046685 systemd[1]: Started sshd@16-172.31.28.13:22-20.229.252.112:51306.service - OpenSSH per-connection server daemon (20.229.252.112:51306).
Apr 24 23:39:14.102660 sshd[6944]: Accepted publickey for core from 20.229.252.112 port 51306 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:14.105817 sshd[6944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:14.116683 systemd-logind[2104]: New session 17 of user core.
Apr 24 23:39:14.126648 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 24 23:39:15.240390 sshd[6944]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:15.249714 systemd-logind[2104]: Session 17 logged out. Waiting for processes to exit.
Apr 24 23:39:15.250862 systemd[1]: sshd@16-172.31.28.13:22-20.229.252.112:51306.service: Deactivated successfully.
Apr 24 23:39:15.259817 systemd[1]: session-17.scope: Deactivated successfully.
Apr 24 23:39:15.264585 systemd-logind[2104]: Removed session 17.
Apr 24 23:39:15.396752 systemd[1]: Started sshd@17-172.31.28.13:22-20.229.252.112:51310.service - OpenSSH per-connection server daemon (20.229.252.112:51310).
Apr 24 23:39:16.373375 sshd[6956]: Accepted publickey for core from 20.229.252.112 port 51310 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:16.377940 sshd[6956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:16.387320 systemd-logind[2104]: New session 18 of user core.
Apr 24 23:39:16.399806 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 24 23:39:17.161378 sshd[6956]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:17.169708 systemd[1]: sshd@17-172.31.28.13:22-20.229.252.112:51310.service: Deactivated successfully.
Apr 24 23:39:17.176620 systemd-logind[2104]: Session 18 logged out. Waiting for processes to exit.
Apr 24 23:39:17.177950 systemd[1]: session-18.scope: Deactivated successfully.
Apr 24 23:39:17.182588 systemd-logind[2104]: Removed session 18.
Apr 24 23:39:17.341130 systemd[1]: run-containerd-runc-k8s.io-cfd044777a0ffe1dce14c9c237d19b074640ceda4e2c8e83d6c82658e8da84b9-runc.DYsA2h.mount: Deactivated successfully.
Apr 24 23:39:22.330526 systemd[1]: Started sshd@18-172.31.28.13:22-20.229.252.112:45366.service - OpenSSH per-connection server daemon (20.229.252.112:45366).
Apr 24 23:39:23.340416 sshd[6992]: Accepted publickey for core from 20.229.252.112 port 45366 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:23.343261 sshd[6992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:23.352865 systemd-logind[2104]: New session 19 of user core.
Apr 24 23:39:23.360667 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 24 23:39:24.143515 sshd[6992]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:24.152689 systemd[1]: sshd@18-172.31.28.13:22-20.229.252.112:45366.service: Deactivated successfully.
Apr 24 23:39:24.159189 systemd[1]: session-19.scope: Deactivated successfully.
Apr 24 23:39:24.161781 systemd-logind[2104]: Session 19 logged out. Waiting for processes to exit.
Apr 24 23:39:24.164741 systemd-logind[2104]: Removed session 19.
Apr 24 23:39:29.330674 systemd[1]: Started sshd@19-172.31.28.13:22-20.229.252.112:43360.service - OpenSSH per-connection server daemon (20.229.252.112:43360).
Apr 24 23:39:30.403042 sshd[7026]: Accepted publickey for core from 20.229.252.112 port 43360 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:30.409063 sshd[7026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:30.434152 systemd-logind[2104]: New session 20 of user core.
Apr 24 23:39:30.439651 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 24 23:39:31.289187 sshd[7026]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:31.307302 systemd[1]: sshd@19-172.31.28.13:22-20.229.252.112:43360.service: Deactivated successfully.
Apr 24 23:39:31.320346 systemd[1]: session-20.scope: Deactivated successfully.
Apr 24 23:39:31.324020 systemd-logind[2104]: Session 20 logged out. Waiting for processes to exit.
Apr 24 23:39:31.333306 systemd-logind[2104]: Removed session 20.
Apr 24 23:39:33.854022 update_engine[2110]: I20260424 23:39:33.851820 2110 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 24 23:39:33.854022 update_engine[2110]: I20260424 23:39:33.851894 2110 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 24 23:39:33.854022 update_engine[2110]: I20260424 23:39:33.852420 2110 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 24 23:39:33.854022 update_engine[2110]: I20260424 23:39:33.853319 2110 omaha_request_params.cc:62] Current group set to lts
Apr 24 23:39:33.854022 update_engine[2110]: I20260424 23:39:33.853505 2110 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 24 23:39:33.854022 update_engine[2110]: I20260424 23:39:33.853528 2110 update_attempter.cc:643] Scheduling an action processor start.
Apr 24 23:39:33.854022 update_engine[2110]: I20260424 23:39:33.853564 2110 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 24 23:39:33.854022 update_engine[2110]: I20260424 23:39:33.853634 2110 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 24 23:39:33.854022 update_engine[2110]: I20260424 23:39:33.853741 2110 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 24 23:39:33.854022 update_engine[2110]: I20260424 23:39:33.853762 2110 omaha_request_action.cc:272] Request:
Apr 24 23:39:33.854022 update_engine[2110]:
Apr 24 23:39:33.854022 update_engine[2110]:
Apr 24 23:39:33.854022 update_engine[2110]:
Apr 24 23:39:33.854022 update_engine[2110]:
Apr 24 23:39:33.854022 update_engine[2110]:
Apr 24 23:39:33.854022 update_engine[2110]:
Apr 24 23:39:33.854022 update_engine[2110]:
Apr 24 23:39:33.854022 update_engine[2110]:
Apr 24 23:39:33.854022 update_engine[2110]: I20260424 23:39:33.853780 2110 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 24 23:39:33.855458 locksmithd[2165]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 24 23:39:33.863469 update_engine[2110]: I20260424 23:39:33.863386 2110 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 24 23:39:33.864150 update_engine[2110]: I20260424 23:39:33.864058 2110 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 24 23:39:33.867012 update_engine[2110]: E20260424 23:39:33.866935 2110 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 24 23:39:33.867107 update_engine[2110]: I20260424 23:39:33.867076 2110 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 24 23:39:36.455557 systemd[1]: Started sshd@20-172.31.28.13:22-20.229.252.112:49462.service - OpenSSH per-connection server daemon (20.229.252.112:49462).
Apr 24 23:39:37.450026 sshd[7044]: Accepted publickey for core from 20.229.252.112 port 49462 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:37.452627 sshd[7044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:37.464566 systemd-logind[2104]: New session 21 of user core.
Apr 24 23:39:37.473085 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 24 23:39:38.265344 sshd[7044]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:38.272676 systemd[1]: sshd@20-172.31.28.13:22-20.229.252.112:49462.service: Deactivated successfully.
Apr 24 23:39:38.280778 systemd-logind[2104]: Session 21 logged out. Waiting for processes to exit.
Apr 24 23:39:38.281977 systemd[1]: session-21.scope: Deactivated successfully.
Apr 24 23:39:38.285268 systemd-logind[2104]: Removed session 21.
Apr 24 23:39:43.445489 systemd[1]: Started sshd@21-172.31.28.13:22-20.229.252.112:49476.service - OpenSSH per-connection server daemon (20.229.252.112:49476).
Apr 24 23:39:43.851597 update_engine[2110]: I20260424 23:39:43.850826 2110 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 24 23:39:43.851597 update_engine[2110]: I20260424 23:39:43.851209 2110 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 24 23:39:43.851597 update_engine[2110]: I20260424 23:39:43.851508 2110 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 24 23:39:43.852818 update_engine[2110]: E20260424 23:39:43.852689 2110 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 24 23:39:43.852818 update_engine[2110]: I20260424 23:39:43.852777 2110 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 24 23:39:44.459743 sshd[7079]: Accepted publickey for core from 20.229.252.112 port 49476 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:44.464386 sshd[7079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:44.473254 systemd-logind[2104]: New session 22 of user core.
Apr 24 23:39:44.477550 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 24 23:39:45.271291 sshd[7079]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:45.278604 systemd[1]: sshd@21-172.31.28.13:22-20.229.252.112:49476.service: Deactivated successfully.
Apr 24 23:39:45.289029 systemd[1]: session-22.scope: Deactivated successfully.
Apr 24 23:39:45.291199 systemd-logind[2104]: Session 22 logged out. Waiting for processes to exit.
Apr 24 23:39:45.294652 systemd-logind[2104]: Removed session 22.
Apr 24 23:39:47.339877 systemd[1]: run-containerd-runc-k8s.io-cfd044777a0ffe1dce14c9c237d19b074640ceda4e2c8e83d6c82658e8da84b9-runc.OKJfXJ.mount: Deactivated successfully.